Convergence of multipoint Pade approximants of piecewise analytic functions
Buslaev, Viktor I
2013-02-28
The behaviour as n → ∞ of multipoint Pade approximants to a function which is (piecewise) holomorphic on a union of finitely many continua is investigated. The convergence of multipoint Pade approximants is proved for a function which extends holomorphically from these continua to a union of domains whose boundaries have a certain symmetry property. An analogue of Stahl's theorem is established for two-point Pade approximants to a pair of functions, either of which is a multivalued analytic function with finitely many branch points. Bibliography: 11 titles.
Unfolding the Second Riemann sheet with Pade Approximants: hunting resonance poles
Masjuan, Pere
2011-05-23
Based on Pade Theory, a new procedure for extracting the pole mass and width of resonances is proposed. The method is systematic and provides a model-independent treatment for the prediction and the errors of the approximation.
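The pole-hunting idea can be illustrated with a minimal sketch (made-up pole position and plain [N/1] approximants, not the paper's systematic procedure): for a single-pole amplitude, the denominator zero of successive [N/1] Padé approximants built from Taylor coefficients converges to the pole, by de Montessus de Ballore's theorem.

```python
# Toy resonance pole recovered with Pade approximants (hypothetical numbers,
# not the paper's procedure). For a single-pole amplitude f(s) = 1/(sp - s),
# the denominator zero of the [N/1] approximant built from the Taylor
# coefficients converges to the pole (de Montessus de Ballore).
sp = 0.77 - 0.07j  # hypothetical pole: s_p ~ M**2 - i*M*Gamma

# Taylor coefficients of f around s = 0: c_k = sp**-(k+1)
c = [sp ** (-(k + 1)) for k in range(6)]

# The [N/1] denominator zero is the coefficient ratio c_{N-1}/c_N.
pole_estimates = [c[k] / c[k + 1] for k in range(5)]
print(pole_estimates[-1])  # recovers sp
```

For a multi-pole amplitude the same construction with higher-order denominators locates the pole nearest the expansion point first.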
Padé approximants and their application to scattering from fluid media.
Denis, Max; Tsui, Jing; Thompson, Charles; Chandra, Kavitha
2010-11-01
In this work, a numerical method for modeling the scattered acoustic pressure from fluid occlusions is described. The method is based on the asymptotic series expansion of the pressure expressed in terms of sound speed contrast between the host medium and entrained fluid occlusions. Padé approximants are used to extend the applicability of the result for larger values of sound speed contrast. For scattering from a circular cylinder, an improvement in convergence between the exact and numerical solutions is demonstrated. In the case of scattering from an inhomogeneous medium, a numerical solution with reduced order of Padé approximants is presented.
Asymptotic Pade Approximant Predictions: Up to Five Loops in QCD and SQCD
Samuel, Mark A.
2003-05-16
We use Asymptotic Pade Approximants (APAP's) to predict the four- and five-loop β functions in QCD and N = 1 supersymmetric QCD (SQCD), as well as the quark mass anomalous dimensions in Abelian and non-Abelian gauge theories. We show how the accuracy of our previous β-function predictions at the four-loop level may be further improved by using estimators weighted over negative numbers of flavours (WAPAP's). The accuracy of the improved four-loop results encourages confidence in the new five-loop β-function predictions that we present. However, the WAPAP approach does not provide improved results for the anomalous mass dimension, or for Abelian theories.
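The basic Padé-prediction step that APAP refines can be sketched as follows (a generic [1/1] textbook example with made-up coefficients, not the APAP weighting itself): the approximant fitted to the known low-order coefficients implies a value for the next, unknown one.

```python
# Naive Pade prediction of the next series coefficient: fit a [1/1] approximant
# f(x) = (a0 + a1*x)/(1 + b1*x) to known coefficients c0, c1, c2 and read off
# the c3 it implies. (A generic textbook step, not the APAP weighting itself.)
def pade_11_predict(c0, c1, c2):
    # Matching (1 + b1*x)*(c0 + c1*x + c2*x**2 + ...) to a degree-1 numerator
    # gives b1 = -c2/c1 at order x**2, and then c3 = -b1*c2 = c2**2/c1.
    return c2 ** 2 / c1

# Exact for a geometric-type series c_k = c0 * r**k (here r = 0.5):
print(pade_11_predict(1.0, 0.5, 0.25))  # -> 0.125
```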
PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs includes: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST
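The core numerical task described above, exponentiating a stiff Markov generator, can be sketched with a scaling-and-squaring Taylor evaluation (the idea behind STEM; this is an illustrative toy with hypothetical rates, not the PAWS/STEM code):

```python
import numpy as np

def expm_scaled(A, order=12):
    """Matrix exponential by scaling and squaring with a truncated Taylor
    series (the idea behind STEM; not the actual PAWS/STEM implementation)."""
    norm = np.linalg.norm(A, 1)
    s = max(0, int(np.ceil(np.log2(norm)))) if norm > 0 else 0
    B = A / (2.0 ** s)                 # scale so the series converges quickly
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, order + 1):      # truncated Taylor series of exp(B)
        term = term @ B / k
        E = E + term
    for _ in range(s):                 # undo the scaling by repeated squaring
        E = E @ E
    return E

# Hypothetical stiff Markov model: rare faults (1e-4/h), fast recovery (1e3/h),
# death on a second fault during recovery. States: 0 ok, 1 recovering, 2 failed.
lam, mu = 1e-4, 1e3
Q = np.array([[-lam, lam, 0.0],
              [mu, -(mu + lam), lam],
              [0.0, 0.0, 0.0]])
P = expm_scaled(Q * 10.0)   # transition probabilities over a 10-hour mission
print(P[0, 2])              # probability of entering the death state
```

The seven-orders-of-magnitude gap between `lam` and `mu` is exactly the stiffness the abstract refers to; scaling by 2^s tames it before the Taylor series is applied.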
PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs includes: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST
NASA Technical Reports Server (NTRS)
Vepa, R.
1976-01-01
The general behavior of unsteady airloads in the frequency domain is explained. Based on this, a systematic procedure is described whereby the airloads, produced by completely arbitrary, small, time-dependent motions of a thin lifting surface in an airstream, can be predicted. This scheme employs as raw materials any of the unsteady linearized theories that have been mechanized for simple harmonic oscillations. Each desired aerodynamic transfer function is approximated by means of an appropriate Pade approximant, that is, a rational function of finite degree polynomials in the Laplace transform variable. Although these approximations have many uses, they are proving especially valuable in the design of automatic control systems intended to modify aeroelastic behavior.
A hybrid Pade-Galerkin technique for differential equations
NASA Technical Reports Server (NTRS)
Geer, James F.; Andersen, Carl M.
1993-01-01
A three-step hybrid analysis technique, which successively uses the regular perturbation expansion method, the Pade expansion method, and then a Galerkin approximation, is presented and applied to some model boundary value problems. In the first step of the method, the regular perturbation method is used to construct an approximation to the solution in the form of a finite power series in a small parameter epsilon associated with the problem. In the second step of the method, the series approximation obtained in step one is used to construct a Pade approximation in the form of a rational function in the parameter epsilon. In the third step, the various powers of epsilon which appear in the Pade approximation are replaced by new (unknown) parameters δ_j. These new parameters are determined by requiring that the residual formed by substituting the new approximation into the governing differential equation is orthogonal to each of the perturbation coordinate functions used in step one. The technique is applied to model problems involving ordinary or partial differential equations. In general, the technique appears to provide good approximations to the solution even when the perturbation and Pade approximations fail to do so. The method is discussed and topics for future investigations are indicated.
Constraints to Dark Energy Using PADE Parameterizations
NASA Astrophysics Data System (ADS)
Rezaei, M.; Malekjani, M.; Basilakos, S.; Mehrabi, A.; Mota, D. F.
2017-07-01
We put constraints on dark energy (DE) properties using the Padé parameterization and compare them to the same constraints using the Chevallier-Polarski-Linder (CPL) and ΛCDM parameterizations, at both the background and the perturbation levels. The DE equation-of-state parameter of the models is derived following the mathematical treatment of the Padé expansion. Unlike the CPL parameterization, the Padé approximation provides forms of the equation-of-state parameter that avoid a divergence in the far future. Initially we perform a likelihood analysis in order to put constraints on the model parameters using solely background expansion data, and we find that all parameterizations are consistent with each other. Then, combining the expansion and the growth rate data, we test the viability of the Padé parameterizations and compare them with the CPL and ΛCDM models, respectively. Specifically, we find that the growth rate of the current Padé parameterizations is lower than that of the ΛCDM model at low redshifts, while the differences among the models are negligible at high redshifts. In this context, we provide for the first time a growth index of linear matter perturbations in Padé cosmologies. Considering that DE is homogeneous, we recover the well-known asymptotic value of the growth index, γ_∞ = 3(w_∞ − 1)/(6w_∞ − 5), while in the case of clustered DE we obtain γ_∞ ≃ 3w_∞(3w_∞ − 5)/[(6w_∞ − 5)(3w_∞ − 1)]. Finally, we generalize the growth index analysis to the case where γ is allowed to vary with redshift, and we find that the form of γ(z) in the Padé parameterization extends that of the CPL and ΛCDM cosmologies, respectively.
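As a quick sanity check on the asymptotic growth-index formulas quoted in the abstract (a sketch; w_∞ = −1 is simply the cosmological-constant reference point):

```python
# Quick check of the asymptotic growth-index formulas quoted above.
def gamma_inf_homogeneous(w):
    # gamma_inf = 3(w - 1)/(6w - 5) for homogeneous dark energy
    return 3.0 * (w - 1.0) / (6.0 * w - 5.0)

def gamma_inf_clustered(w):
    # gamma_inf ~ 3w(3w - 5)/[(6w - 5)(3w - 1)] for clustered dark energy
    return 3.0 * w * (3.0 * w - 5.0) / ((6.0 * w - 5.0) * (3.0 * w - 1.0))

# At w = -1 (the cosmological-constant limit) both reduce to 6/11 ~ 0.545.
print(gamma_inf_homogeneous(-1.0), gamma_inf_clustered(-1.0))
```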
An analytic Pade-motivated QCD coupling
Martinez, H. E.; Cvetic, G.
2010-08-04
We consider a modification of the Minimal Analytic (MA) coupling of Shirkov and Solovtsov. This modified MA (mMA) coupling reflects the desired analytic properties of the space-like observables. We show that an approximation by Dirac deltas of its discontinuity function ρ is equivalent to a Padé (rational) approximation of the mMA coupling that keeps its analytic structure. We propose a modification to mMA that, as preliminary results indicate, could improve the evaluation of low-energy observables compared with other analytic couplings.
Sokolovski, D.; Msezane, A.Z.
2004-09-01
A semiclassical complex angular momentum theory, used to analyze atom-diatom reactive angular distributions, is applied to several well-known potential (one-particle) problems. Examples include resonance scattering, rainbow scattering, and the Eckart threshold model. Pade reconstruction of the corresponding matrix elements from the values at physical (integral) angular momenta and properties of the Pade approximants are discussed in detail.
Lin, Ying-Tsong; Collis, Jon M; Duda, Timothy F
2012-11-01
An alternating direction implicit (ADI) three-dimensional fluid parabolic equation solution method with enhanced accuracy is presented. The method uses a square-root Helmholtz operator splitting algorithm that retains cross-multiplied operator terms that have been previously neglected. With these higher-order cross terms, the valid angular range of the parabolic equation solution is improved. The method is tested for accuracy against an image solution in an idealized wedge problem. Computational efficiency improvements resulting from the ADI discretization are also discussed.
NASA Astrophysics Data System (ADS)
Chishtie, Farrukh Ahmed
Pade approximants (PA) have been widely applied in practically all areas of physics. This thesis focuses on developing PA as tools for both perturbative and non-perturbative quantum field theory (QFT). In perturbative QFT, we systematically estimate higher (unknown) loop terms via the asymptotic formula devised by Samuel et al. This algorithm, generally denoted as the asymptotic Pade approximation procedure (APAP), has greatly enhanced scope when it is applied to renormalization-group-(RG-)invariant quantities. A presently unknown higher-loop quantity can then be matched with the approximant over the entire momentum region of phenomenological interest. Furthermore, the predicted value of the RG coefficients can be compared with the RG-accessible coefficients (at the higher-loop order), allowing a clearer indication of the accuracy of the predicted RG-inaccessible term. This methodology is applied to hadronic Higgs decay rates (H → b b̄ and H → gg, both within the Standard Model and its MSSM extension), Higgs-sector cross-sections (W_L⁺ W_L⁻ → Z_L Z_L), inclusive semileptonic b → u decays (leading to reduced theoretical uncertainties in the extraction of |V_ub|), QCD (Quantum Chromodynamics) correlation functions (scalar-fermionic, scalar-gluonic and vector correlators) and the QCD static potential. APAP is also applied directly to RG beta- and gamma-functions in massive φ⁴ theory. In non-perturbative QFT we use Pade summation methods to probe the large-coupling regions of QCD. In analysing all the possible Pade approximants to the truncated beta-function for QCD, we are able to probe the singularity structure corresponding to the all-orders beta-function. Noting the consistent ordering of poles and roots for such approximants (regardless of the next unknown higher-loop contribution), we conclude that these approximants are free of defective (pole) behaviour and hence we can safely draw physical conclusions from them. QCD is shown to have a flavour threshold (6
PaDe - The particle detection program
NASA Astrophysics Data System (ADS)
Ott, T.; Drolshagen, E.; Koschny, D.; Poppe, B.
2016-01-01
This paper introduces the Particle Detection program PaDe. Its aim is to analyze dust particles in the coma of the Jupiter-family comet 67P/Churyumov-Gerasimenko which were recorded by the two OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) cameras onboard the ESA spacecraft Rosetta, see e.g. Keller et al. (2007). In addition to working with the Rosetta data, the code was modified to work with images of meteors. It was tested with data recorded by the ICCs (Intensified CCD Cameras) of the CILBO system (Canary Island Long-Baseline Observatory) on the Canary Islands; compare Koschny et al. (2013). This paper presents a new method for the position determination of observed meteors. The PaDe program was written in Python 3.4. Its original purpose is to find the trails of dust particles in space in the OSIRIS images, for which it determines the positions where a trail starts and ends. These are found by fitting the so-called error function (Andrews, 1998) to the two edges of the intensity profile; the positions where the intensity falls to half maximum are taken as the beginning and end of the particle. In the case of meteors, this method can be applied to find the leading edge of the meteor. The proposed method has the potential to increase the accuracy of the position determination of meteors dramatically. Unlike the standard method of finding the photometric center, our method is not influenced by any trails or wakes behind the meteor. This paper presents first results of this ongoing work.
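The half-maximum edge criterion described above can be sketched on synthetic data (an idealized noise-free erf profile with made-up parameters, not OSIRIS or CILBO data):

```python
import math

# Synthetic 1-D edge profile: intensity rises like an error function centred
# at x0, mimicking the leading edge of a particle or meteor trail.
# All numbers here are hypothetical.
x0, width, peak = 12.3, 1.5, 200.0
xs = [0.1 * i for i in range(300)]
profile = [0.5 * peak * (1.0 + math.erf((x - x0) / width)) for x in xs]

# Half-maximum crossing: first sample at or above peak/2, refined by linear
# interpolation between the two bracketing samples.
i = next(k for k, v in enumerate(profile) if v >= 0.5 * peak)
frac = (0.5 * peak - profile[i - 1]) / (profile[i] - profile[i - 1])
x_half = xs[i - 1] + frac * (xs[i] - xs[i - 1])
print(x_half)  # close to x0: an erf passes half maximum at its centre
```

Because the erf is antisymmetric about its centre, the half-maximum point is insensitive to the edge width, which is what makes the criterion robust.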
Random-Phase Approximation Methods
NASA Astrophysics Data System (ADS)
Chen, Guo P.; Voora, Vamsee K.; Agee, Matthew M.; Balasubramani, Sree Ganesh; Furche, Filipp
2017-05-01
Random-phase approximation (RPA) methods are rapidly emerging as cost-effective validation tools for semilocal density functional computations. We present the theoretical background of RPA in an intuitive rather than formal fashion, focusing on the physical picture of screening and simple diagrammatic analysis. A new decomposition of the RPA correlation energy into plasmonic modes leads to an appealing visualization of electron correlation in terms of charge density fluctuations. Recent developments in the areas of beyond-RPA methods, RPA correlation potentials, and efficient algorithms for RPA energy and property calculations are reviewed. The ability of RPA to approximately capture static correlation in molecules is quantified by an analysis of RPA natural occupation numbers. We illustrate the use of RPA methods in applications to small-gap systems such as open-shell d- and f-element compounds, radicals, and weakly bound complexes, where semilocal density functional results exhibit strong functional dependence.
Potential of the approximation method
Amano, K.; Maruoka, A.
1996-12-31
Developing some techniques for the approximation method, we establish precise versions of the following statements concerning lower bounds for circuits that detect cliques of size s in a graph with m vertices. For 5 ≤ s ≤ m/4, a monotone circuit computing CLIQUE(m, s) contains at least (1/2)·1.8^min(√s − 1/2, m/(4s)) gates. If a non-monotone circuit computes CLIQUE using a "small" amount of negation, then the circuit contains an exponential number of gates. The former is proved very simply using the so-called bottleneck counting argument within the framework of approximation, whereas the latter is verified by introducing a notion of restricting negation and generalizing the sunflower contraction.
Approximation method for the kinetic Boltzmann equation
NASA Technical Reports Server (NTRS)
Shakhov, Y. M.
1972-01-01
The further development of a method for approximating the Boltzmann equation is considered, and the case of pseudo-Maxwellian molecules is treated in detail. A method of approximating the collision frequency is discussed, along with a method for approximating the moments of the Boltzmann collision integral. Since the return collision integral and the collision frequency are expressed through the distribution function moments, use of the proposed methods makes it possible to reduce the Boltzmann equation to a series of approximating equations.
Differential Equations, Related Problems of Pade Approximations and Computer Applications
1988-01-01
geometric sense, like the Picard-Fuchs equations satisfied by the variation of periods, possess strong arithmetic properties (global nilpotence ... result, and the (G, C)-function conditions, one needs the definition of the p-curvature. We consider a system of matrix first-order linear differential ... the system (1.1) in the matrix form df/dx = Af; A ∈ M(Q(x)), one can introduce the p-curvature operators I_p associated with the system (1.1). The
Approximate methods for equations of incompressible fluid
NASA Astrophysics Data System (ADS)
Galkin, V. A.; Dubovik, A. O.; Epifanov, A. A.
2017-02-01
Approximate methods based on sequential approximations in the theory of functional solutions to systems of conservation laws are considered, including the model of the dynamics of an incompressible fluid. Test calculations are performed, and a comparison with exact solutions is carried out.
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
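A minimal sketch of the subset-of-rays idea (a toy dense matrix standing in for a ray transform; the sizes and the conjugate-gradient variant are assumptions, not the patented implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tomography-style least-squares problem: each "ray" is one row of A.
A = rng.normal(size=(200, 20))
x_true = rng.normal(size=20)
b = A @ x_true

# Key idea from the abstract: evaluate the error on a subset of rays only,
# making each iteration cheaper than touching all 200 rays.
rows = rng.choice(200, size=60, replace=False)
As, bs = A[rows], b[rows]

# Conjugate gradient minimization of the approximate error 0.5*||As x - bs||^2.
x = np.zeros(20)
g = As.T @ (As @ x - bs)              # gradient of the approximate error
d = -g
for _ in range(40):
    if float(g @ g) < 1e-20:          # converged
        break
    Ad = As @ d
    alpha = float(g @ g) / float(Ad @ Ad)   # exact minimum along direction d
    x = x + alpha * d
    g_new = g + alpha * (As.T @ Ad)
    beta = float(g_new @ g_new) / float(g @ g)
    d = -g_new + beta * d
    g = g_new
print(float(np.linalg.norm(x - x_true)))  # small: the subset already pins down x
```

Here the subsampled problem is still overdetermined (60 rays, 20 unknowns) and consistent, so minimizing the approximate error recovers the same solution at a fraction of the per-iteration cost.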
Approximation methods in gravitational-radiation theory
NASA Technical Reports Server (NTRS)
Will, C. M.
1986-01-01
The observation of gravitational-radiation damping in the binary pulsar PSR 1913 + 16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. Recent developments are summarized in two areas in which approximations are important: (a) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (b) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.
Variational Bayesian Approximation methods for inverse problems
NASA Astrophysics Data System (ADS)
Mohammad-Djafari, Ali
2012-09-01
Variational Bayesian Approximation (VBA) methods are recent tools for effective Bayesian computations. In this paper, these tools are used for inverse problems where the prior models include hidden variables and where the estimation of the hyperparameters also has to be addressed. In particular, two specific prior models (Student-t and mixture of Gaussian models) are considered and details of the algorithms are given.
An approximate projection method for incompressible flow
NASA Astrophysics Data System (ADS)
Stevens, David E.; Chan, Stevens T.; Gresho, Phil
2002-12-01
This paper presents an approximate projection method for incompressible flows. This method is derived from Galerkin orthogonality conditions using equal-order piecewise linear elements for both velocity and pressure, hereafter Q1Q1. By combining an approximate projection for the velocities with a variational discretization of the continuum pressure Poisson equation, one eliminates the need to filter either the velocity or pressure fields as is often needed with equal-order element formulations. This variational approach extends to multiple types of elements; examples and results for triangular and quadrilateral elements are provided. This method is related to the method of Almgren et al. (SIAM J. Sci. Comput. 2000; 22: 1139-1159) and the PISO method of Issa (J. Comput. Phys. 1985; 62: 40-65). These methods use a combination of two elliptic solves, one to reduce the divergence of the velocities and another to approximate the pressure Poisson equation. Both Q1Q1 and the method of Almgren et al. solve the second Poisson equation with a weak error tolerance to achieve more computational efficiency. A Fourier analysis of Q1Q1 shows that a consistent mass matrix has a positive effect on both accuracy and mass conservation. A numerical comparison with the widely used Q1Q0 (piecewise linear velocities, piecewise constant pressures) on a periodic test case with an analytic solution verifies this analysis. Q1Q1 is shown to have accuracy comparable to Q1Q0 and good agreement with experiment for flow over an isolated cubic obstacle and dispersion of a point source in its wake.
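The projection building block that such methods refine can be sketched in a periodic spectral setting (a plain Helmholtz decomposition via one Poisson solve; the paper's finite-element and approximate-projection details are not reproduced):

```python
import numpy as np

# Spectral sketch of one projection step on a periodic box: subtract the
# gradient part of a velocity field by solving a single pressure-Poisson
# problem. (Only the underlying decomposition idea, not the paper's method.)
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

# Divergence-free field plus a pure gradient (of -cos x - cos y):
u = np.cos(X) * np.sin(Y) + np.sin(X)
v = -np.sin(X) * np.cos(Y) + np.sin(Y)

k = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers on [0, 2*pi)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx ** 2 + ky ** 2
k2[0, 0] = 1.0                            # leave the mean mode untouched

uh, vh = np.fft.fft2(u), np.fft.fft2(v)
phi_h = (1j * kx * uh + 1j * ky * vh) / (-k2)   # laplacian(phi) = div(u, v)
u_p = np.real(np.fft.ifft2(uh - 1j * kx * phi_h))  # u - grad(phi)
v_p = np.real(np.fft.ifft2(vh - 1j * ky * phi_h))
print(np.abs(u_p - np.cos(X) * np.sin(Y)).max())  # ~0: gradient part removed
```

An *approximate* projection in the paper's sense solves this Poisson problem only to a weak tolerance, trading a small residual divergence for computational efficiency.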
Finite difference methods for approximating Heaviside functions
NASA Astrophysics Data System (ADS)
Towers, John D.
2009-05-01
We present a finite difference method for discretizing a Heaviside function H(u(x)), where u is a level set function u: ℝⁿ → ℝ that is positive on a bounded region Ω ⊂ ℝⁿ. There are two variants of our algorithm, both of which are adapted from finite difference methods that we proposed for discretizing delta functions in [J.D. Towers, Two methods for discretizing a delta function supported on a level set, J. Comput. Phys. 220 (2007) 915-931; J.D. Towers, Discretizing delta functions via finite differences and gradient normalization, Preprint at http://www.miracosta.edu/home/jtowers/; J.D. Towers, A convergence rate theorem for finite difference approximations to delta functions, J. Comput. Phys. 227 (2008) 6591-6597]. We consider our approximate Heaviside functions as they are used to approximate integrals over Ω. We prove that our first approximate Heaviside function leads to second order accurate quadrature algorithms. Numerical experiments verify this second order accuracy. For our second algorithm, numerical experiments indicate at least third order accuracy if the integrand f and ∂Ω are sufficiently smooth. Numerical experiments also indicate that our approximations are effective when used to discretize certain singular source terms in partial differential equations. We mostly focus on smooth f and u. By this we mean that f is smooth in a neighborhood of Ω, u is smooth in a neighborhood of ∂Ω, and the level set u(x) = 0 is a manifold of codimension one. However, our algorithms still give reasonable results if either f or u has jumps in its derivatives. Numerical experiments indicate approximately second order accuracy for both algorithms if the regularity of the data is reduced in this way, assuming that the level set u(x) = 0 is a manifold. Numerical experiments indicate that dependence on the placement of Ω with respect to the grid is quite small for our algorithms. Specifically, a grid shift results in an O(h^p) change in the computed solution
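The H(u)-to-quadrature idea can be sketched with a simple smoothed Heaviside (a tanh smoothing, not the paper's finite-difference construction): integrating H(u) over a grid approximates the measure of the region where the level set function u is positive.

```python
import math

# Sketch of the H(u) -> quadrature idea with a tanh-smoothed Heaviside
# (NOT the paper's finite-difference construction): integrating H(u) on a
# grid approximates the area of the region where u > 0.
h = 0.01                       # grid spacing on [-1.5, 1.5]^2
eps = 2.0 * h                  # smoothing width, a common heuristic choice

def H(u):
    return 0.5 * (1.0 + math.tanh(u / eps))

n = 300
area = 0.0
for i in range(n):
    x = -1.5 + (i + 0.5) * h   # midpoint rule in x
    for j in range(n):
        y = -1.5 + (j + 0.5) * h
        area += H(1.0 - x * x - y * y) * h * h   # u > 0 inside the unit disk
print(area)                    # close to pi, the area of the unit disk
```

The symmetric smoothing makes the leading-order error cancel across the boundary, which is the effect the paper's carefully constructed discretizations push to second and third order.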
Approximation methods for stochastic petri nets
NASA Technical Reports Server (NTRS)
Jungnitz, Hauke Joerg
1992-01-01
Stochastic Marked Graphs are a concurrent decision free formalism provided with a powerful synchronization mechanism generalizing conventional Fork Join Queueing Networks. In some particular cases the analysis of the throughput can be done analytically. Otherwise the analysis suffers from the classical state explosion problem. Embedded in the divide and conquer paradigm, approximation techniques are introduced for the analysis of stochastic marked graphs and Macroplace/Macrotransition-nets (MPMT-nets), a new subclass introduced herein. MPMT-nets are a subclass of Petri nets that allow limited choice, concurrency and sharing of resources. The modeling power of MPMT-nets is much larger than that of marked graphs, e.g., MPMT-nets can model manufacturing flow lines with unreliable machines and dataflow graphs where choice and synchronization occur. The basic idea leads to the notion of a cut to split the original net system into two subnets. The cuts lead to two aggregated net systems where one of the subnets is reduced to a single transition. A further reduction leads to a basic skeleton. The generalization of the idea leads to multiple cuts, where single cuts can be applied recursively leading to a hierarchical decomposition. Based on the decomposition, a response time approximation technique for the performance analysis is introduced. Also, delay equivalence, which has previously been introduced in the context of marked graphs by Woodside et al., Marie's method, and flow equivalent aggregation are applied to the aggregated net systems. The experimental results show that response time approximation converges quickly and shows reasonable accuracy in most cases. The convergence of Marie's method is slower, but the accuracy is generally better. Delay
Approximate Methods for State-Space Models.
Koyama, Shinsuke; Pérez-Bolde, Lucia Castellanos; Shalizi, Cosma Rohilla; Kass, Robert E
2010-03-01
State-space models provide an important body of techniques for analyzing time-series, but their use requires estimating unobserved states. The optimal estimate of the state is its conditional expectation given the observation histories, and computing this expectation is hard when there are nonlinearities. Existing filtering methods, including sequential Monte Carlo, tend to be either inaccurate or slow. In this paper, we study a nonlinear filter for nonlinear/non-Gaussian state-space models, which uses Laplace's method, an asymptotic series expansion, to approximate the state's conditional mean and variance, together with a Gaussian conditional distribution. This Laplace-Gaussian filter (LGF) gives fast, recursive, deterministic state estimates, with an error which is set by the stochastic characteristics of the model and is, we show, stable over time. We illustrate the estimation ability of the LGF by applying it to the problem of neural decoding and compare it to sequential Monte Carlo both in simulations and with real data. We find that the LGF can deliver superior results in a small fraction of the computing time.
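The Laplace-approximation step at the heart of the LGF can be sketched for a single scalar state with a Poisson observation (a toy version; the model and the numbers are assumptions, not the paper's neural-decoding setup):

```python
import math

# One Laplace-approximation update, the core step of the LGF, for a scalar
# state with Gaussian prior N(m, P) and observation y ~ Poisson(exp(x)).
# (A toy version; the model and numbers are assumptions, not the paper's
# neural-decoding setup.)
def laplace_update(m, P, y, iters=25):
    x = m
    hess = -1.0 / P - math.exp(x)
    for _ in range(iters):                 # Newton's method on the log-posterior
        grad = -(x - m) / P + (y - math.exp(x))
        hess = -1.0 / P - math.exp(x)
        x -= grad / hess                   # hess < 0, so this climbs uphill
    return x, -1.0 / hess                  # Gaussian: mean ~ mode, var ~ -1/hess

mode, var = laplace_update(0.0, 1.0, 3)
print(mode, var)   # mode solves (x - m)/P = y - exp(x); var is positive
```

In a full filter this update is applied recursively: the Gaussian returned at one time step becomes the prior for the next, which is what makes the LGF fast and deterministic compared with sequential Monte Carlo.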
Hu, Jie; Luo, Meng; Jiang, Feng; Xu, Rui-Xue; Yan, Yijing
2011-06-28
Padé spectrum decomposition is an optimal sum-over-poles expansion scheme for the Fermi and Bose functions [J. Hu, R. X. Xu, and Y. J. Yan, J. Chem. Phys. 133, 101106 (2010)]. In this work, we report two additional members of this family, from which the best among all sum-over-poles methods can be chosen for different cases of application. Methods are developed for determining these three Padé spectrum decomposition expansions at machine precision via simple algorithms. We exemplify the applications of the present development with the optimal construction of hierarchical equations-of-motion formulations for nonperturbative quantum dissipation and quantum transport dynamics. Numerical demonstrations are given for two systems. One is the transient transport current through an interacting quantum-dot system, together with the involved high-order co-tunneling dynamics. The other is the non-Markovian dynamics of a spin-boson system.
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
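The DEB idea can be illustrated on a toy model: for a cantilever whose tip deflection scales as u ∝ h⁻³ in the section height h (a hypothetical scaling, not the authors' beam equations), the sensitivity du/dh = -3u/h is itself a solvable differential equation, and integrating it in closed form beats the linear Taylor expansion built from the same sensitivity.

```python
def deb_approx(u0, h0, h):
    """Differential-equation-based update: treat the sensitivity relation
    du/dh = -3u/h as an ODE and solve it exactly, giving u(h) = u0*(h0/h)**3
    (assumed tip-deflection scaling for a beam of height h)."""
    return u0 * (h0 / h) ** 3

def taylor_approx(u0, h0, h):
    """Linear Taylor series about h0 using the same sensitivity du/dh|h0 = -3u0/h0."""
    return u0 * (1.0 - 3.0 * (h - h0) / h0)

u0, h0 = 1.0, 1.0
h = 1.2                       # 20% perturbation of the beam height
exact = u0 * (h0 / h) ** 3    # true response for this toy scaling
```

For this model the DEB approximation is exact, while the Taylor estimate (0.4) misses the true value (about 0.579) substantially at a 20% perturbation.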
A new approximation method for stress constraints in structural synthesis
NASA Technical Reports Server (NTRS)
Vanderplaats, Garret N.; Salajegheh, Eysa
1987-01-01
A new approximation method for dealing with stress constraints in structural synthesis is presented. The finite element nodal forces are approximated and these are used to create an explicit, but often nonlinear, approximation to the original problem. The principal motivation is to create the best approximation possible, in order to reduce the number of detailed finite element analyses needed to reach the optimum. Examples are offered and compared with published results, to demonstrate the efficiency and reliability of the proposed method.
Sensitivity analysis and approximation methods for general eigenvalue problems
NASA Technical Reports Server (NTRS)
Murthy, D. V.; Haftka, R. T.
1986-01-01
Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on the trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of an appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest, and the number of design points at which approximations are sought.
Approximate inverse preconditioning of iterative methods for nonsymmetric linear systems
Benzi, M.; Tuma, M.
1996-12-31
A method for computing an incomplete factorization of the inverse of a nonsymmetric matrix A is presented. The resulting factorized sparse approximate inverse is used as a preconditioner in the iterative solution of Ax = b by Krylov subspace methods.
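A simple way to see why an approximate inverse makes a good preconditioner is the truncated Neumann series, a crude dense stand-in for the sparse factorized approximate inverses of the abstract (this is an illustration of the principle, not Benzi and Tuma's algorithm): for ||I - A|| < 1, summing a few powers of I - A leaves a residual I - MA that shrinks geometrically.

```python
import numpy as np

def neumann_approx_inverse(A, m):
    """Approximate inverse via a truncated Neumann series:
    A^{-1} ~ sum_{k=0}^{m} (I - A)^k, valid when ||I - A|| < 1.
    With this M, I - M@A = (I - A)^(m+1), so the preconditioned
    residual norm decays geometrically in m."""
    n = A.shape[0]
    I = np.eye(n)
    E = I - A
    M = I.copy()
    term = I.copy()
    for _ in range(m):
        term = term @ E
        M += term
    return M

rng = np.random.default_rng(0)
n = 30
# Diagonally dominant test matrix, scaled so that ||I - A|| < 1.
A = np.eye(n) + 0.3 * rng.random((n, n)) / n
M = neumann_approx_inverse(A, 5)
residual = np.linalg.norm(np.eye(n) - M @ A)  # small => effective preconditioner
```

In practice the approximate inverse is kept sparse by dropping small entries, which is what makes it cheap to apply inside a Krylov iteration.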
An approximation method for configuration optimization of trusses
NASA Technical Reports Server (NTRS)
Hansen, Scott R.; Vanderplaats, Garret N.
1988-01-01
Two- and three-dimensional elastic trusses are designed for minimum weight by varying the areas of the members and the location of the joints. Constraints on member stresses and Euler buckling are imposed and multiple static loading conditions are considered. The method presented here utilizes an approximate structural analysis based on first order Taylor series expansions of the member forces. A numerical optimizer minimizes the weight of the truss using information from the approximate structural analysis. Comparisons with results from other methods are made. It is shown that the method of forming an approximate structural analysis based on linearized member forces leads to a highly efficient method of truss configuration optimization.
Discontinuous Galerkin method based on non-polynomial approximation spaces
Yuan, Ling (lyuan@dam.brown.edu); Shu, Chi-Wang (shu@dam.brown.edu)
2006-10-10
In this paper, we develop discontinuous Galerkin (DG) methods based on non-polynomial approximation spaces for numerically solving time-dependent hyperbolic and parabolic equations and steady-state hyperbolic and elliptic partial differential equations (PDEs). The algorithm is based on approximation spaces consisting of non-polynomial elementary functions such as exponential functions, trigonometric functions, etc., with the objective of obtaining better approximations for specific types of PDEs and initial and boundary conditions. It is shown that L^2 stability and error estimates can be obtained when the approximation space is suitably selected. It is also shown with numerical examples that a careful selection of the approximation space to fit the individual PDE and its initial and boundary conditions often provides more accurate results than DG methods based on polynomial approximation spaces of the same order of accuracy.
Mapping biological entities using the longest approximately common prefix method
2014-01-01
Background The significant growth in the volume of electronic biomedical data in recent decades has pointed to the need for approximate string matching algorithms that can expedite tasks such as named entity recognition, duplicate detection, terminology integration, and spelling correction. The task of source integration in the Unified Medical Language System (UMLS) requires considerable expert effort despite the presence of various computational tools. This problem warrants the search for a new method for approximate string matching and its UMLS-based evaluation. Results This paper introduces the Longest Approximately Common Prefix (LACP) method as an algorithm for approximate string matching that runs in linear time. We compare the LACP method for performance, precision and speed to nine other well-known string matching algorithms. As test data, we use two multiple-source samples from the Unified Medical Language System (UMLS) and two SNOMED Clinical Terms-based samples. In addition, we present a spell checker based on the LACP method. Conclusions The Longest Approximately Common Prefix method completes its string similarity evaluations in less time than all nine string similarity methods used for comparison. The Longest Approximately Common Prefix outperforms these nine approximate string matching methods in its Maximum F1 measure when evaluated on three out of the four datasets, and in its average precision on two of the four datasets. PMID:24928653
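One plausible reading of a longest-approximately-common-prefix score can be sketched in linear time: scan both strings in parallel, tolerate a small budget of character mismatches, and normalize the matched prefix length. This is an illustrative sketch of the idea, with a hypothetical mismatch threshold, not the paper's exact definition.

```python
def lacp_similarity(s, t, max_mismatches=1):
    """Sketch of a longest-approximately-common-prefix similarity:
    walk the two strings from the start (linear time), allowing up to
    `max_mismatches` differing characters, and return the matched prefix
    length normalized by the longer string's length."""
    mismatches = 0
    prefix = 0
    for a, b in zip(s, t):
        if a != b:
            mismatches += 1
            if mismatches > max_mismatches:
                break
        prefix += 1
    return prefix / max(len(s), len(t), 1)

lacp_similarity("hyperplasia", "hyperplasia")   # -> 1.0 (identical strings)
lacp_similarity("hyperplasia", "hypothermia")   # -> 4/11 (prefixes diverge quickly)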
Comparison of interpolation and approximation methods for optical freeform synthesis
NASA Astrophysics Data System (ADS)
Voznesenskaya, Anna; Krizskiy, Pavel
2017-06-01
Interpolation and approximation methods for freeform surface synthesis are analyzed using a specially developed software tool. Results of freeform surface modeling with piecewise linear interpolation, piecewise quadratic interpolation, cubic spline interpolation, and Lagrange polynomial interpolation are considered, and the most accurate interpolation method is recommended. Surface profiles are approximated with the least squares method. The freeform systems are generated in optical design software.
A simple approximation method for obtaining the spanwise lift distribution
NASA Technical Reports Server (NTRS)
Schrenk, O
1940-01-01
The approximation method described makes possible lift-distribution computations in a few minutes. Comparison with an exact method shows satisfactory agreement. The method is of greater applicability than the exact method and includes also the important case of the wing with end plates.
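Schrenk's classical approximation is simple enough to state in a few lines: take the spanwise lift distribution as the average of the actual planform chord distribution and an elliptic distribution enclosing the same planform area. The sketch below assumes a rectangular wing for the usage example.

```python
import numpy as np

def schrenk_distribution(y, chord, b):
    """Schrenk's approximation for the spanwise lift distribution:
    the average of the planform chord distribution and an elliptic
    distribution with the same total planform area.
    y: spanwise stations on [0, b/2]; chord: local chords; b: full span."""
    # Trapezoidal integral of the chord over the half span, doubled for both halves.
    half_area = float(np.sum(0.5 * (chord[1:] + chord[:-1]) * np.diff(y)))
    S = 2.0 * half_area
    # Elliptic chord distribution with area S: c_ell(0) = 4S/(pi*b).
    c_ell = (4.0 * S / (np.pi * b)) * np.sqrt(1.0 - (2.0 * y / b) ** 2)
    return 0.5 * (chord + c_ell)

# Rectangular wing: span 10, constant chord 1. Schrenk shifts load toward the
# root and drives it to half the local chord contribution at the tip.
b = 10.0
y = np.linspace(0.0, b / 2.0, 201)
chord = np.ones_like(y)
cl = schrenk_distribution(y, chord, b)   # cl[0] = 0.5*(1 + 4/pi), cl[-1] = 0.5
```

The averaging is what makes the method so quick: no lifting-line system has to be solved, yet the result interpolates between the planform and the elliptic optimum.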
An approximation method for fractional integro-differential equations
NASA Astrophysics Data System (ADS)
Emiroglu, Ibrahim
2015-12-01
In this work, an approximation method is proposed for fractional-order linear Fredholm-type integro-differential equations with boundary conditions. The Sinc collocation method is applied, and its efficiency and strength are also discussed through some special examples. The results of the proposed method are compared to the available analytic solutions.
Double power series method for approximating cosmological perturbations
NASA Astrophysics Data System (ADS)
Wren, Andrew J.; Malik, Karim A.
2017-04-01
We introduce a double power series method for finding approximate analytical solutions for systems of differential equations commonly found in cosmological perturbation theory. The method was set out, in a noncosmological context, by Feshchenko, Shkil' and Nikolenko (FSN) in 1966, and is applicable to cases where perturbations are on subhorizon scales. The FSN method is essentially an extension of the well known Wentzel-Kramers-Brillouin (WKB) method for finding approximate analytical solutions for ordinary differential equations. The FSN method we use is applicable well beyond perturbation theory to solve systems of ordinary differential equations, linear in the derivatives, that also depend on a small parameter, which here we take to be related to the inverse wave-number. We use the FSN method to find new approximate oscillating solutions in linear order cosmological perturbation theory for a flat radiation-matter universe. Together with this model's well-known growing and decaying Mészáros solutions, these oscillating modes provide a complete set of subhorizon approximations for the metric potential, radiation and matter perturbations. Comparison with numerical solutions of the perturbation equations shows that our approximations can be made accurate to within a typical error of 1%, or better. We also set out a heuristic method for error estimation. A Mathematica notebook which implements the double power series method is made available online.
Dual methods and approximation concepts in structural synthesis
NASA Technical Reports Server (NTRS)
Fleury, C.; Schmit, L. A., Jr.
1980-01-01
Approximation concepts and dual method algorithms are combined to create a method for minimum weight design of structural systems. Approximation concepts convert the basic mathematical programming statement of the structural synthesis problem into a sequence of explicit primal problems of separable form. These problems are solved by constructing explicit dual functions, which are maximized subject to nonnegativity constraints on the dual variables. It is shown that the joining together of approximation concepts and dual methods can be viewed as a generalized optimality criteria approach. The dual method is successfully extended to deal with pure discrete and mixed continuous-discrete design variable problems. The power of the method presented is illustrated with numerical results for example problems, including a metallic swept wing and a thin delta wing with fiber composite skins.
Efficient variational Bayesian approximation method based on subspace optimization.
Zheng, Yuling; Fraysse, Aurélia; Rodet, Thomas
2015-02-01
Variational Bayesian approximations have been widely used in fully Bayesian inference for approximating an intractable posterior distribution by a separable one. Nevertheless, the classical variational Bayesian approximation (VBA) method suffers from slow convergence to the approximate solution when tackling large-dimensional problems. To address this problem, we propose in this paper a more efficient VBA method. In fact, the variational Bayesian problem can be seen as a functional optimization problem. The proposed method is based on the adaptation of subspace optimization methods in Hilbert spaces to the involved function space, in order to solve this optimization problem in an iterative way. The aim is to determine an optimal direction at each iteration in order to obtain a more efficient method. We highlight the efficiency of our new VBA method and demonstrate its application to image processing by considering an ill-posed linear inverse problem using a total variation prior. Comparisons with state-of-the-art variational Bayesian methods through a numerical example show a notable improvement in computation time.
Improved stochastic approximation methods for discretized parabolic partial differential equations
NASA Astrophysics Data System (ADS)
Guiaş, Flavius
2016-12-01
We present improvements of the stochastic direct simulation method, a known numerical scheme based on Markov jump processes which is used for approximating solutions of ordinary differential equations. This scheme is suited especially for spatial discretizations of evolution partial differential equations (PDEs). By exploiting the full path simulation of the stochastic method, we use this first approximation as a predictor and construct improved approximations by Picard iterations, Runge-Kutta steps, or a combination of the two. As a consequence, the order of convergence is increased. We illustrate the features of the improved method on a standard benchmark problem, a reaction-diffusion equation modeling a combustion process in one space dimension (1D) and two space dimensions (2D).
Successive approximation method for Caputo q-fractional IVPs
NASA Astrophysics Data System (ADS)
Salahshour, Soheil; Ahmadian, Ali; Chan, Chee Seng
2015-07-01
Recently, Abdeljawad and Baleanu (2011) introduced Caputo q-fractional derivatives and used them to solve the Caputo q-fractional initial value problem. For this purpose, they applied the successive approximation method to obtain an explicit solution, but did not clarify under which conditions this method converges. In this paper, we propose a q-Krasnoselskii-Krein type condition to investigate the convergence of the method.
Efficient solution of parabolic equations by Krylov approximation methods
NASA Technical Reports Server (NTRS)
Gallopoulos, E.; Saad, Y.
1990-01-01
Numerical techniques for solving parabolic equations by the method of lines are addressed. The main motivation for the proposed approach is the possibility of exploiting a high degree of parallelism in a simple manner. The basic idea of the method is to approximate the action of the evolution operator on a given state vector by means of a projection process onto a Krylov subspace. Thus, the resulting approximation consists of applying an evolution operator of a very small dimension to a known vector which is, in turn, computed accurately by exploiting well-known rational approximations to the exponential. Because the rational approximation is only applied to a small matrix, the only operations required with the original large matrix are matrix-by-vector multiplications, and as a result the algorithm can easily be parallelized and vectorized. Some relevant approximation and stability issues are discussed. We present some numerical experiments with the method and compare its performance with a few explicit and implicit algorithms.
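The projection step can be sketched directly: run m Arnoldi iterations to build an orthonormal Krylov basis V_m and a small Hessenberg matrix H_m, exponentiate H_m, and map back via exp(tA)v ≈ ||v|| V_m exp(tH_m) e_1. The sketch below uses a plain Taylor scaling-and-squaring exponential for the small matrix rather than the rational approximations of the paper, and a 1D discrete Laplacian as a stand-in problem.

```python
import numpy as np

def small_expm(H, s=20, terms=15):
    """Dense matrix exponential of a small matrix by scaling-and-squaring
    with a truncated Taylor series (adequate for the tiny Krylov matrix)."""
    X = H / (2.0 ** s)
    E = np.eye(H.shape[0])
    term = np.eye(H.shape[0])
    for k in range(1, terms):
        term = term @ X / k
        E = E + term
    for _ in range(s):
        E = E @ E
    return E

def krylov_expv(A, v, t, m=30):
    """Approximate exp(t*A) @ v by Arnoldi projection onto an m-dimensional
    Krylov subspace: exp(tA)v ~ ||v|| * V_m @ exp(t*H_m) @ e_1.
    Only matrix-vector products with the large A are needed."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:              # happy breakdown: subspace is invariant
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    E = small_expm(t * H[:m, :m])
    return beta * (V[:, :m] @ E[:, 0])

# Heat-equation-like test: 1D discrete Laplacian with Dirichlet ends.
n = 50
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
rng = np.random.default_rng(3)
v = rng.standard_normal(n)
u = krylov_expv(A, v, 1.0)
```

Since the large matrix enters only through products A @ V[:, j], the scheme parallelizes exactly as the abstract describes.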
Approximate Design Method for Single Stage Pulse Tube Refrigerators
NASA Astrophysics Data System (ADS)
Pfotenhauer, J. M.; Gan, Z. H.; Radebaugh, R.
2008-03-01
An approximate design method is presented for the design of a single stage Stirling type pulse tube refrigerator. The design method begins from a defined cooling power, operating temperature, average and dynamic pressure, and frequency. Using a combination of phasor analysis, approximate correlations derived from extensive use of REGEN3.2, a few `rules of thumb,' and available models for inertance tubes, a process is presented to define appropriate geometries for the regenerator, pulse tube and inertance tube components. In addition, specifications for the acoustic power and phase between the pressure and flow required from the compressor are defined. The process enables an appreciation of the primary physical parameters operating within the pulse tube refrigerator, but relies on approximate values for the combined loss mechanisms. The defined geometries can provide both a useful starting point, and a sanity check, for more sophisticated design methodologies.
Multi-level methods and approximating distribution functions
NASA Astrophysics Data System (ADS)
Wilson, D.; Baker, R. E.
2016-07-01
Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie's direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie's direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146-179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
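The tau-leap approximation mentioned above is easy to sketch for a pure death process X → X - 1 with propensity c·X: instead of simulating every reaction event, each step of length tau fires a Poisson-distributed batch of reactions. This is a minimal single-level illustration of the bias/cost trade-off, not the authors' multi-level distribution-reconstruction methods.

```python
import numpy as np

def tau_leap_death(x0, c, t_final, tau, n_paths, rng):
    """Tau-leap simulation of the pure death process X -> X - 1 with
    propensity c*X: per step, the number of firings is drawn as
    Poisson(c*X*tau) rather than simulating each event exactly."""
    x = np.full(n_paths, float(x0))
    steps = int(round(t_final / tau))
    for _ in range(steps):
        fires = rng.poisson(c * x * tau)
        x = np.maximum(x - fires, 0.0)   # population cannot go negative
    return x

rng = np.random.default_rng(1)
paths = tau_leap_death(x0=100, c=1.0, t_final=1.0, tau=0.01, n_paths=2000, rng=rng)
# The sample mean should approach the analytic mean 100 * exp(-1) ~ 36.8,
# up to an O(tau) bias and Monte Carlo error.
```

The multi-level idea couples such paths at step sizes tau and tau/2 and sums the correction terms, removing most of the O(tau) bias at far lower cost than shrinking tau directly.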
Approximate Newton-type methods via theory of control
NASA Astrophysics Data System (ADS)
Yap, Chui Ying; Leong, Wah June
2014-12-01
In this paper, we investigate the possible use of control theory, particularly optimal control theory, to derive numerical methods for unconstrained optimization problems. Based upon this theory, we derive a Levenberg-Marquardt-like method that guarantees greatest descent in a particular search region. The implementation of this method in its original form requires inverting a non-sparse matrix, or equivalently solving a linear system, in every iteration. Thus, an approximation of the proposed method via a quasi-Newton update is constructed. Numerical results indicate that the new method is more effective and practical.
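The Levenberg-Marquardt-like step itself is standard and easy to sketch: solve (H + μI)p = -g, which interpolates between a Newton step (μ → 0) and steepest descent (large μ), i.e. the best descent step within a shrinking trust region. The quadratic test problem below is illustrative; this is not the authors' control-theoretic derivation.

```python
import numpy as np

def levenberg_marquardt_step(grad, hess, mu):
    """One damped Newton (Levenberg-Marquardt-like) step: solve
    (H + mu*I) p = -g. Small mu gives the Newton step; large mu gives a
    short steepest-descent step, restricting descent to a search region."""
    n = len(grad)
    return np.linalg.solve(hess + mu * np.eye(n), -grad)

# Minimize the convex quadratic f(x) = 0.5 x^T Q x - b^T x.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = np.zeros(2)
for k in range(30):
    g = Q @ x - b
    x += levenberg_marquardt_step(g, Q, mu=1.0 / (k + 1))  # relax damping over time
x_star = np.linalg.solve(Q, b)   # exact minimizer, for comparison
```

Replacing the exact solve by a quasi-Newton update of an approximate inverse Hessian, as the abstract proposes, removes the per-iteration linear solve.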
Calculating Resonance Positions and Widths Using the Siegert Approximation Method
ERIC Educational Resources Information Center
Rapedius, Kevin
2011-01-01
Here, we present complex resonance states (or Siegert states) that describe the tunnelling decay of a trapped quantum particle from an intuitive point of view that naturally leads to the easily applicable Siegert approximation method. This can be used for analytical and numerical calculations of complex resonances of both the linear and nonlinear…
Using Propensity Score Methods to Approximate Factorial Experimental Designs
ERIC Educational Resources Information Center
Dong, Nianbo
2011-01-01
The purpose of this study is through Monte Carlo simulation to compare several propensity score methods in approximating factorial experimental design and identify best approaches in reducing bias and mean square error of parameter estimates of the main and interaction effects of two factors. Previous studies focused more on unbiased estimates of…
Methods to approximate reliabilities in single-step genomic evaluation
USDA-ARS?s Scientific Manuscript database
Reliability of predictions from single-step genomic BLUP (ssGBLUP) can be calculated by inversion, but that is not feasible for large data sets. Two methods of approximating reliability were developed based on decomposition of a function of reliability into contributions from records, pedigrees, and...
Spin-1 Heisenberg ferromagnet using pair approximation method
Mert, Murat; Mert, Gülistan; Kılıç, Ahmet
2016-06-08
Thermodynamic properties for Heisenberg ferromagnet with spin-1 on the simple cubic lattice have been calculated using pair approximation method. We introduce the single-ion anisotropy and the next-nearest-neighbor exchange interaction. We found that for negative single-ion anisotropy parameter, the internal energy is positive and heat capacity has two peaks.
Capturing correlations in chaotic diffusion by approximation methods.
Knight, Georgie; Klages, Rainer
2011-10-01
We investigate three different methods for systematically approximating the diffusion coefficient of a deterministic random walk on the line that contains dynamical correlations that change irregularly under parameter variation. Capturing these correlations by incorporating higher-order terms, all schemes converge to the analytically exact result. Two of these methods are based on expanding the Taylor-Green-Kubo formula for diffusion, while the third method approximates Markov partitions and transition matrices by using a slight variation of the escape rate theory of chaotic diffusion. We check the practicability of the different methods by working them out analytically and numerically for a simple one-dimensional map, study their convergence, and critically discuss their usefulness in identifying a possible fractal instability of parameter-dependent diffusion, in the case of dynamics where exact results for the diffusion coefficient are not available.
An approximate method for calculating aircraft downwash on parachute trajectories
Strickland, J.H.
1989-01-01
An approximate method for calculating velocities induced by aircraft on parachute trajectories is presented herein. A simple system of quadrilateral vortex panels is used to model the aircraft wing and its wake. The purpose of this work is to provide a simple analytical tool which can be used to approximate the effect of aircraft-induced velocities on parachute performance. Performance issues such as turnover and wake recontact may be strongly influenced by velocities induced by the wake of the delivering aircraft, especially if the aircraft is maneuvering at the time of parachute deployment. 7 refs., 9 figs.
Approximate method of designing a two-element airfoil
NASA Astrophysics Data System (ADS)
Abzalilov, D. F.; Mardanov, R. F.
2011-09-01
An approximate method is proposed for designing a two-element airfoil. The method is based on reducing an inverse boundary-value problem in a doubly connected domain to a problem in a singly connected domain located on a multisheet Riemann surface. The essence of the method is replacement of channels between the airfoil elements by channels of flow suction and blowing. The shape of these channels asymptotically tends to the annular shape of channels passing to infinity on the second sheet of the Riemann surface. The proposed method can be extended to designing multielement airfoils.
Source Localization using Stochastic Approximation and Least Squares Methods
Sahyoun, Samir S.; Djouadi, Seddik M.; Qi, Hairong; Drira, Anis
2009-03-05
This paper presents two approaches to locating the source of a chemical plume: nonlinear least squares and stochastic approximation (SA) algorithms. Concentration levels of the chemical measured by special sensors are used to locate the source. The nonlinear least squares technique is applied at different noise levels and compared with localization using SA. For noise-corrupted data collected from a distributed set of chemical sensors, we show that SA methods are more efficient than the least squares method. SA methods are often better at coping with noisy input information than other search methods.
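A stochastic approximation localizer can be sketched with simultaneous-perturbation stochastic approximation (SPSA), which descends the squared misfit between measured and modelled sensor concentrations using only two loss evaluations per iteration. The Gaussian concentration model, sensor layout, and gain constants below are hypothetical illustrations, not the authors' setup.

```python
import numpy as np

def spsa_localize(sensors, readings, model, x0, a=1.0, c=0.1, n_iter=2000, seed=0):
    """SPSA source localization sketch: at each iteration, perturb the
    candidate source along a random +/-1 direction, estimate the misfit
    gradient from two loss evaluations, and take a decaying-gain step.
    Gain exponents 0.602 and 0.101 are the usual SPSA defaults."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    loss = lambda s: float(np.sum((model(s, sensors) - readings) ** 2))
    for k in range(1, n_iter + 1):
        ak = a / k ** 0.602
        ck = c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=x.shape)
        ghat = (loss(x + ck * delta) - loss(x - ck * delta)) / (2.0 * ck * delta)
        x -= ak * ghat
    return x

# Hypothetical Gaussian concentration model with a true source at (2, 3).
model = lambda src, S: np.exp(-np.sum((S - src) ** 2, axis=1) / 8.0)
sensors = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0], [2.5, 2.5]])
true_src = np.array([2.0, 3.0])
readings = model(true_src, sensors)
est = spsa_localize(sensors, readings, model, x0=[1.0, 1.0])
```

Because SPSA never forms the residual Jacobian, it degrades gracefully when the readings are noisy, which is the regime where the abstract reports SA outperforming least squares.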
Interfacing Relativistic and Nonrelativistic Methods: A Systematic Sequence of Approximations
NASA Technical Reports Server (NTRS)
Dyall, Ken; Langhoff, Stephen R. (Technical Monitor)
1997-01-01
A systematic sequence of approximations for the introduction of relativistic effects into nonrelativistic molecular finite-basis set calculations is described. The theoretical basis for the approximations is the normalized elimination of the small component (ESC) within the matrix representation of the modified Dirac equation. The key features of the normalized method are the retention of the relativistic metric and the ability to define a single matrix U relating the pseudo-large and large component coefficient matrices. This matrix is used to define a modified set of one- and two-electron integrals which have the same appearance as the integrals of the Breit-Pauli Hamiltonian. The first approximation fixes the ratios of the large and pseudo-large components to their atomic values, producing an expansion in atomic 4-spinors. The second approximation defines a local fine-structure constant on each atomic centre, which has the physical value for centres considered to be relativistic and zero for nonrelativistic centres. In the latter case, the 4-spinors are the positive-energy kinetically balanced solutions of the Levy-Leblond equation, and the integrals involving pseudo-large component basis functions on these centres are set to zero. Some results are presented for test systems to illustrate the various approximations.
A hybrid approximation method for solving Hutchinson's equation
NASA Astrophysics Data System (ADS)
Marzban, Hamid Reza; Tabrizidooz, Hamid Reza
2012-01-01
The hybrid function approximation method for solving Hutchinson's equation which is a nonlinear delay partial differential equation, is investigated. The properties of hybrid of block-pulse functions and Lagrange interpolating polynomials based on Legendre-Gauss-type points are presented and are utilized to replace the system of nonlinear delay differential equations resulting from the application of Legendre pseudospectral method, by a system of nonlinear algebraic equations. The validity and applicability of the proposed method are demonstrated through two illustrative examples on Hutchinson's equation.
Parallel iterative solvers and preconditioners using approximate hierarchical methods
Grama, A.; Kumar, V.; Sameh, A.
1996-12-31
In this paper, we report results of the performance, convergence, and accuracy of a parallel GMRES solver for Boundary Element Methods. The solver uses a hierarchical approximate matrix-vector product based on a hybrid Barnes-Hut / Fast Multipole Method. We study the impact of various accuracy parameters on the convergence and show that with minimal loss in accuracy, our solver yields significant speedups. We demonstrate the excellent parallel efficiency and scalability of our solver. The combined speedups from approximation and parallelism represent an improvement of several orders in solution time. We also develop fast and parallelizable preconditioners for this problem. We report on the performance of an inner-outer scheme and a preconditioner based on truncated Green's function. Experimental results on a 256 processor Cray T3D are presented.
Local Approximation and Hierarchical Methods for Stochastic Optimization
NASA Astrophysics Data System (ADS)
Cheng, Bolong
In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function which avoids storing the entire history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show that the policy is asymptotically optimal in theory, and experimental work suggests that the method can reliably find the optimal solution on a range of test functions. For the Markov decision process problem class, we are motivated by an application where we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable due to the large state space and the number of time steps. We propose two methods to circumvent the computational bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with reduced state spaces. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new method only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion. We test these methods on historical price data from the
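The premise behind the low-rank value-function idea is that a smooth value function V(s1, s2) tabulated on a tensor-product state grid is numerically low rank, so a few exactly computed entries plus matrix completion can recover the rest. The snippet below verifies the low-rank structure with a truncated SVD on an illustrative surrogate V (the completion algorithm itself is beyond this sketch).

```python
import numpy as np

# A smooth value function on a 2-D state grid is numerically low rank.
# V here is an illustrative surrogate, not the thesis's battery model.
grid = np.linspace(0.0, 1.0, 200)
E, Q = np.meshgrid(grid, grid)               # e.g. price state x battery charge state
V = np.exp(-E) * (1.0 + Q) + 0.1 * E * Q     # smooth surrogate value function

U, s, Vt = np.linalg.svd(V)
k = 3
V_k = (U[:, :k] * s[:k]) @ Vt[:k]            # best rank-3 reconstruction
rel_err = np.linalg.norm(V - V_k) / np.linalg.norm(V)
print(rel_err)
```

Because V is a sum of two separable terms, a rank-3 truncation already reconstructs it to machine precision; this is exactly the structure that makes completing the matrix from a small sample of exact values feasible.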
A multiscale two-point flux-approximation method
Møyner, Olav; Lie, Knut-Andreas
2014-10-15
A large number of multiscale finite-volume methods have been developed over the past decade to compute conservative approximations to multiphase flow problems in heterogeneous porous media. In particular, several iterative and algebraic multiscale frameworks that seek to reduce the fine-scale residual towards machine precision have been presented. Common for all such methods is that they rely on a compatible primal–dual coarse partition, which makes it challenging to extend them to stratigraphic and unstructured grids. Herein, we propose a general idea for how one can formulate multiscale finite-volume methods using only a primal coarse partition. To this end, we use two key ingredients that are computed numerically: (i) elementary functions that correspond to flow solutions used in transmissibility upscaling, and (ii) partition-of-unity functions used to combine elementary functions into basis functions. We exemplify the idea by deriving a multiscale two-point flux-approximation (MsTPFA) method, which is robust with regards to strong heterogeneities in the permeability field and can easily handle general grids with unstructured fine- and coarse-scale connections. The method can easily be adapted to arbitrary levels of coarsening, and can be used both as a standalone solver and as a preconditioner. Several numerical experiments are presented to demonstrate that the MsTPFA method can be used to solve elliptic pressure problems on a wide variety of geological models in a robust and efficient manner.
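The fine-scale discretization that MsTPFA coarsens is the classical cell-centred two-point flux approximation, in which each face gets a transmissibility from the harmonic mean of adjacent half-cell transmissibilities. The one-dimensional sketch below shows only this textbook building block, not the multiscale basis-function construction of the paper.

```python
import numpy as np

# Classical cell-centred TPFA in 1-D with Dirichlet pressures at both ends.
def tpfa_pressure_1d(perm, dx, p_left, p_right):
    n = len(perm)
    half = 2.0 * np.asarray(perm, float) / dx        # half-transmissibilities K_i/(dx/2)
    T = 1.0 / (1.0 / half[:-1] + 1.0 / half[1:])     # harmonic mean on interior faces
    A, b = np.zeros((n, n)), np.zeros(n)
    for f in range(n - 1):                           # assemble interior face fluxes
        i, j = f, f + 1
        A[i, i] += T[f]; A[j, j] += T[f]
        A[i, j] -= T[f]; A[j, i] -= T[f]
    A[0, 0] += half[0];    b[0] += half[0] * p_left  # Dirichlet boundary faces
    A[-1, -1] += half[-1]; b[-1] += half[-1] * p_right
    return np.linalg.solve(A, b)

# Homogeneous permeability on [0, 1]: TPFA reproduces the linear pressure drop.
p = tpfa_pressure_1d(perm=np.ones(10), dx=0.1, p_left=1.0, p_right=0.0)
print(p)
```

For constant permeability the scheme is exact, giving the linear profile sampled at the ten cell centres.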
Asymptotic approximation method of force reconstruction: Proof of concept
NASA Astrophysics Data System (ADS)
Sanchez, J.; Benaroya, H.
2017-08-01
An important problem in engineering is the determination of the system input based on the system response. This type of problem is difficult to solve as it is often ill-defined, and produces inaccurate or non-unique results. Current reconstruction techniques typically involve the employment of optimization methods or additional constraints to regularize the problem, but these methods are not without their flaws as they may be sub-optimally applied and produce inadequate results. An alternative approach is developed that draws upon concepts from control systems theory, the equilibrium analysis of linear dynamical systems with time-dependent inputs, and asymptotic approximation analysis. This paper presents the theoretical development of the proposed method. A simple application of the method is presented to demonstrate the procedure. A more complex application to a continuous system is performed to demonstrate the applicability of the method.
Analytic approximations to the modon dispersion relation. [in oceanography
NASA Technical Reports Server (NTRS)
Boyd, J. P.
1981-01-01
Three explicit analytic approximations are given to the modon dispersion relation developed by Flierl et al. (1980) to describe Gulf Stream rings and related phenomena in the oceans and atmosphere. The solutions are in the form of k(q), and are developed as a power series in q for small q, an inverse power series in 1/q for large q, and a two-point Pade approximant. The low-order Pade approximant is shown to yield a solution for the dispersion relation with a maximum relative error for the lowest branch of the function of one part in 700 on the q interval from zero to infinity.
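A Padé approximant P/Q is built from series coefficients by solving the linear system that matches the series through order m+n. The sketch below is the generic one-point construction (the modon study uses a two-point variant that also matches the expansion at q to infinity), demonstrated on exp(x).

```python
import numpy as np
from math import factorial, exp

# Build an [m/n] Pade approximant P/Q from Taylor coefficients c_k.
def pade(c, m, n):
    c = list(map(float, c))
    # Denominator b_1..b_n from: sum_j b_j c_{m+i-j} = -c_{m+i}, i = 1..n, b_0 = 1.
    C = np.array([[c[m + i - j] if m + i - j >= 0 else 0.0
                   for j in range(1, n + 1)] for i in range(1, n + 1)])
    b = np.concatenate(([1.0],
                        np.linalg.solve(C, [-c[m + i] for i in range(1, n + 1)])))
    # Numerator a_k = sum_j b_j c_{k-j}, k = 0..m.
    a = [sum(b[j] * c[k - j] for j in range(min(k, n) + 1)) for k in range(m + 1)]
    return np.poly1d(a[::-1]), np.poly1d(b[::-1])

# [2/2] Pade of exp(x) built from its Taylor coefficients 1/k!.
P, Q = pade([1.0 / factorial(k) for k in range(5)], 2, 2)
err = abs(P(0.5) / Q(0.5) - exp(0.5))
print(err)
```

The [2/2] approximant of exp reproduces the classic 1 + x/2 + x^2/12 over 1 - x/2 + x^2/12 and beats the truncated series of the same order.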
Separable approximation method for two-body relativistic scattering
NASA Astrophysics Data System (ADS)
Tandy, P. C.; Thaler, R. M.
1988-03-01
A method for defining a separable approximation to a given interaction within a two-body relativistic equation, such as the Bethe-Salpeter equation, is presented. The rank-N separable representation given here permits exact reproduction of the T matrix on the mass shell and half off the mass shell at N selected bound state and/or continuum values of the invariant mass. The method employed is a four-space generalization of the separable representation developed for Schrödinger interactions by Ernst, Shakin, and Thaler, supplemented by procedures for dealing with the relativistic spin structure in the case of Dirac particles.
Advances in dual algorithms and convex approximation methods
NASA Technical Reports Server (NTRS)
Smaoui, H.; Fleury, C.; Schmit, L. A.
1988-01-01
A new algorithm for solving the duals of separable convex optimization problems is presented. The algorithm is based on an active set strategy in conjunction with a variable metric method. This first order algorithm is more reliable than Newton's method used in DUAL-2 because it does not break down when the Hessian matrix becomes singular or nearly singular. A perturbation technique is introduced in order to remove the nondifferentiability of the dual function which arises when linear constraints are present in the approximate problem.
The Caratheodory-Fejer Method for Real Rational Approximation,
1981-10-01
Gutknecht, M. H.; Trefethen, Lloyd N. (Seminar für angewandte Mathematik, Eidgenössische Technische Hochschule, Zürich, Switzerland). Report STAN-NA-81-15. Abstract: A "Carathéodory-Fejér method" is presented for near-best real rational approximation.
A Surface Approximation Method for Image and Video Correspondences.
Huang, Jingwei; Wang, Bin; Wang, Wenping; Sen, Pradeep
2015-12-01
Although finding correspondences between similar images is an important problem in image processing, the existing algorithms cannot find accurate and dense correspondences in images with significant changes in lighting/transformation or with non-rigid objects. This paper proposes a novel method for finding accurate and dense correspondences between images even in these difficult situations. Starting with the non-rigid dense correspondence algorithm [1] to generate an initial correspondence map, we propose a new geometric filter that uses cubic B-Spline surfaces to approximate the correspondence mapping functions for shared objects in both images, thereby eliminating outliers and noise. We then propose an iterative algorithm which enlarges the region containing valid correspondences. Compared with the existing methods, our method is more robust to significant changes in lighting, color, or viewpoint. Furthermore, we demonstrate how to extend our surface approximation method to video editing by first generating a reliable correspondence map between a given source frame and each frame of a video. The user can then edit the source frame, and the changes are automatically propagated through the entire video using the correspondence map. To evaluate our approach, we examine applications of unsupervised image recognition and video texture editing, and show that our algorithm produces better results than those from state-of-the-art approaches.
Visualizations for genetic assignment analyses using the saddlepoint approximation method.
McMillan, L F; Fewster, R M
2017-09-01
We propose a method for visualizing genetic assignment data by characterizing the distribution of genetic profiles for each candidate source population. This method enhances the assignment method of Rannala and Mountain (1997) by calculating appropriate graph positions for individuals for which some genetic data are missing. An individual with missing data is positioned in the distributions of genetic profiles for a population according to its estimated quantile based on its available data. The quantiles of the genetic profile distribution for each population are calculated by approximating the cumulative distribution function (CDF) using the saddlepoint method, and then inverting the CDF to get the quantile function. The saddlepoint method also provides a way to visualize assignment results calculated using the leave-one-out procedure. This new method offers an advance upon assignment software such as geneclass2, which provides no visualization method, and is biologically more interpretable than the bar charts provided by the software structure. We show results from simulated data and apply the methods to microsatellite genotype data from ship rats (Rattus rattus) captured on the Great Barrier Island archipelago, New Zealand. The visualization method makes it straightforward to detect features of population structure and to judge the discriminative power of the genetic data for assigning individuals to source populations. © 2017, The International Biometric Society.
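The graph-placement step above hinges on inverting a CDF to obtain the quantile function. The sketch below does this by plain bisection on a monotone CDF; the standard normal CDF stands in here for the saddlepoint-approximated genetic-profile CDF of the paper.

```python
import math

# Quantile function by numerically inverting a monotone CDF via bisection.
def quantile(cdf, prob, lo, hi, tol=1e-10):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cdf(mid) < prob:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative CDF: the standard normal, via the error function.
std_normal_cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
q = quantile(std_normal_cdf, 0.975, -10.0, 10.0)
print(q)
```

The 97.5% quantile comes out near 1.96, the familiar normal critical value; any CDF approximation, saddlepoint included, can be inverted the same way.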
1991-01-29
The approximant satisfies a recursion relation and can be computed from the fugacity series in closed form. We apply this approximant to the underpotential deposition of metals on an electrode, and obtain voltammograms that show the sharp spikes seen in recent experiments, associated with the sudden formation of films at electrodes. It has been possible to perform structural analysis of underpotential deposits of metallic monolayers.
Globbic approximation in low-resolution direct-methods phasing.
Guo, D Y; Blessing, R H; Langs, D A
2000-09-01
Probabilistic direct-methods phasing theory, originally based on a uniform atomic distribution hypothesis, is shown to be adaptable to a non-uniform bulk-solvent-compensated globbic approximation for protein crystals at low resolution. The effective number n(g) of non-H protein atoms per polyatomic glob increases with decreasing resolution; low-resolution phases depend on the positions of only N(g) = N(a)/n(g) globs rather than N(a) atoms. Test calculations were performed with measured structure-factor data and the refined structural parameters from a protein crystal with approximately 10 000 non-H protein atoms per molecule and approximately 60% solvent volume. Low-resolution data sets with d(min) ranging from 15 to 5 A gave n(g) = ad(min) + b, with a = 1.0 A(-1) and b = -1.9 for the test case. Results of tangent-formula phase-estimation trials emphasize that completeness of the low-resolution data is critically important for probabilistic phasing.
Finite amplitude method for the quasiparticle random-phase approximation
Avogadro, Paolo; Nakatsukasa, Takashi
2011-07-15
We present the finite amplitude method (FAM), originally proposed in Ref. [17], for superfluid systems. A Hartree-Fock-Bogoliubov code may be transformed into a code of the quasiparticle-random-phase approximation (QRPA) with simple modifications. This technique has advantages over the conventional QRPA calculations, such as coding feasibility and computational cost. We perform the fully self-consistent linear-response calculation for the spherical neutron-rich nucleus {sup 174}Sn, modifying the hfbrad code, to demonstrate the accuracy, feasibility, and usefulness of the FAM.
Proton Form Factor Measurements Using Polarization Method: Beyond Born Approximation
Pentchev, Lubomir
2008-10-13
Significant theoretical and experimental efforts have been made over the past 7 years aiming to explain the discrepancy between the proton form factor ratio data obtained at JLab using the polarization method and the previous Rosenbluth measurements. Preliminary results from the first high precision polarization experiment dedicated to study effects beyond Born approximation will be presented. The ratio of the transferred polarization components and, separately, the longitudinal polarization in ep elastic scattering have been measured at a fixed Q{sup 2} of 2.5 GeV{sup 2} over a wide kinematic range. The two quantities impose constraints on the real part of the ep elastic amplitudes.
Parabolic approximation method for the mode conversion-tunneling equation
Phillips, C.K.; Colestock, P.L.; Hwang, D.Q.; Swanson, D.G.
1987-07-01
The derivation of the wave equation which governs ICRF wave propagation, absorption, and mode conversion within the kinetic layer in tokamaks has been extended to include diffraction and focussing effects associated with the finite transverse dimensions of the incident wavefronts. The kinetic layer considered consists of a uniform density, uniform temperature slab model in which the equilibrium magnetic field is oriented in the z-direction and varies linearly in the x-direction. An equivalent dielectric tensor as well as a two-dimensional energy conservation equation are derived from the linearized Vlasov-Maxwell system of equations. The generalized form of the mode conversion-tunneling equation is then extracted from the Maxwell equations, using the parabolic approximation method in which transverse variations of the wave fields are assumed to be weak in comparison to the variations in the primary direction of propagation. Methods of solving the generalized wave equation are discussed. 16 refs.
Approximation method to compute domain related integrals in structural studies
NASA Astrophysics Data System (ADS)
Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.
2015-11-01
Various engineering calculi use integral calculus in theoretical models, i.e. analytical and numerical models. For usual problems, integrals have exact mathematical solutions. If the domain of integration is complicated, several methods may be used to calculate the integral. The first idea is to divide the domain into smaller sub-domains for which there are direct calculus relations, i.e. in strength of materials the bending moment may be computed in some discrete points using the graphical integration of the shear force diagram, which usually has a simple shape. Another example is in mathematics, where the surface of a subgraph may be approximated by a set of rectangles or trapezoids used to calculate the definite integral. The goal of the work is to introduce our studies about the calculus of the integrals in the transverse section domains, computer aided solutions and a generalizing method. The aim of our research is to create general computer based methods to execute the calculi in structural studies. Thus, we define a Boolean algebra which operates with ‘simple’ shape domains. This algebraic standpoint uses addition and subtraction, conditioned by the sign of every ‘simple’ shape (-1 for the shapes to be subtracted). By ‘simple’ shape or ‘basic’ shape we define either shapes for which there are direct calculus relations, or domains whose frontiers are approximated by known functions and for which the corresponding calculus is carried out using an algorithm. The ‘basic’ shapes are linked to the calculus of the most significant stresses in the section, a refined aspect which needs special attention. Starting from this idea, in the libraries of ‘basic’ shapes there were included rectangles, ellipses and domains whose frontiers are approximated by spline functions. The domain triangularization methods suggested that another ‘basic’ shape to be considered is the triangle. The subsequent phase was to deduce the exact relations for the
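The signed 'basic shape' algebra can be made concrete in minimal form: a composite section is a list of simple shapes with sign +1 and cut-outs with sign -1, and areas and first moments then combine linearly. The sketch below uses rectangles only and is purely illustrative, not the paper's shape library.

```python
# Signed-shape algebra for section properties: +1 shapes add, -1 cut-outs subtract.
class Rect:
    def __init__(self, x, y, w, h, sign=+1):  # (x, y) is the lower-left corner
        self.x, self.y, self.w, self.h, self.sign = x, y, w, h, sign

    def area(self):
        return self.sign * self.w * self.h

    def first_moment_y(self):                 # signed integral of y over the shape
        return self.sign * self.w * self.h * (self.y + self.h / 2.0)

def centroid_y(shapes):
    return sum(s.first_moment_y() for s in shapes) / sum(s.area() for s in shapes)

# A 4 x 4 square with a centred 2 x 2 hole: the centroid stays at y = 2 by symmetry.
section = [Rect(0, 0, 4, 4, +1), Rect(1, 1, 2, 2, -1)]
print(centroid_y(section))
```

Because each property is a domain integral, the same signed-summation pattern extends to second moments and to non-rectangular 'basic' shapes with known closed-form integrals.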
Approximate hard-sphere method for densely packed granular flows
NASA Astrophysics Data System (ADS)
Guttenberg, Nicholas
2011-05-01
The simulation of granular media is usually done either with event-driven codes that treat collisions as instantaneous but have difficulty with very dense packings, or with molecular dynamics (MD) methods that approximate rigid grains using a stiff viscoelastic spring. There is a little-known method that combines several collision events into a single timestep to retain the instantaneous collisions of event-driven dynamics, but also be able to handle dense packings. However, it is poorly characterized as to its regime of validity and failure modes. We present a modification of this method to reduce the introduction of overlap error, and test it using the problem of two-dimensional (2D) granular Couette flow, a densely packed system that has been well characterized by previous work. We find that this method can successfully replicate the results of previous work up to the point of jamming, and that it can do so a factor of 10 faster than comparable MD methods.
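The elementary event-driven quantity that such hybrid methods process in batches is the pairwise time of impact of two rigid discs, found from the quadratic |r + v t| = d. The sketch below is a generic two-disc version with illustrative units, not the batched timestep scheme itself.

```python
import math

# Time of impact for two rigid discs from relative position r, relative velocity v,
# and contact distance d (sum of the two radii); math.inf means no future impact.
def impact_time(r, v, d):
    a = v[0] ** 2 + v[1] ** 2
    b = r[0] * v[0] + r[1] * v[1]        # discs approach only if b < 0
    c = r[0] ** 2 + r[1] ** 2 - d ** 2
    disc = b * b - a * c
    if a == 0.0 or b >= 0.0 or disc < 0.0:
        return math.inf                  # at rest, receding, or passing by
    return (-b - math.sqrt(disc)) / a    # earlier root of |r + v*t| = d

# Head-on approach: a gap of 1.0 closed at unit relative speed.
t_hit = impact_time(r=(2.0, 0.0), v=(-1.0, 0.0), d=1.0)
print(t_hit)
```

Event-driven codes advance to the minimum of these times over all pairs; the hybrid approach instead resolves several such events per fixed timestep.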
Hybrid functionals and GW approximation in the FLAPW method
NASA Astrophysics Data System (ADS)
Friedrich, Christoph; Betzinger, Markus; Schlipf, Martin; Blügel, Stefan; Schindlmayr, Arno
2012-07-01
We present recent advances in numerical implementations of hybrid functionals and the GW approximation within the full-potential linearized augmented-plane-wave (FLAPW) method. The former is an approximation for the exchange-correlation contribution to the total energy functional in density-functional theory, and the latter is an approximation for the electronic self-energy in the framework of many-body perturbation theory. All implementations employ the mixed product basis, which has evolved into a versatile basis for the products of wave functions, describing the incoming and outgoing states of an electron that is scattered by interacting with another electron. It can thus be used for representing the nonlocal potential in hybrid functionals as well as the screened interaction and related quantities in GW calculations. In particular, the six-dimensional space integrals of the Hamiltonian exchange matrix elements (and exchange self-energy) decompose into sums over vector-matrix-vector products, which can be evaluated easily. The correlation part of the GW self-energy, which contains a time or frequency dependence, is calculated on the imaginary frequency axis with a subsequent analytic continuation to the real axis or, alternatively, by a direct frequency convolution of the Green function G and the dynamically screened Coulomb interaction W along a contour integration path that avoids the poles of the Green function. Hybrid-functional and GW calculations are notoriously computationally expensive. We present a number of tricks that reduce the computational cost considerably, including the use of spatial and time-reversal symmetries, modifications of the mixed product basis with the aim to optimize it for the correlation self-energy and another modification that makes the Coulomb matrix sparse, analytic expansions of the interaction potentials around the point of divergence at k = 0, and a nested density and density-matrix convergence scheme for hybrid
Atomistic Modeling of Nanostructures via the BFS Quantum Approximate Method
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Garces, Jorge E.; Noebe, Ronald D.; Farias, D.
2003-01-01
Ideally, computational modeling techniques for nanoscopic physics would be able to perform free of limitations on the type and number of elements, while providing comparable accuracy when dealing with bulk or surface problems. Computational efficiency is also desirable, if not mandatory, for properly dealing with the complexity of typical nano-structured systems. A quantum approximate technique, the BFS method for alloys, which attempts to meet these demands, is introduced for the calculation of the energetics of nanostructures. The versatility of the technique is demonstrated through analysis of diverse systems, including multi-phase precipitation in a five-element Ni-Al-Ti-Cr-Cu alloy and the formation of mixed-composition Co-Cu islands on a metallic Cu(111) substrate.
A stochastic approximation method for assigning values to calibrators.
Schlain, B
1998-04-01
A new procedure is provided for transferring analyte concentration values from a reference material to production calibrators. This method is robust to calibration curve-fitting errors and can be accomplished using only one instrument and one set of reagents. An easily implemented stochastic approximation algorithm iteratively finds the appropriate analyte level of a standard prepared from a reference material that will yield the same average signal response as the new production calibrator. Alternatively, a production bulk calibrator material can be iteratively adjusted to give the same average signal response as some prespecified, fixed reference standard. In either case, the outputted value assignment of the production calibrator is the analyte concentration of the reference standard in the final iteration of the algorithm. Sample sizes are statistically determined as functions of known within-run signal response precisions and user-specified accuracy tolerances.
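The iterative value-assignment scheme above is a stochastic approximation in the Robbins-Monro sense: nudge the trial analyte level until its mean signal matches the target signal of the production calibrator. The noisy linear response f below is an illustrative stand-in for a real assay, not the paper's instrument model.

```python
import random

# Robbins-Monro iteration: x_{n+1} = x_n - (a/n) * (signal(x_n) - target).
# The decreasing step sizes a/n satisfy the standard RM convergence conditions.
def robbins_monro(signal, target, x0, n_iter=4000, a=1.0):
    x = x0
    for n in range(1, n_iter + 1):
        x -= (a / n) * (signal(x) - target)
    return x

random.seed(0)
f = lambda x: 2.0 * x + 1.0 + random.gauss(0.0, 0.1)  # noisy linear response
x_hat = robbins_monro(f, target=7.0, x0=0.0)          # noise-free root: x* = 3
print(x_hat)
```

The iteration homes in on the analyte level whose average response equals the target, without ever fitting a calibration curve, which is what makes the procedure robust to curve-fitting errors.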
Multivariate approximation methods and applications to geophysics and geodesy
NASA Technical Reports Server (NTRS)
Munteanu, M. J.
1979-01-01
The first report in a series is presented which is intended to be written by the author with the purpose of treating a class of approximation methods of functions in one and several variables and ways of applying them to geophysics and geodesy. The first report is divided in three parts and is devoted to the presentation of the mathematical theory and formulas. Various optimal ways of representing functions in one and several variables and the associated error when information is had about the function such as satellite data of different kinds are discussed. The framework chosen is Hilbert spaces. Experiments were performed on satellite altimeter data and on satellite to satellite tracking data.
NASA Astrophysics Data System (ADS)
Olson, Branden; Kleiber, William
2017-04-01
Stochastic precipitation generators (SPGs) produce synthetic precipitation data and are frequently used to generate inputs for physical models throughout many scientific disciplines. Especially for large data sets, statistical parameter estimation is difficult due to the high dimensionality of the likelihood function. We propose techniques to estimate SPG parameters for spatiotemporal precipitation occurrence based on an emerging set of methods called Approximate Bayesian computation (ABC), which bypass the evaluation of a likelihood function. Our statistical model employs a thresholded Gaussian process that reduces to a probit regression at single sites. We identify appropriate ABC penalization metrics for our model parameters to produce simulations whose statistical characteristics closely resemble those of the observations. Spell length metrics are appropriate for single sites, while a variogram-based metric is proposed for spatial simulations. We present numerical case studies at sites in Colorado and Iowa where the estimated statistical model adequately reproduces local and domain statistics.
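The core ABC idea is likelihood-free: draw parameters from the prior, simulate data, and keep draws whose summary statistic falls within a tolerance of the observed one. In the rejection-sampler sketch below, a single Bernoulli wet/dry probability p stands in for the full thresholded-Gaussian-process SPG, and the prior, tolerance, and sample sizes are illustrative.

```python
import random

# ABC rejection: accept a prior draw p if its simulated wet-day fraction is
# within eps of the observed wet-day fraction (the distance metric).
def abc_rejection(observed_wet_frac, n_days, n_prop=5000, eps=0.02, seed=2):
    rng = random.Random(seed)
    kept = []
    for _ in range(n_prop):
        p = rng.random()                                    # Uniform(0, 1) prior
        wet = sum(rng.random() < p for _ in range(n_days))  # simulated occurrences
        if abs(wet / n_days - observed_wet_frac) < eps:
            kept.append(p)
    return kept

post = abc_rejection(observed_wet_frac=0.3, n_days=400)
post_mean = sum(post) / len(post)
print(post_mean)
```

The accepted draws approximate the posterior without ever evaluating a likelihood; richer metrics (spell lengths, variograms) slot in by replacing the distance test.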
Introduction to Methods of Approximation in Physics and Astronomy
NASA Astrophysics Data System (ADS)
van Putten, Maurice H. P. M.
2017-04-01
Modern astronomy reveals an evolving Universe rife with transient sources, mostly discovered - few predicted - in multi-wavelength observations. Our window of observations now includes electromagnetic radiation, gravitational waves and neutrinos. For the practicing astronomer, these are highly interdisciplinary developments that pose a novel challenge to be well-versed in astroparticle physics and data analysis. In realizing the full discovery potential of these multimessenger approaches, the latter increasingly involves high-performance supercomputing. These lecture notes developed out of lectures on mathematical-physics in astronomy to advanced undergraduate and beginning graduate students. They are organised to be largely self-contained, starting from basic concepts and techniques in the formulation of problems and methods of approximation commonly used in computation and numerical analysis. This includes root finding, integration, signal detection algorithms involving the Fourier transform and examples of numerical integration of ordinary differential equations and some illustrative aspects of modern computational implementation. In the applications, considerable emphasis is put on fluid dynamical problems associated with accretion flows, as these are responsible for a wealth of high energy emission phenomena in astronomy. The topics chosen are largely aimed at phenomenological approaches, to capture main features of interest by effective methods of approximation at a desired level of accuracy and resolution. Formulated in terms of a system of algebraic, ordinary or partial differential equations, this may be pursued by perturbation theory through expansions in a small parameter or by direct numerical computation. Successful application of these methods requires a robust understanding of asymptotic behavior, errors and convergence. In some cases, the number of degrees of freedom may be reduced, e.g., for the purpose of (numerical) continuation or to identify
NASA Astrophysics Data System (ADS)
Tian, Jiasheng; Tong, Jian; Shi, Jian; Gui, Liangqi
2017-02-01
In this paper a new approximate fast method of calculating the bistatic-scattering coefficients of a multilayer structure with random rough interfaces is presented, based on the Kirchhoff Approximation (KA) and the electromagnetic theory of stratified media. First, the electromagnetic scattering from a Gaussian rough metal or dielectric surface was calculated by the KA method and the method of moments (MOM), and the effectiveness of the KA method was confirmed and verified. Second, a new approximate fast method was presented to calculate electromagnetic scattering from a multilayer random rough surface, based on electromagnetic reflection from multilayer parallel surfaces and KA. The results calculated by the new method were in good agreement with those from MOM, especially near the specular point. Finally, the new method and MOM were compared in terms of computing time, memory resources, and complexity. The comparison indicated that the new approximate method was faster than MOM by a factor of about 30-150. The new approximate fast method avoids a large matrix inversion and greatly reduces the computation time and memory resources, thus improving computational efficiency. It is an effective and fast approximate method for analyzing electromagnetic scattering from multilayer rough surfaces.
A comparison of computational methods and algorithms for the complex gamma function
NASA Technical Reports Server (NTRS)
Ng, E. W.
1974-01-01
A survey and comparison of some computational methods and algorithms for the gamma and log-gamma functions of complex arguments are presented. The methods and algorithms reported include Chebyshev approximations, Pade expansions, and Stirling's asymptotic series. The comparison leads to the conclusion that Algorithm 421, published in the Communications of the ACM by H. Kuki, is the best program either for individual applications or for inclusion in subroutine libraries.
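Of the methods surveyed, Stirling's asymptotic series is the easiest to sketch. The snippet below pairs it with the recurrence log Gamma(z) = log Gamma(z + 1) - log z so the series is evaluated where it is accurate; it is an illustrative sketch, not Kuki's Algorithm 421.

```python
import cmath, math

# Bernoulli numbers B_2..B_14 for the Stirling correction terms.
B = [1/6, -1/30, 1/42, -1/30, 5/66, -691/2730, 7/6]

def log_gamma(z, shift=8.0):
    z, corr = complex(z), 0.0
    while z.real < shift:                  # recurrence into the asymptotic regime
        corr -= cmath.log(z)
        z += 1
    # Stirling: (z - 1/2) log z - z + log(2*pi)/2 + sum B_2k / (2k(2k-1) z^(2k-1)).
    s = (z - 0.5) * cmath.log(z) - z + 0.5 * math.log(2 * math.pi)
    for k, bk in enumerate(B, start=1):
        s += bk / (2 * k * (2 * k - 1) * z ** (2 * k - 1))
    return s + corr

err = abs(cmath.exp(log_gamma(5.0)) - 24.0)  # Gamma(5) = 4! = 24
print(err)
```

With the argument shifted to Re z >= 8, seven correction terms already give near machine-precision values, which is why asymptotic series remain competitive with Chebyshev fits.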
Approximation methods for combined thermal/structural design
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Shore, C. P.
1979-01-01
Two approximation concepts for combined thermal/structural design are evaluated. The first concept is an approximate thermal analysis based on the first derivatives of structural temperatures with respect to design variables. Two commonly used first-order Taylor series expansions are examined. The direct and reciprocal expansions are special members of a general family of approximations, and for some conditions other members of that family of approximations are more accurate. Several examples are used to compare the accuracy of the different expansions. The second approximation concept is the use of critical time points for combined thermal and stress analyses of structures with transient loading conditions. Significant time savings are realized by identifying critical time points and performing the stress analysis for those points only. The design of an insulated panel which is exposed to transient heating conditions is discussed.
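The direct and reciprocal expansions can be made concrete in one variable: the reciprocal form expands in the intermediate variable y = 1/x, which is exact for responses proportional to 1/x (typical of stresses versus member sizing variables). The function below is an illustrative example, not one of the paper's thermal/structural models.

```python
# Direct first-order Taylor expansion of f about x0.
def direct(f, df, x0):
    return lambda x: f(x0) + df(x0) * (x - x0)

# Reciprocal expansion: first-order in y = 1/x, i.e. the usual term times x0/x.
def reciprocal(f, df, x0):
    return lambda x: f(x0) + df(x0) * (x - x0) * (x0 / x)

f = lambda x: 1.0 / x            # stress-like response, exactly reciprocal in x
df = lambda x: -1.0 / x ** 2
d, r = direct(f, df, 2.0), reciprocal(f, df, 2.0)
print(abs(r(3.0) - f(3.0)), abs(d(3.0) - f(3.0)))
```

Here the reciprocal member of the family reproduces f exactly while the direct member incurs a visible error, illustrating why neither special member dominates for all responses.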
Communication: Improved pair approximations in local coupled-cluster methods
NASA Astrophysics Data System (ADS)
Schwilk, Max; Usvyat, Denis; Werner, Hans-Joachim
2015-03-01
In local coupled cluster treatments the electron pairs can be classified according to the magnitude of their energy contributions or distances into strong, close, weak, and distant pairs. Different approximations are introduced for the latter three classes. In this communication, an improved simplified treatment of close and weak pairs is proposed, which is based on long-range cancellations of individually slowly decaying contributions in the amplitude equations. Benchmark calculations for correlation, reaction, and activation energies demonstrate that these approximations work extremely well, while pair approximations based on local second-order Møller-Plesset theory can lead to errors that are 1-2 orders of magnitude larger.
A Binomial Approximation Method for the Ising Model
NASA Astrophysics Data System (ADS)
Streib, Noah; Streib, Amanda; Beichl, Isabel; Sullivan, Francis
2014-08-01
A large portion of the computation required for the partition function of the Ising model can be captured with a simple formula. In this work, we support this claim by defining an approximation to the partition function and other thermodynamic quantities of the Ising model that requires no algorithm at all. This approximation, which uses the high temperature expansion, is solely based on the binomial distribution, and performs very well at low temperatures. At high temperatures, we provide an alternative approximation, which also serves as a lower bound on the partition function and is trivial to compute. We provide theoretical evidence and the results of numerical experiments to support the strength of these approximations.
Kuwahara, Hiroyuki; Myers, Chris J
2008-09-01
Given the substantial computational requirements of stochastic simulation, approximation is essential for efficient analysis of any realistic biochemical system. This paper introduces a new approximation method to reduce the computational cost of stochastic simulations of an enzymatic reaction scheme, which in biochemical systems often includes rapidly changing fast reactions with enzyme and enzyme-substrate complex molecules present in very small counts. Our new method removes the substrate dissociation reaction by approximating the passage time of the formation of each enzyme-substrate complex molecule that is destined for a production reaction. This approach skips the firings of unimportant yet expensive reaction events, resulting in a substantial acceleration in the stochastic simulations of enzymatic reactions. Additionally, since all the parameters used in our new approach can be derived from the Michaelis-Menten parameters, which can actually be measured from experimental data, applications of this approximation can be practical even without full knowledge of the underlying enzymatic reaction. Here, we apply this new method to various enzymatic reaction systems, resulting in a speedup of orders of magnitude in temporal behavior analysis without any significant loss in accuracy. Furthermore, we show that our new method can perform better than some of the best existing approximation methods for enzymatic reactions in terms of accuracy and efficiency.
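The Michaelis-Menten parameters mentioned above define the familiar reduced rate law v = V_max·S/(K_m + S). As a minimal, hypothetical sketch (not the paper's passage-time method), the following Euler integration of the quasi-steady-state reduction shows how the two measurable parameters alone suffice to propagate the substrate level:

```python
def simulate_mm(s0: float, vmax: float, km: float,
                dt: float = 0.01, t_end: float = 50.0) -> float:
    """Euler integration of the quasi-steady-state Michaelis-Menten law
    dS/dt = -Vmax * S / (Km + S); returns the substrate level at t_end.

    In terms of elementary rate constants, Vmax = k2 * E_total and
    Km = (k_-1 + k2) / k1, but only Vmax and Km are needed here.
    """
    s = s0
    t = 0.0
    while t < t_end:
        s -= dt * vmax * s / (km + s)
        if s < 0.0:      # guard against overshoot at very low substrate
            s = 0.0
        t += dt
    return s
```

Doubling V_max depletes the substrate faster, as expected from the rate law.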
Stochastic Approximation Methods for Latent Regression Item Response Models
ERIC Educational Resources Information Center
von Davier, Matthias; Sinharay, Sandip
2010-01-01
This article presents an application of a stochastic approximation expectation maximization (EM) algorithm using a Metropolis-Hastings (MH) sampler to estimate the parameters of an item response latent regression model. Latent regression item response models are extensions of item response theory (IRT) to a latent variable model with covariates…
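A Metropolis-Hastings sampler of the kind used inside such stochastic approximation EM algorithms can be sketched in a few lines. The example below is a generic random-walk sampler targeting a user-supplied log-density, not the IRT-specific sampler of the article:

```python
import math
import random

def metropolis_hastings(log_target, x0=0.0, steps=20000,
                        step_size=1.0, seed=0):
    """Random-walk Metropolis sampler; returns the list of samples."""
    rng = random.Random(seed)
    x = x0
    lp = log_target(x)
    samples = []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, step_size)     # symmetric proposal
        lp_prop = log_target(prop)
        # accept with probability min(1, target(prop)/target(x))
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples
```

Targeting a standard normal (log-density -x²/2), the sample mean settles near 0 after a few thousand steps.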
A method of approximating range size of small mammals
Stickel, L.F.
1965-01-01
In summary, trap success trends appear to provide a useful approximation to range size of easily trapped small mammals such as Peromyscus. The scale of measurement can be adjusted as desired. Further explorations of the usefulness of the plan should be made and modifications possibly developed before adoption.
Approximate Green's function methods for HZE transport in multilayered materials
NASA Technical Reports Server (NTRS)
Wilson, John W.; Badavi, Francis F.; Shinn, Judy L.; Costen, Robert C.
1993-01-01
A nonperturbative analytic solution of the high charge and energy (HZE) Green's function is used to implement a computer code for laboratory ion beam transport in multilayered materials. The code is established to operate on the Langley nuclear fragmentation model used in engineering applications. Computational procedures are established to generate linear energy transfer (LET) distributions for a specified ion beam and target for comparison with experimental measurements. The code was found to be highly efficient and compared well with the perturbation approximation.
On Using a Fast Multipole Method-based Poisson Solver in an Approximate Projection Method
Williams, Sarah A.; Almgren, Ann S.; Puckett, E. Gerry
2006-03-28
Approximate projection methods are useful computational tools for solving the equations of time-dependent incompressible flow. In this report we will present a new discretization of the approximate projection in an approximate projection method. The discretizations of divergence and gradient will be identical to those in existing approximate projection methodology using cell-centered values of pressure; however, we will replace inversion of the five-point cell-centered discretization of the Laplacian operator by a Fast Multipole Method-based Poisson Solver (FMM-PS). We will show that the FMM-PS solver can be an accurate and robust component of an approximate projection method for constant density, inviscid, incompressible flow problems. Computational examples exhibiting second-order accuracy for smooth problems will be shown. The FMM-PS solver will be found to be more robust than inversion of the standard five-point cell-centered discretization of the Laplacian for certain time-dependent problems that challenge the robustness of the approximate projection methodology.
Spline methods for approximating quantile functions and generating random samples
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Matthews, C. G.
1985-01-01
Two cubic spline formulations are presented for representing the quantile function (inverse cumulative distribution function) of a random sample of data. Both B-spline and rational spline approximations are compared with analytic representations of the quantile function. It is also shown how these representations can be used to generate random samples for use in simulation studies. Comparisons are made on samples generated from known distributions and a sample of experimental data. The spline representations are more accurate for multimodal and skewed samples and require much less time to generate samples than the analytic representation.
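As a simplified sketch of the same idea, one can represent the quantile function of a sample by piecewise-linear interpolation (rather than the cubic B-splines or rational splines of the paper) and draw new samples by inverse transform sampling:

```python
import random

def empirical_quantile(sorted_data, p):
    """Piecewise-linear quantile function of a sorted sample, 0 <= p <= 1."""
    n = len(sorted_data)
    if p <= 0.0:
        return sorted_data[0]
    if p >= 1.0:
        return sorted_data[-1]
    pos = p * (n - 1)          # fractional index into the sorted sample
    i = int(pos)
    frac = pos - i
    return sorted_data[i] * (1.0 - frac) + sorted_data[i + 1] * frac

def sample_from_quantile(sorted_data, size, seed=0):
    """Inverse transform sampling: feed uniforms through the quantile function."""
    rng = random.Random(seed)
    return [empirical_quantile(sorted_data, rng.random()) for _ in range(size)]
```

Generated samples necessarily stay within the range of the original data, which is one reason spline extrapolation is worth the extra machinery for tail behavior.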
Approximation of the transport equation by a weighted particle method
Mas-Gallic, S.; Poupaud, F.
1988-08-01
We study a particle method for numerically solving a model equation for neutron transport. We present the method and develop the theoretical convergence analysis. We prove the stability and the convergence of the method in L∞. Some computational test results are given.
Decentralized Bayesian search using approximate dynamic programming methods.
Zhao, Yijia; Patek, Stephen D; Beling, Peter A
2008-08-01
We consider decentralized Bayesian search problems that involve a team of multiple autonomous agents searching for targets on a network of search points operating under the following constraints: 1) interagent communication is limited; 2) the agents do not have the opportunity to agree in advance on how to resolve equivalent but incompatible strategies; and 3) each agent lacks the ability to control or predict with certainty the actions of the other agents. We formulate the multiagent search-path-planning problem as a decentralized optimal control problem and introduce approximate dynamic programming heuristics that can be implemented in a decentralized fashion. After establishing some analytical properties of the heuristics, we present computational results for a search problem involving two agents on a 5 x 5 grid.
Effective moduli of particulate solids: Lubrication approximation method
NASA Astrophysics Data System (ADS)
Qi, F.; Phan-Thien, N.; Fan, X. J.
To efficiently calculate the effective properties of a composite, which consists of rigid spherical inclusions not necessarily of the same sizes in a homogeneous isotropic elastic matrix, a method based on the lubrication forces between neighbouring particles has been developed. The method is used to evaluate the effective Lamé moduli and the Poisson's ratio of the composite, for the particles in random configurations and in cubic lattices. A good agreement with experimental results given by Smith (1975) for particles in random configurations is observed, and also the numerical results on the effective moduli agree well with the results given by Nunan & Keller (1984) for particles in cubic lattices.
An approximate method for determining of investment risk
NASA Astrophysics Data System (ADS)
Slavkova, Maria; Tzenova, Zlatina
2016-12-01
In this work a method for determining investment risk during all economic states is considered. It is connected to matrix games with two players. A definition of risk in a matrix game is introduced, and three properties are proven. An appropriate example is considered.
Approximate proximal point methods for convex programming problems
Eggermont, P.
1994-12-31
We study proximal point methods for the finite-dimensional convex programming problem: minimize f(x) subject to x ∈ C, where f : dom f ⊂ ℝⁿ → ℝ is a proper convex function and C ⊂ ℝⁿ is a closed convex set.
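A one-dimensional sketch of the exact proximal point iteration may help fix ideas. For f(x) = |x|, the proximal operator is soft-thresholding, and the iteration x_{k+1} = prox_{λf}(x_k) converges to the minimizer 0. This toy example is illustrative only, not the approximate scheme analyzed in the paper:

```python
def prox_abs(v: float, lam: float) -> float:
    """Proximal operator of f(x) = |x| with parameter lam (soft-thresholding):
    argmin_x |x| + (1 / (2 * lam)) * (x - v) ** 2."""
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

def proximal_point(x0: float, lam: float = 0.3, iters: int = 50) -> float:
    """Proximal point iteration x_{k+1} = prox_{lam * f}(x_k) for f = |.|."""
    x = x0
    for _ in range(iters):
        x = prox_abs(x, lam)
    return x
```

Each iteration moves the point a fixed step λ toward zero until it lands exactly on the minimizer, after which it stays there.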
SET: a pupil detection method using sinusoidal approximation
Javadi, Amir-Homayoun; Hakimi, Zahra; Barati, Morteza; Walsh, Vincent; Tcheang, Lili
2015-01-01
Mobile eye-tracking in external environments remains challenging, despite recent advances in eye-tracking software and hardware engineering. Many current methods fail to deal with the vast range of outdoor lighting conditions and the speed at which these can change. This confines experiments to artificial environments where conditions must be tightly controlled. Additionally, the emergence of low-cost eye tracking devices calls for the development of analysis tools that enable non-technical researchers to process the output of their images. We have developed a fast and accurate method (known as “SET”) that is suitable even for natural environments with uncontrolled, dynamic and even extreme lighting conditions. We compared the performance of SET with that of two open-source alternatives by processing two collections of eye images: images of natural outdoor scenes with extreme lighting variations (“Natural”); and images of less challenging indoor scenes (“CASIA-Iris-Thousand”). We show that SET excelled in outdoor conditions and was faster, without significant loss of accuracy, indoors. SET offers a low cost eye-tracking solution, delivering high performance even in challenging outdoor environments. It is offered through an open-source MATLAB toolkit as well as a dynamic-link library (“DLL”), which can be imported into many programming languages including C# and Visual Basic in Windows OS (www.eyegoeyetracker.co.uk). PMID:25914641
Computation of atmospheric cooling rates by exact and approximate methods
NASA Technical Reports Server (NTRS)
Ridgway, William L.; Harshvardhan; Arking, Albert
1991-01-01
Infrared fluxes and cooling rates for several standard model atmospheres, with and without water vapor, carbon dioxide, and ozone, have been calculated using a line-by-line method at 0.01 cm⁻¹ resolution. The sensitivity of the results to the vertical integration scheme and to the model for water vapor continuum absorption is shown. Comparison with similar calculations performed at NOAA/GFDL shows agreement to within 0.5 W/sq m in fluxes at various levels and 0.05 K/d in cooling rates. Comparison with a fast, parameterized radiation code used in climate models reveals a worst case difference, when all gases are included, of 3.7 W/sq m in flux; cooling rate differences are 0.1 K/d or less when integrated over a substantial layer with point differences as large as 0.3 K/d.
Lubrication approximation in completed double layer boundary element method
NASA Astrophysics Data System (ADS)
Nasseri, S.; Phan-Thien, N.; Fan, X.-J.
This paper reports on the results of the numerical simulation of the motion of solid spherical particles in shear Stokes flows. Using the completed double layer boundary element method (CDLBEM) via distributed computing under Parallel Virtual Machine (PVM), the effective viscosity of the suspension has been calculated for a finite number of spheres in a cubic array, or in a random configuration. In the simulation presented here, the short range interactions via lubrication forces are also taken into account, via the range completer in the formulation, whenever the gap between two neighbouring particles is closer than a critical gap. The results for particles in a simple cubic array agree with the results of Nunan and Keller (1984) and the Stokesian Dynamics of Brady et al. (1988). To evaluate the lubrication forces between particles in a random configuration, a critical gap of 0.2 of the particle's radius is suggested and the results are tested against the experimental data of Thomas (1965) and the empirical equation of Krieger-Dougherty (Krieger, 1972). Finally, the quasi-steady trajectories are obtained for a time-varying configuration of 125 particles.
Algebraic filter approach for fast approximation of nonlinear tomographic reconstruction methods
NASA Astrophysics Data System (ADS)
Plantagie, Linda; Batenburg, Kees Joost
2015-01-01
We present a computational approach for fast approximation of nonlinear tomographic reconstruction methods by filtered backprojection (FBP) methods. Algebraic reconstruction algorithms are the methods of choice in a wide range of tomographic applications, yet they require significant computation time, restricting their usefulness. We build upon recent work on the approximation of linear algebraic reconstruction methods and extend the approach to the approximation of nonlinear reconstruction methods which are common in practice. We demonstrate that if a blueprint image is available that is sufficiently similar to the scanned object, our approach can compute reconstructions that approximate iterative nonlinear methods, yet have the same speed as FBP.
Convergence of hausdorff approximation methods for the Edgeworth-Pareto hull of a compact set
NASA Astrophysics Data System (ADS)
Efremov, R. V.
2015-11-01
The Hausdorff methods comprise an important class of polyhedral approximation methods for convex compact bodies, since they have an optimal convergence rate and possess other useful properties. The concept of Hausdorff methods is extended to a problem arising in multicriteria optimization, namely, to the polyhedral approximation of the Edgeworth-Pareto hull (EPH) of a convex compact set. It is shown that the sequences of polyhedral sets generated by Hausdorff methods converge to the EPH to be approximated. It is shown that the Estimate Refinement method, which is most frequently used to approximate the EPH of convex compact sets, is a Hausdorff method and, hence, generates sequences of sets converging to the EPH.
NASA Technical Reports Server (NTRS)
Funaro, D.; Gottlieb, D.
1988-01-01
A new method to impose boundary conditions for pseudospectral approximations to hyperbolic equations is suggested. This method involves the collocation of the equation at the boundary nodes as well as satisfying boundary conditions. Stability and convergence results are proven for the Chebyshev approximation of linear scalar hyperbolic equations. The eigenvalues of this method applied to parabolic equations are shown to be real and negative.
NASA Astrophysics Data System (ADS)
Pospelov, A. I.
2016-08-01
Adaptive methods for the polyhedral approximation of the convex Edgeworth-Pareto hull in multiobjective monotone integer optimization problems are proposed and studied. For these methods, theoretical convergence rate estimates with respect to the number of vertices are obtained. The estimates coincide in order with those for filling and augmentation H-methods intended for the approximation of nonsmooth convex compact bodies.
On the interpretation of large gravimagnetic data by the modified method of S-approximations
NASA Astrophysics Data System (ADS)
Stepanova, I. E.; Raevskiy, D. N.; Shchepetilov, A. V.
2017-01-01
The modified method of S-approximations applied to processing large and superlarge gravity and magnetic prospecting data is considered. The modified S-approximations of the elements of the gravitational field are obtained using efficient block methods for solving the systems of linear algebraic equations (SLAEs) to which the geophysically meaningful problem is reduced. The results of a mathematical experiment are presented.
The complex variable boundary element method: Applications in determining approximative boundaries
Hromadka, T.V.
1984-01-01
The complex variable boundary element method (CVBEM) is used to determine approximation functions for boundary value problems of the Laplace equation such as occurs in potential theory. By determining an approximative boundary upon which the CVBEM approximator matches the desired constant (level curves) boundary conditions, the CVBEM is found to provide the exact solution throughout the interior of the transformed problem domain. Thus, the acceptability of the CVBEM approximation is determined by the closeness-of-fit of the approximative boundary to the study problem boundary.
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Hornby, Gregory; Ishihara, Abe
2013-01-01
This paper describes two methods of trajectory optimization to obtain an optimal minimum-fuel-to-climb trajectory for an aircraft. The first method is based on the adjoint method, and the second is a direct trajectory optimization method using a Chebyshev polynomial approximation and a cubic spline approximation. The approximate optimal trajectory is compared with the adjoint-based optimal trajectory, which is considered the true optimal solution of the trajectory optimization problem. The adjoint-based optimization problem leads to a singular optimal control solution which results in a bang-singular-bang optimal control.
Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method
NASA Technical Reports Server (NTRS)
Kowal, Michael T.
1997-01-01
The development of new products is dependent on product designs that incorporate high levels of reliability along with a design that meets predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability of failure sensitivities determined from probabilistic reliability methods as well as manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.
Comparison of Finite Differences and WKB approximation Methods for PT symmetric complex potentials
NASA Astrophysics Data System (ADS)
Naceri, Leila; Chekkal, Meziane; Hammou, Amine B.
2016-10-01
We consider the one-dimensional Schrödinger eigenvalue problem on a finite domain (Sturm-Liouville problem) for several PT-symmetric complex potentials, studied by Bender and Jones using the WKB approximation method. We compare the solutions for these PT-symmetric complex potentials using both the finite difference method (FDM) and the WKB approximation method, and show quantitative and qualitative agreement between the two methods.
The Subspace Projected Approximate Matrix (SPAM) modification of the Davidson method
Shepard, R.; Tilson, J.L.; Wagner, A.F.; Minkoff, M.
1997-12-31
A modification of the Davidson subspace expansion method, a Ritz approach, is proposed in which the expansion vectors are computed from a "cheap" approximating eigenvalue equation. This approximate eigenvalue equation is assembled using projection operators constructed from the subspace expansion vectors. The method may be implemented using an inner/outer iteration scheme, or it may be implemented by modifying the usual Davidson algorithm in such a way that exact and approximate matrix-vector product computations are interspersed. A multi-level algorithm is proposed in which several levels of approximate matrices are used.
Zhang, Zhenyue; Zha, Hongyuan; Simon, Horst
2006-07-31
In this paper, we developed numerical algorithms for computing sparse low-rank approximations of matrices, and we also provided a detailed error analysis of the proposed algorithms together with some numerical experiments. The low-rank approximations are constructed in a certain factored form with the degree of sparsity of the factors controlled by some user-specified parameters. In this paper, we cast the sparse low-rank approximation problem in the framework of penalized optimization problems. We discuss various approximation schemes for the penalized optimization problem which are more amenable to numerical computations. We also include some analysis to show the relations between the original optimization problem and the reduced one. We then develop a globally convergent discrete Newton-like iterative method for solving the approximate penalized optimization problems. We also compare the reconstruction errors of the sparse low-rank approximations computed by our new methods with those obtained using the methods in the earlier paper and several other existing methods for computing sparse low-rank approximations. Numerical examples show that the penalized methods are more robust and produce approximations with factors which have fewer columns and are sparser.
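A minimal illustration of the two ingredients, low-rank approximation and sparsification of the factors, can be given with power iteration and hard thresholding. This is a toy sketch, not the penalized Newton-like method of the paper:

```python
def rank1_approx(A, iters=100):
    """Dominant rank-1 approximation of matrix A (list of row lists)
    by alternating power iteration; returns (sigma, u, v) with A ~ sigma*u*v^T."""
    m, n = len(A), len(A[0])
    v = [1.0] * n
    u = [0.0] * m
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        nu = sum(x * x for x in u) ** 0.5
        u = [x / nu for x in u]
        v = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]
        nv = sum(x * x for x in v) ** 0.5
        v = [x / nv for x in v]
    sigma = sum(u[i] * A[i][j] * v[j] for i in range(m) for j in range(n))
    return sigma, u, v

def sparsify(vec, tau):
    """Hard-threshold small entries to promote sparsity in a factor."""
    return [x if abs(x) > tau else 0.0 for x in vec]
```

For a diagonal matrix the iteration recovers the leading singular value exactly; in the factored form of the paper, the thresholding step would instead be driven by the user-specified sparsity parameters.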
The Subspace Projected Approximate Matrix (SPAM) Modification of the Davidson Method
NASA Astrophysics Data System (ADS)
Shepard, Ron; Wagner, Albert F.; Tilson, Jeffrey L.; Minkoff, Michael
2001-09-01
A modification of the iterative matrix diagonalization method of Davidson is presented that is applicable to the symmetric eigenvalue problem. This method is based on subspace projections of a sequence of one or more approximate matrices. The purpose of these approximate matrices is to improve the efficiency of the solution of the desired eigenpairs by reducing the number of matrix-vector products that must be computed with the exact matrix. Several applications are presented. These are chosen to show the range of applicability of the method, the convergence behavior for a wide range of matrix types, and also the wide range of approaches that may be employed to generate approximate matrices.
NASA Technical Reports Server (NTRS)
La Budde, R. A.
1972-01-01
Sampling techniques have been used previously to evaluate Jacobian determinants that occur in classical mechanical descriptions of molecular scattering. These determinants also occur in the quasiclassical approximation. A new technique is described which can be used to evaluate Jacobian determinants which occur in either description. This method is expected to be valuable in the study of reactive scattering using the quasiclassical approximation.
NASA Astrophysics Data System (ADS)
Akhir, M. K. M.; Sulaiman, J.
2017-09-01
Weighted iterative methods particularly Accelerated Over Relaxation (AOR) method are used to solve linear system generated from triangle finite element approximation equation in solving 2D Helmholtz equation. The development of the AOR iterative method were also presented. Numerical experiments have been carried out and the results obtained confirm the superiority of the proposed iterative method
NASA Technical Reports Server (NTRS)
Hamilton, H. H., II
1982-01-01
An approximate method for calculating heating rates at general three dimensional stagnation points is presented. The application of the method for making stagnation point heating calculations during atmospheric entry is described. Comparisons with results from boundary layer calculations indicate that the method should provide an accurate method for engineering type design and analysis applications.
Extension of the weak-line approximation and application to correlated-k methods
Conley, A.J.; Collins, W.D.
2011-03-15
Global climate models require accurate and rapid computation of the radiative transfer through the atmosphere, and correlated-k methods are often used for this purpose. One of the approximations used in correlated-k models is the weak-line approximation. We introduce an approximation T_g which reduces to the weak-line limit when optical depths are small and captures the deviation from that limit as the extinction grows. This approximation is constructed by matching the first two moments of a gamma distribution to the k-distribution of the transmission. We compare the errors of the weak-line approximation and of T_g in the context of a water vapor spectrum. The extension T_g is more accurate and converges more rapidly than the weak-line approximation.
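The two-moment gamma fit described above has a closed form: if the absorption coefficient k follows a gamma distribution with shape α and scale θ matched to the mean and variance of the k-distribution, the band-mean transmission is T_g(u) = (1 + θu)^(−α). A hypothetical discrete-k illustration:

```python
import math

def transmission_exact(ks, weights, u):
    """Band-mean transmission E[exp(-k u)] over a discrete k-distribution."""
    return sum(w * math.exp(-k * u) for k, w in zip(ks, weights))

def transmission_weak_line(ks, weights, u):
    """Weak-line limit: T(u) ~ exp(-<k> u)."""
    mean = sum(w * k for k, w in zip(ks, weights))
    return math.exp(-mean * u)

def transmission_gamma(ks, weights, u):
    """Two-moment gamma fit: T_g(u) = (1 + theta*u)**(-alpha), with the
    gamma shape alpha and scale theta matched to the mean and variance of k."""
    mean = sum(w * k for k, w in zip(ks, weights))
    var = sum(w * (k - mean) ** 2 for k, w in zip(ks, weights))
    theta = var / mean
    alpha = mean * mean / var
    return (1.0 + theta * u) ** (-alpha)
```

For a broad k-distribution, T_g tracks the exact band mean far better than the weak-line limit as the path u grows, while both reduce to 1 − ⟨k⟩u for small u.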
Test particle propagation in magnetostatic turbulence. 2: The local approximation method
NASA Technical Reports Server (NTRS)
Klimas, A. J.; Sandri, G.; Scudder, J. D.; Howell, D. R.
1976-01-01
An approximation method for statistical mechanics is presented and applied to a class of problems which contains a test particle propagation problem. All of the available basic equations used in statistical mechanics are cast in the form of a single equation which is integrodifferential in time and which is then used as the starting point for the construction of the local approximation method. Simplification of the integrodifferential equation is achieved through approximation to the Laplace transform of its kernel. The approximation is valid near the origin in the Laplace space and is based on the assumption of small Laplace variable. No other small parameter is necessary for the construction of this approximation method. The n'th level of approximation is constructed formally, and the first five levels of approximation are calculated explicitly. It is shown that each level of approximation is governed by an inhomogeneous partial differential equation in time with time independent operator coefficients. The order in time of these partial differential equations is found to increase as n does. At n = 0 the most local first order partial differential equation which governs the Markovian limit is regained.
Efficiency of the estimate refinement method for polyhedral approximation of multidimensional balls
NASA Astrophysics Data System (ADS)
Kamenev, G. K.
2016-05-01
The estimate refinement method for the polyhedral approximation of convex compact bodies is analyzed. When applied to convex bodies with a smooth boundary, this method is known to generate polytopes with an optimal order of growth of the number of vertices and facets depending on the approximation error. In previous studies, for the approximation of a multidimensional ball, the convergence rates of the method were estimated in terms of the number of faces of all dimensions and the cardinality of the facial structure (the norm of the f-vector) of the constructed polytope was shown to have an optimal rate of growth. In this paper, the asymptotic convergence rate of the method with respect to faces of all dimensions is compared with the convergence rate of best approximation polytopes. Explicit expressions are obtained for the asymptotic efficiency, including the case of low dimensions. Theoretical estimates are compared with numerical results.
Evaluation of the successive approximations method for acoustic streaming numerical simulations.
Catarino, S O; Minas, G; Miranda, J M
2016-05-01
This work evaluates the successive approximations method commonly used to predict acoustic streaming by comparing it with a direct method. The successive approximations method solves both the acoustic wave propagation and acoustic streaming by solving the first and second order Navier-Stokes equations, ignoring the first order convective effects. This method was applied to acoustic streaming in a 2D domain and the results were compared with results from the direct simulation of the Navier-Stokes equations. The velocity results showed qualitative agreement between both methods, which indicates that the successive approximations method can describe the formation of flows with recirculation. However, a large quantitative deviation was observed between the two methods. Further analysis showed that the successive approximations method solution is sensitive to the initial flow field. The direct method showed that the instantaneous flow field changes significantly due to reflections and wave interference. It was also found that convective effects contribute significantly to the wave propagation pattern. These effects must be taken into account when solving acoustic streaming problems, since they affect the global flow. By adequately calculating the initial condition for the first order step, the acoustic streaming prediction of the successive approximations method can be improved significantly.
Approximation and inference methods for stochastic biochemical kinetics—a tutorial review
NASA Astrophysics Data System (ADS)
Schnoerr, David; Sanguinetti, Guido; Grima, Ramon
2017-03-01
Stochastic fluctuations of molecule numbers are ubiquitous in biological systems. Important examples include gene expression and enzymatic processes in living cells. Such systems are typically modelled as chemical reaction networks whose dynamics are governed by the chemical master equation. Despite its simple structure, no analytic solutions to the chemical master equation are known for most systems. Moreover, stochastic simulations are computationally expensive, making systematic analysis and statistical inference a challenging task. Consequently, significant effort has been spent in recent decades on the development of efficient approximation and inference methods. This article gives an introduction to basic modelling concepts as well as an overview of state-of-the-art methods. First, we motivate and introduce deterministic and stochastic methods for modelling chemical networks, and give an overview of simulation and exact solution methods. Next, we discuss several approximation methods, including the chemical Langevin equation, the system size expansion, moment closure approximations, time-scale separation approximations and hybrid methods. We discuss their various properties and review recent advances and remaining challenges for these methods. We present a comparison of several of these methods by means of a numerical case study and highlight some of their respective advantages and disadvantages. Finally, we discuss the problem of inference from experimental data in the Bayesian framework and review recent methods developed in the literature. In summary, this review gives a self-contained introduction to modelling, approximations and inference methods for stochastic chemical kinetics.
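The chemical master equation dynamics discussed in the review are most often simulated with Gillespie's stochastic simulation algorithm. Below is a self-contained sketch for the simplest birth-death network (production at rate k_birth, degradation at rate k_death per molecule, stationary mean k_birth/k_death):

```python
import math
import random

def gillespie_birth_death(k_birth, k_death, x0=0, t_end=500.0, seed=1):
    """Gillespie SSA for 0 -> X (rate k_birth) and X -> 0 (rate k_death * X).
    Returns the time-averaged copy number over [0, t_end]."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    acc_time, acc_weighted = 0.0, 0.0
    while t < t_end:
        a1 = k_birth            # propensity of the birth reaction
        a2 = k_death * x        # propensity of the death reaction
        a0 = a1 + a2
        tau = -math.log(rng.random()) / a0   # exponential waiting time
        dt = min(tau, t_end - t)
        acc_weighted += x * dt               # accumulate time-weighted state
        acc_time += dt
        t += tau
        if rng.random() * a0 < a1:           # choose reaction proportionally
            x += 1
        else:
            x -= 1
    return acc_weighted / acc_time
```

For k_birth = 10 and k_death = 1 the stationary distribution is Poisson with mean 10, and the time average over a long trajectory lands close to that value.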
NASA Astrophysics Data System (ADS)
Lanti, E.; Dominski, J.; Brunner, S.; McMillan, B. F.; Villard, L.
2016-11-01
This work aims at completing the implementation of a solver for the quasi-neutrality equation using a Padé approximation in the global gyrokinetic code ORB5. Initially [Dominski, Ph.D. thesis, 2016], the Padé approximation was only implemented for the kinetic electron model. To enable runs with adiabatic or hybrid electron models while using a Padé approximation to the polarization response, the adiabatic response term of the quasi-neutrality equation must be consistently modified. It is shown that the Padé solver is in good agreement with the arbitrary wavelength solver of ORB5 [Dominski, Ph.D. thesis, 2016]. To perform this verification, the linear dispersion relation of an ITG-TEM transition is computed for both solvers and the linear growth rates and frequencies are compared.
NASA Astrophysics Data System (ADS)
Kamenev, G. K.
2015-10-01
The estimate refinement method for the polyhedral approximation of convex compact bodies is considered. In the approximation of convex bodies with a smooth boundary, this method is known to generate polytopes with an optimal order of growth of the number of vertices and facets depending on the approximation error. The properties of the method are examined as applied to the polyhedral approximation of a multidimensional ball. As vertices of the approximating polytopes, the method is shown to generate a sequence of deep holes on the surface of the ball. As a result, previously obtained combinatorial properties of convex hulls of such sequences, namely, the convergence rates with respect to the number of faces of all dimensions and the optimal growth of the cardinality of the facial structure (of the norm of the f-vector), can be extended to these polytopes. The combinatorial properties of the approximating polytopes generated by the estimate refinement method are compared with the properties of polytopes whose facial structure has extremal cardinality. It is shown that the polytopes generated by the method are similar to stacked polytopes, on which the minimum number of faces of all dimensions is attained for a given number of vertices.
Gait Generation for a Small Biped Robot using Approximated Optimization Method
NASA Astrophysics Data System (ADS)
Nguyen, Tinh; Tao, Linh; Hasegawa, Hiroshi
2016-11-01
This paper proposes a novel approach to gait pattern generation for a small biped robot to enhance its walking behavior, the aim being to make the robot's gait more natural and more stable during walking. In this study, we present an approximated optimization method that applies the Differential Evolution (DE) algorithm to an objective function approximated by an Artificial Neural Network (ANN). In addition, we also present a new humanlike foot structure with toes for the biped robot. To evaluate the method's performance, the robot was simulated with the multi-body dynamics simulation software Adams (MSC Software, USA). As a result, we confirmed that the biped robot with the proposed foot structure can walk naturally. The approximated optimization method based on the DE algorithm and an ANN is an effective approach for generating a gait pattern for the locomotion of the biped robot, and it is simpler than conventional methods based on the Zero Moment Point (ZMP) criterion.
Viscosity approximation methods for a finite family of nonexpansive mappings in Banach spaces
NASA Astrophysics Data System (ADS)
Chang, Shih-Sen
2006-11-01
By using viscosity approximation methods for a finite family of nonexpansive mappings in Banach spaces, some sufficient and necessary conditions for the iterative sequence to converge to a common fixed point are obtained. The results presented in the paper extend and improve some recent results in [H.K. Xu, Viscosity approximation methods for nonexpansive mappings, J. Math. Anal. Appl. 298 (2004) 279-291; H.K. Xu, Remark on an iterative method for nonexpansive mappings, Comm. Appl. Nonlinear Anal. 10 (2003) 67-75; H.H. Bauschke, The approximation of fixed points of compositions of nonexpansive mappings in Banach spaces, J. Math. Anal. Appl. 202 (1996) 150-159; B. Halpern, Fixed points of nonexpansive maps, Bull. Amer. Math. Soc. 73 (1967) 957-961; J.S. Jung, Iterative approaches to common fixed points of nonexpansive mappings in Banach spaces, J. Math. Anal. Appl. 302 (2005) 509-520; P.L. Lions, Approximation de points fixes de contractions, C. R. Acad. Sci. Paris Ser. A 284 (1977) 1357-1359; A. Moudafi, Viscosity approximation methods for fixed point problems, J. Math. Anal. Appl. 241 (2000) 46-55; S. Reich, Strong convergence theorems for resolvents of accretive operators in Banach spaces, J. Math. Anal. Appl. 75 (1980) 287-292; R. Wittmann, Approximation of fixed points of nonexpansive mappings, Arch. Math. 58 (1992) 486-491].
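As a toy illustration of the viscosity approximation scheme x_{n+1} = a_n f(x_n) + (1 - a_n) T(x_n) studied in the works cited above, here is a sketch on the real line; the particular maps, step sizes, and names are illustrative choices, not taken from the paper.

```python
def viscosity_iteration(T, f, x0, steps=2000):
    """Viscosity approximation: x_{n+1} = a_n f(x_n) + (1 - a_n) T(x_n),
    where f is a contraction and the step sizes satisfy a_n -> 0
    (here a_n = 1/(n+1))."""
    x = x0
    for n in range(1, steps + 1):
        a = 1.0 / (n + 1)
        x = a * f(x) + (1.0 - a) * T(x)
    return x

# Nonexpansive map T(x) = x/2 + 1, with fixed point 2; contraction f == 0.
x_star = viscosity_iteration(T=lambda x: 0.5 * x + 1.0,
                             f=lambda x: 0.0,
                             x0=10.0)
```

With a constant f the scheme reduces to a Halpern-type iteration; the iterate x_star drifts toward the fixed point of T as the viscosity weight a_n vanishes.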
İbiş, Birol
2014-01-01
This paper aims to obtain the approximate solution of the time-fractional advection-dispersion equation (FADE) involving Jumarie's modification of the Riemann-Liouville derivative by the fractional variational iteration method (FVIM). The FVIM provides an analytical approximate solution in the form of a convergent series. Some examples are given, and the results indicate that the FVIM is highly accurate, efficient, and convenient for solving time-fractional advection-dispersion equations. PMID:24578662
ERIC Educational Resources Information Center
Moses, Tim
2013-01-01
The purpose of this study was to evaluate the use of adjoined and piecewise linear approximations (APLAs) of raw equipercentile equating functions as a postsmoothing equating method. APLAs are less familiar than other postsmoothing equating methods (i.e., cubic splines), but their use has been described in historical equating practices of…
Numerical solution of 2D-vector tomography problem using the method of approximate inverse
Svetov, Ivan; Maltseva, Svetlana; Polyakova, Anna
2016-08-10
We propose a numerical solution of the reconstruction problem for a two-dimensional vector field in a unit disk from the known values of the longitudinal and transverse ray transforms. The algorithm is based on the method of approximate inverse. Numerical simulations confirm that the proposed method yields good reconstructions of vector fields.
Approximation methods for control of structural acoustics models with piezoceramic actuators
NASA Technical Reports Server (NTRS)
Banks, H. T.; Fang, W.; Silcox, R. J.; Smith, R. C.
1993-01-01
The active control of acoustic pressure in a 2-D cavity with a flexible boundary (a beam) is considered. Specifically, this control is implemented via piezoceramic patches on the beam which produce pure bending moments. The incorporation of the feedback control in this manner leads to a system with an unbounded input term. Approximation methods in the context of a linear quadratic regulator (LQR) state-space control formulation are discussed, and numerical results demonstrating the effectiveness of this approach in computing feedback controls for noise reduction are presented.
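The LQR formulation referred to above can be illustrated in its simplest, scalar form; the infinite-dimensional beam/cavity system of the paper is far beyond this toy sketch, and all names and values here are illustrative.

```python
import math

def lqr_scalar(a, b, q, r):
    """Scalar continuous-time LQR for x' = a x + b u minimizing
    the cost integral of (q x^2 + r u^2) dt.
    Solves the scalar algebraic Riccati equation
        2 a p - b^2 p^2 / r + q = 0
    for the positive root p, giving the feedback gain k with u = -k x."""
    p = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
    k = b * p / r
    return k, p

# Unstable open-loop plant (a > 0) stabilized by the LQR feedback.
k, p = lqr_scalar(a=1.0, b=1.0, q=1.0, r=1.0)
closed_loop_pole = 1.0 - 1.0 * k   # a - b k = -sqrt(a^2 + b^2 q / r)
```

The same state-space structure underlies the approximation methods in the abstract: the PDE system is projected onto a finite-dimensional basis, and a matrix Riccati equation replaces the scalar one here.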
Quantum Approximate Methods for the Atomistic Modeling of Multicomponent Alloys. Chapter 7
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Garces, Jorge; Mosca, Hugo; Gargano, Pablo; Noebe, Ronald D.; Abel, Phillip
2007-01-01
This chapter describes the role of quantum approximate methods in the understanding of complex multicomponent alloys at the atomic level. The need to accelerate materials design programs based on economical and efficient modeling techniques provides the framework for the introduction of approximations and simplifications in otherwise rigorous theoretical schemes. As a promising example of the role that such approximate methods might have in the development of complex systems, the BFS method for alloys is presented and applied to Ru-rich Ni-base superalloys and also to the NiAl(Ti,Cu) system, highlighting the benefits that can be obtained from introducing simple modeling techniques to the investigation of such complex systems.
Magnetic interface forward and inversion method based on Padé approximation
NASA Astrophysics Data System (ADS)
Zhang, Chong; Huang, Da-Nian; Zhang, Kai; Pu, Yi-Tao; Yu, Ping
2016-12-01
The magnetic interface forward and inversion method is conventionally realized using a Taylor series expansion to linearize the Fourier transform of the exponential function. With a large expansion step and an unbounded neighborhood, the Taylor series does not converge; therefore, this paper presents a magnetic interface forward and inversion method based on Padé approximation instead of the Taylor series expansion. Compared with the Taylor series, the Padé expansion converges more stably and approximates the function more accurately. Model tests show the validity of the Padé-based magnetic forward modeling and inversion proposed in the paper, and when the inversion method is applied to measured data from the Matagami area in Canada, a stable and reasonable distribution of the underground interface is obtained.
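The advantage of a Padé approximant over a truncated Taylor series for the exponential function can be shown with a small numerical sketch; this is a generic [2/2] approximant of exp, not the paper's actual interface formulation, and the test point is an illustrative choice.

```python
from math import exp

def taylor_exp(x, n):
    # Degree-n Taylor partial sum of exp(x) about 0.
    term, s = 1.0, 1.0
    for k in range(1, n + 1):
        term *= x / k
        s += term
    return s

def pade22_exp(x):
    # [2/2] Padé approximant of exp(x):
    #   (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12),
    # accurate to the same order as the degree-4 Taylor polynomial
    # built from the same series coefficients.
    num = 1.0 + x / 2.0 + x * x / 12.0
    den = 1.0 - x / 2.0 + x * x / 12.0
    return num / den

x = -3.0                       # a decaying exponential, where Taylor struggles
err_taylor = abs(taylor_exp(x, 4) - exp(x))
err_pade = abs(pade22_exp(x) - exp(x))
```

At x = -3 the truncated Taylor series overshoots badly while the rational Padé form stays close to the true value; this stability for large arguments is the property the paper exploits.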
NASA Technical Reports Server (NTRS)
Connor, J. N. L.; Curtis, P. R.; Farrelly, D.
1984-01-01
Methods that can be used in the numerical implementation of the uniform swallowtail approximation are described. An explicit expression for that approximation is presented to the lowest order, showing that there are three problems which must be overcome in practice before the approximation can be applied to any given problem. It is shown that a recently developed quadrature method can be used for the accurate numerical evaluation of the swallowtail canonical integral and its partial derivatives. Isometric plots of these are presented to illustrate some of their properties. The problem of obtaining the arguments of the swallowtail integral from an analytical function of its argument is considered, describing two methods of solving this problem. The asymptotic evaluation of the butterfly canonical integral is addressed.
Simulation of mass transfer during osmotic dehydration of apple: a power law approximation method
NASA Astrophysics Data System (ADS)
Abbasi Souraki, B.; Tondro, H.; Ghavami, M.
2014-10-01
In this study, unsteady one-dimensional mass transfer during osmotic dehydration of apple was modeled using an approximate mathematical model. The model was developed based on a power-law profile approximation for the moisture and solute concentrations in the spatial direction. The proposed model was validated against experimental water loss and solute gain data obtained from osmotic dehydration of infinite-slab and cylindrical apple samples in sucrose solutions (30, 40 and 50 % w/w) at different temperatures (30, 40 and 50 °C). The model's predictions were also compared with those of the exact analytical solution and of a parabolic approximation model. The mean relative errors with respect to the experimental data were estimated at 4.5-8.1 %, 6.5-10.2 %, and 15.0-19.1 % for the exact analytical, power-law and parabolic approximation methods, respectively. Although the parabolic approximation leads to simpler relations, the power-law approximation method yields more accurate average concentrations over the whole domain of dehydration time. Considering both the simplicity and the precision of the mathematical models, the power-law model for short dehydration times and the simplified exact analytical model for long dehydration times could be used to describe the variation of the average water loss and solute gain over the whole domain of dimensionless time.
Laplace transform homotopy perturbation method for the approximation of variational problems.
Filobello-Nino, U; Vazquez-Leal, H; Rashidi, M M; Sedighi, H M; Perez-Sesma, A; Sandoval-Hernandez, M; Sarmiento-Reyes, A; Contreras-Hernandez, A D; Pereyra-Diaz, D; Hoyos-Reyes, C; Jimenez-Fernandez, V M; Huerta-Chua, J; Castro-Gonzalez, F; Laguna-Camacho, J R
2016-01-01
This article proposes the application of the Laplace Transform-Homotopy Perturbation Method, and some of its modifications, to find analytical approximate solutions for linear and nonlinear differential equations arising from variational problems. As case studies we solve four ordinary differential equations and show that the proposed solutions have good accuracy; in one case we even obtain an exact solution. We also show that the square residual errors of the approximate solutions lie in the interval [0.001918936920, 0.06334882582], which confirms the accuracy of the proposed methods, taking into account the complexity and difficulty of variational problems.
NASA Astrophysics Data System (ADS)
Abedini, Mohammad; Nojoumian, Mohammad Ali; Salarieh, Hassan; Meghdari, Ali
2015-08-01
In this paper, model reference control of a fractional-order system is discussed. In order to control the fractional-order plant, discrete-time approximation methods are applied. The plant and the reference model are discretized using the Grünwald-Letnikov definition of the fractional-order derivative together with the "Short Memory Principle". The unknown parameters of the fractional-order system appear in the discrete-time approximate model as combinations of the parameters of the main system. Discrete-time MRAC via RLS identification is modified to estimate the parameters and control the fractional-order plant. Numerical results show the effectiveness of the proposed model reference adaptive control method.
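The Grünwald-Letnikov discretization with the Short Memory Principle can be sketched as follows; this is a generic illustration of the definition, with function and variable names that are not taken from the paper.

```python
def gl_weights(alpha, n):
    # Grünwald-Letnikov weights w_j = (-1)^j C(alpha, j), computed by the
    # standard recurrence w_j = w_{j-1} * (1 - (alpha + 1)/j).
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def gl_derivative(samples, alpha, h, memory=None):
    """Approximate the fractional derivative D^alpha f at the last sample:
        D^alpha f(t) ~ h^(-alpha) * sum_j w_j f(t - j h).
    `memory` truncates the history sum (the Short Memory Principle);
    None keeps the full history."""
    n = len(samples) - 1
    L = n if memory is None else min(n, memory)
    w = gl_weights(alpha, L)
    return sum(w[j] * samples[n - j] for j in range(L + 1)) / h ** alpha

# Sanity check: for alpha = 1 the weights collapse to (1, -1, 0, 0, ...),
# i.e. a first-order backward difference.
h = 0.01
samples = [(k * h) ** 2 for k in range(101)]        # f(t) = t^2 on [0, 1]
d1 = gl_derivative(samples, alpha=1.0, h=h)         # ~ f'(1) = 2
d1_short = gl_derivative(samples, alpha=1.0, h=h, memory=5)
```

Truncating the history to a fixed memory length is what makes the discretized plant model finite-dimensional, so that its unknown parameters can be estimated by RLS as in the abstract.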
An Extension of the Krieger-Li-Iafrate Approximation to the Optimized-Effective-Potential Method
Wilson, B.G.
1999-11-11
The Krieger-Li-Iafrate approximation can be expressed as the zeroth-order result of an unstable iterative method for solving the integral-equation form of the optimized-effective-potential method. By preconditioning the iterate, a first-order correction can be obtained which recovers the bulk of the quantal oscillations missing in the zeroth-order approximation. A comparison of calculated total energies is given with Krieger-Li-Iafrate, Local Density Functional, and Hyper-Hartree-Fock results for non-relativistic atoms and ions.
NASA Technical Reports Server (NTRS)
Mier Muth, A. M.; Willsky, A. S.
1978-01-01
In this paper we describe a method for approximating a waveform by a spline. The method is quite efficient, as the data are processed sequentially. The basis of the approach is to view the approximation problem as a question of estimation of a polynomial in noise, with the possibility of abrupt changes in the highest derivative. This allows us to bring several powerful statistical signal processing tools into play. We also present some initial results on the application of our technique to the processing of electrocardiograms, where the knot locations themselves may be some of the most important pieces of diagnostic information.
Approximation methods for control of acoustic/structure models with piezoceramic actuators
NASA Technical Reports Server (NTRS)
Banks, H. T.; Fang, W.; Silcox, R. J.; Smith, R. C.
1991-01-01
The active control of acoustic pressure in a 2-D cavity with a flexible boundary (a beam) is considered. Specifically, this control is implemented via piezoceramic patches on the beam which produce pure bending moments. The incorporation of the feedback control in this manner leads to a system with an unbounded input term. Approximation methods in the context of a linear quadratic regulator (LQR) state-space control formulation are discussed, and numerical results demonstrating the effectiveness of this approach in computing feedback controls for noise reduction are presented.
NASA Astrophysics Data System (ADS)
Lai, Xian-Jing; Cai, Xiao-Ou
2010-09-01
In this paper, the decomposition method is implemented for solving the bidirectional Sawada-Kotera (bSK) equation with two kinds of initial conditions. As a result, the Adomian polynomials have been calculated, and approximate and exact solutions of the bSK equation, such as solitary-wave solutions, doubly-periodic solutions and two-soliton solutions, are obtained by means of Maple. Moreover, we compare the approximate solution with the exact solution in a table and analyze the absolute and relative errors. The results reported in this article provide further evidence of the usefulness of the Adomian decomposition method for obtaining solutions of nonlinear problems.
Global collocation methods for approximation and the solution of partial differential equations
NASA Technical Reports Server (NTRS)
Solomonoff, A.; Turkel, E.
1986-01-01
Polynomial interpolation methods are applied both to the approximation of functions and to the numerical solutions of hyperbolic and elliptic partial differential equations. The derivative matrix for a general sequence of the collocation points is constructed. The approximate derivative is then found by a matrix times vector multiply. The effects of several factors on the performance of these methods including the effect of different collocation points are then explored. The resolution of the schemes for both smooth functions and functions with steep gradients or discontinuities in some derivative are also studied. The accuracy when the gradients occur both near the center of the region and in the vicinity of the boundary is investigated. The importance of the aliasing limit on the resolution of the approximation is investigated in detail. Also examined is the effect of boundary treatment on the stability and accuracy of the scheme.
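A minimal sketch of the collocation derivative matrix described above, built at Chebyshev points following the standard construction (the off-diagonal formula plus the negative-row-sum trick for the diagonal); this is an illustrative implementation, not code from the paper.

```python
import math

def cheb_diff_matrix(n):
    """Chebyshev collocation points x_j = cos(j pi / n), j = 0..n, and the
    (n+1)x(n+1) differentiation matrix D, so that D applied to samples of
    f at the points x approximates f' at those points."""
    if n == 0:
        return [[0.0]], [1.0]
    x = [math.cos(math.pi * j / n) for j in range(n + 1)]
    c = [2.0] + [1.0] * (n - 1) + [2.0]        # endpoint weights
    D = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(n + 1):
            if i != j:
                D[i][j] = (c[i] / c[j]) * (-1) ** (i + j) / (x[i] - x[j])
        # Diagonal via negative row sum: D must annihilate constants.
        D[i][i] = -sum(D[i][j] for j in range(n + 1) if j != i)
    return D, x

def matvec(D, f_vals):
    # The approximate derivative is a matrix-vector multiply, as in the text.
    return [sum(Dij * fj for Dij, fj in zip(row, f_vals)) for row in D]

D, x = cheb_diff_matrix(8)
f = [xi ** 3 for xi in x]          # f(x) = x^3
df = matvec(D, f)                  # matches 3 x^2 to near machine precision
```

Because the collocation derivative is exact for polynomials up to the degree of the interpolant, a cubic is differentiated essentially to round-off here; for smooth non-polynomial functions the error decays spectrally as n grows, which is the resolution behavior the abstract studies.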
Rational approximations from power series of vector-valued meromorphic functions
NASA Technical Reports Server (NTRS)
Sidi, Avram
1992-01-01
Let F(z) be a vector-valued function, F: C → C^N, which is analytic at z = 0 and meromorphic in a neighborhood of z = 0, and let its Maclaurin series be given. In this work we developed vector-valued rational approximation procedures for F(z) by applying vector extrapolation methods to the sequence of partial sums of its Maclaurin series. We analyzed some of the algebraic and analytic properties of the rational approximations thus obtained, and showed that they were akin to Pade approximations. In particular, we proved a Koenig-type theorem concerning their poles and a de Montessus-type theorem concerning their uniform convergence. We showed how optimal approximations to multiple poles and to Laurent expansions about these poles can be constructed. Extensions of the procedures above and the accompanying theoretical results to functions defined in arbitrary linear spaces were also considered. One of the most interesting and immediate applications of the results of this work is to the matrix eigenvalue problem. In a forthcoming paper we exploit the developments of the present work to devise bona fide generalizations of the classical power method that are especially suitable for very large and sparse matrices. These generalizations can be used to approximate simultaneously several of the largest distinct eigenvalues and corresponding eigenvectors and invariant subspaces of arbitrary matrices, which may or may not be diagonalizable, and are very closely related to known Krylov subspace methods.
Improved Parker's method for topographic models using Chebyshev series and low rank approximation
NASA Astrophysics Data System (ADS)
Wu, Leyuan; Lin, Qiang
2017-03-01
We present a new method to improve the convergence of the well-known Parker's formula for the modelling of gravity and magnetic fields caused by sources with complex topography. In the original Parker's formula, two approximations are made, which may cause considerable numerical errors and instabilities: 1) the approximation of the forward and inverse continuous Fourier transforms using their discrete counterparts, the forward and inverse Fast Fourier Transform (FFT) algorithms; 2) the approximation of the exponential function with its Taylor series expansion. In a previous paper of ours, we have made an effort addressing the first problem by applying the Gauss-FFT method instead of the standard FFT algorithm. The new Gauss-FFT based method shows improved numerical efficiency and agrees well with space-domain analytical or hybrid analytical-numerical algorithms. However, even under the simplifying assumption of a calculation surface being a level plane above all topographic sources, the method may still fail or become inaccurate under certain circumstances. When the peaks of the topography approach the observation surface too closely, the number of terms of the Taylor series expansion needed to reach a suitable precision becomes large and slows the calculation. We show in this paper that this problem is caused by the second approximation mentioned above, and it is due to the convergence property of the Taylor series expansion that the algorithm becomes inaccurate for certain topographic models with large amplitudes. Based on this observation, we present a modified Parker's method using low rank approximation (LRA) of the exponential function in virtue of the Chebfun software system. In this way, the optimal rate of convergence is achieved. Some pre-computation is needed but will not cause significant computational overheads. Synthetic and real model tests show that the method now works well for almost any practical topographic model, provided that the assumption
An approximate method for solution to variable moment of inertia problems
NASA Technical Reports Server (NTRS)
Beans, E. W.
1981-01-01
An approximation method is presented for reducing a nonlinear differential equation (for the 'weather vaning' motion of a wind turbine) to an equivalent constant-moment-of-inertia problem. The integrated average of the moment of inertia is determined. The cycle time of the approximate system was found to equal the equivalent cycle time provided the rotational speed is more than 4 times the system's minimum natural frequency.
NASA Astrophysics Data System (ADS)
Shimelevich, M. I.; Obornev, E. A.; Obornev, I. E.; Rodionov, E. A.
2017-07-01
The iterative approximation neural network method for solving conditionally well-posed nonlinear inverse problems of geophysics is presented. The method is based on the neural network approximation of the inverse operator. The inverse problem is solved in the class of grid (block) models of the medium on a regularized parameterization grid. The construction principle of this grid relies on using the calculated values of the continuity modulus of the inverse operator and its modifications determining the degree of ambiguity of the solutions. The method provides approximate solutions of inverse problems with the maximal degree of detail given the specified degree of ambiguity with the total number of the sought parameters of the medium of order n × 10³. The a priori and a posteriori estimates of the degree of ambiguity of the approximated solutions are calculated. The work of the method is illustrated by the example of the three-dimensional (3D) inversion of the synthesized 2D areal geoelectrical (audio magnetotelluric sounding, AMTS) data corresponding to the schematic model of a kimberlite pipe.
Comparing methods for the approximation of rainfall fields in environmental applications
NASA Astrophysics Data System (ADS)
Patané, G.; Cerri, A.; Skytt, V.; Pittaluga, S.; Biasotti, S.; Sobrero, D.; Dokken, T.; Spagnuolo, M.
2017-05-01
Digital environmental data are becoming commonplace and the amount of information they provide is complex to process, due to the size, variety, and dynamic nature of the data captured by sensing devices. The paper discusses an evaluation framework for comparing methods that approximate observed rain data under real conditions of sparsity of the observations. The novelty of this experimental study lies in the geographical area and the heterogeneity of the data used for evaluation, aspects which challenge all approximation methods. The Liguria region, located in the north-west of Italy, is a complex area because of its orography and its closeness to the sea, which cause complex hydro-meteorological events. The observed rain data are highly heterogeneous: two data sets come from measured rain gathered by two different rain gauge networks, with different characteristics and spatial distributions over the Liguria region; the third data set comes from weather radar, with a more regular coverage of the same region but a different veracity. Finally, another novelty of the paper is its application-oriented perspective on the comparison. The approximation models the rain field, whose maxima and their evolution are essential for effective monitoring of meteorological events. Therefore, we adapt a storm tracking technique to analyze the displacement of the maxima computed by the different methods, used as a dissimilarity measure among the approximation methods analyzed.
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Adams, William M., Jr.
1988-01-01
The approximation of unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft is discussed. Two methods of formulating these approximations are extended to include the same flexibility in constraining the approximations and the same methodology in optimizing nonlinear parameters as another currently used extended least-squares method. Optimal selection of nonlinear parameters is made in each of the three methods by use of the same nonlinear, nongradient optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free linear parameters are determined using the least-squares matrix techniques of a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented that show comparative evaluations from application of each of the extended methods to a numerical example.
An Iterative Pixel-Level Image Matching Method for Mars Mapping Using Approximate Orthophotos
NASA Astrophysics Data System (ADS)
Geng, X.; Xu, Q.; Lan, C. Z.; Xing, S.
2017-07-01
Mars mapping is essential to the scientific research of the red planet. The special terrain characteristics of the Martian surface can be used to develop a targeted image matching method. In this paper, in order to generate high-resolution Mars DEMs, a pixel-level image matching method for Mars orbital pushbroom images is proposed. The main strategies of our method include: (1) image matching on approximate orthophotos; (2) estimating approximate values of conjugate points by using ground point coordinates of the orthophotos; (3) hierarchical image matching; (4) generating a DEM and approximate orthophotos at each pyramid level; (5) fast transformation from ground points to image points for pushbroom images. The DEM derived at each pyramid level is used as reference data for the generation of approximate orthophotos at the next pyramid level. With iterative processing, the generated DEM becomes more and more accurate, and a very small search window is precise enough for the determination of conjugate points. The images acquired by the High Resolution Stereo Camera (HRSC) on the European Mars Express were used to verify our method's feasibility. Experimental results demonstrate that accurate DEM data can be derived with an acceptable time cost by pixel-level image matching.
ERIC Educational Resources Information Center
Hummel, Thomas J.; Johnston, Charles B.
This research investigates stochastic approximation procedures of the Robbins-Monro type. Following a brief introduction to sequential experimentation, attention is focused on formal methods for selecting successive values of a single independent variable. Empirical results obtained through computer simulation are used to compare several formal…
An analytical technique for approximating unsteady aerodynamics in the time domain
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1980-01-01
An analytical technique is presented for approximating unsteady aerodynamic forces in the time domain. The order of elements of a matrix Pade approximation was postulated, and the resulting polynomial coefficients were determined through a combination of least-squares estimates for the numerator coefficients and a constrained gradient search for the denominator coefficients which ensures stable approximating functions. The number of differential equations required to represent the aerodynamic forces to a given accuracy tends to be smaller than that employed in certain existing techniques where the denominator coefficients are chosen a priori. Results are shown for an aeroelastic, cantilevered, semispan wing which indicate that a good fit to the aerodynamic forces for oscillatory motion can be achieved with a matrix Pade approximation having fourth-order numerator and second-order denominator polynomials.
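A minimal scalar analogue of such a rational (Pade-type) fit can be sketched by replacing the nonlinear denominator search with Levi's linearization, so the whole fit reduces to a single linear least-squares solve. The target function and polynomial degrees below are illustrative, not the aeroelastic data of the paper.

```python
# Sketch: fit f(x) ~= N(x)/D(x) with the constant denominator term fixed
# to 1, by minimizing || D(x_i) f_i - N(x_i) || (Levi's linearization).
import numpy as np

def rational_fit(x, f, num_deg, den_deg):
    cols = [x ** m for m in range(num_deg + 1)]           # numerator columns
    cols += [-f * x ** j for j in range(1, den_deg + 1)]  # denominator columns
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, f, rcond=None)
    num = coef[:num_deg + 1]                              # ascending order
    den = np.concatenate([[1.0], coef[num_deg + 1:]])
    return num, den

def rational_eval(num, den, x):
    return np.polyval(num[::-1], x) / np.polyval(den[::-1], x)

x = np.linspace(0.0, 1.0, 50)
f = np.exp(x)
num, den = rational_fit(x, f, num_deg=2, den_deg=2)
err = np.max(np.abs(rational_eval(num, den, x) - f))
```

A full implementation in the spirit of the abstract would instead iterate on the denominator with a stability-constrained search; the linearized solve above is the usual cheap starting point.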
Approximate Solution Methods for Spectral Radiative Transfer in High Refractive Index Layers
NASA Technical Reports Server (NTRS)
Siegel, R.; Spuckler, C. M.
1994-01-01
Some ceramic materials for high temperature applications are partially transparent for radiative transfer. The refractive indices of these materials can be substantially greater than one which influences internal radiative emission and reflections. Heat transfer behavior of single and laminated layers has been obtained in the literature by numerical solutions of the radiative transfer equations coupled with heat conduction and heating at the boundaries by convection and radiation. Two-flux and diffusion methods are investigated here to obtain approximate solutions using a simpler formulation than required for exact numerical solutions. Isotropic scattering is included. The two-flux method for a single layer yields excellent results for gray and two band spectral calculations. The diffusion method yields a good approximation for spectral behavior in laminated multiple layers if the overall optical thickness is larger than about ten. A hybrid spectral model is developed using the two-flux method in the optically thin bands, and radiative diffusion in bands that are optically thick.
A numerical method for approximating antenna surfaces defined by discrete surface points
NASA Technical Reports Server (NTRS)
Lee, R. Q.; Acosta, R.
1985-01-01
A simple numerical method for the quadratic approximation of a discretely defined reflector surface is described. The numerical method was applied to interpolate the surface normal of a parabolic reflector surface from a grid of the nine surface points closest to the point of incidence. After computing the surface normals, geometrical optics and the aperture integration method using the discrete Fast Fourier Transform (FFT) were applied to compute the radiation patterns for symmetric and offset antenna configurations. The computed patterns are compared to those of the analytic case and to the patterns generated from another numerical technique using the spline function approximation. In the paper, examples of computations are given. The accuracy of the numerical method is discussed.
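The nine-point quadratic fit can be sketched as follows: fit z = ax^2 + bxy + cy^2 + dx + ey + f to a 3x3 grid of surface points by least squares, then take the normal from the gradient. The paraboloid and grid values below are illustrative.

```python
# Sketch: quadratic surface fit through nine discrete surface points,
# followed by evaluation of the unit surface normal (-dz/dx, -dz/dy, 1).
import numpy as np

def quadratic_normal(pts, x0, y0):
    """pts: (9, 3) array of (x, y, z) samples; unit normal at (x0, y0)."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    A = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    a, b, c, d, e, f = np.linalg.lstsq(A, z, rcond=None)[0]
    zx = 2*a*x0 + b*y0 + d          # dz/dx at the point of incidence
    zy = b*x0 + 2*c*y0 + e          # dz/dy
    n = np.array([-zx, -zy, 1.0])
    return n / np.linalg.norm(n)

# Paraboloid z = (x^2 + y^2) / 4 sampled on a 3x3 grid around (1, 0).
grid = np.array([(x, y, (x*x + y*y) / 4.0)
                 for x in (0.9, 1.0, 1.1) for y in (-0.1, 0.0, 0.1)])
normal = quadratic_normal(grid, 1.0, 0.0)
```

For a true paraboloid the quadratic fit is exact, so the interpolated normal matches the analytic one; for a general discretely defined reflector it is a local approximation.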
NASA Astrophysics Data System (ADS)
Lehikoinen, A.; Finsterle, S.; Voutilainen, A.; Kowalsky, M.; Kaipio, J.
2006-12-01
We present a new methodology for imaging the evolution of electrically conductive fluids in porous media. The state estimation problem is formulated in terms of an evolution-observation model, and the estimates are obtained via Bayesian filtering. The approach is based on an extended Kalman filter algorithm and includes an approximation error method to model uncertainties in the evolution and observation models. The example we consider involves the imaging of time-varying distributions of water saturation in porous media using time-lapse electrical resistance tomography (ERT). The evolution model we employ is a simplified model for simulating flow through partially saturated porous media. The complete electrode model (with Archie's law relating saturations to electrical conductivity) is used as the observation model. We propose to account for approximation errors in the evolution and observation models by constructing a statistical model of the differences between the "accurate" and "approximate" representations of fluid flow, and by including this information in the calculation of the posterior probability density of the estimated system state. The proposed method provides improved estimates of water saturation distribution relative to traditional reconstruction schemes that rely on conventional stabilization methods (e.g., using a smoothness prior) and relative to the extended Kalman filter without the approximation error method incorporated. Finally, the approximation error method allows for the use of a simplified and computationally efficient evolution model in the state estimation scheme. This work was supported, in part, by the Finnish Funding Agency for Technology and Innovation (TEKES), projects 40285/05 and 40347/05, and by the U.S. Dept. of Energy under Contract No. DE-AC02- 05CH11231.
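The approximation error method above can be sketched in scalar form: the statistics (mean and variance) of the difference between the "accurate" and "approximate" models are folded into the Kalman innovation. All model constants below (F, Q, R, the +0.2 sensor offset) are invented for illustration and are not the ERT/flow models of the abstract.

```python
# Scalar Kalman filter step with approximation-error statistics:
# the error mean shifts the innovation, the error variance inflates it.
import numpy as np

def kf_step(x, P, y, F, u, Q, H, R, eps_mean=0.0, eps_var=0.0):
    x_pred = F * x + u                   # predict with the evolution model
    P_pred = F * P * F + Q
    S = H * P_pred * H + R + eps_var     # innovation variance, inflated
    K = P_pred * H / S
    innov = y - (H * x_pred + eps_mean)  # innovation, shifted by error mean
    return x_pred + K * innov, (1.0 - K * H) * P_pred

rng = np.random.default_rng(0)
truth, x, P = 0.0, 0.0, 1.0
for _ in range(300):
    truth = 0.95 * truth + 0.05          # "accurate" state evolution
    # the accurate sensor carries a +0.2 systematic offset that the
    # approximate observation model omits; eps_mean absorbs it
    y = truth + 0.2 + 0.1 * rng.standard_normal()
    x, P = kf_step(x, P, y, F=0.95, u=0.05, Q=1e-4, H=1.0, R=0.01,
                   eps_mean=0.2, eps_var=0.0)
```

Without the eps_mean correction the estimate would be biased by the unmodeled offset; with it, the simplified observation model still tracks the true state.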
NASA Astrophysics Data System (ADS)
Alam Khan, Najeeb; Razzaq, Oyoon Abdul
2016-03-01
In the present work a wavelets approximation method is employed to solve fuzzy boundary value differential equations (FBVDEs). Essentially, a truncated Legendre wavelets series together with the Legendre wavelets operational matrix of derivative is utilized to convert an FBVDE into a simple computational problem by reducing it to a system of fuzzy algebraic linear equations. The capability of the scheme is investigated on a second-order FBVDE considered under generalized H-differentiability. Solutions are represented graphically, showing the competency and accuracy of this method.
Approximate method of free energy calculation for spin system with arbitrary connection matrix
NASA Astrophysics Data System (ADS)
Kryzhanovsky, Boris; Litinskii, Leonid
2015-01-01
The proposed method of free energy calculation is based on the approximation of the energy distribution in the microcanonical ensemble by the Gaussian distribution. We expect our approach to be effective for systems with long-range interaction, where a large coordination number q ensures the correctness of the central limit theorem application. However, the method also provides good results for systems with short-range interaction, where q is not so large.
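The Gaussian approximation can be checked directly on a system small enough for exact enumeration. In the sketch below (random couplings and temperature are illustrative), the distribution of energies over all 2^N spin states is replaced by a Gaussian with the exact mean mu and variance sigma^2, giving Z ~ 2^N exp(-beta*mu + beta^2*sigma^2/2).

```python
# Compare the exact free energy of a tiny spin system with the
# Gaussian-approximation free energy at high temperature.
import itertools
import numpy as np

rng = np.random.default_rng(1)
N = 10
J = rng.standard_normal((N, N)) / N      # random connection matrix
J = (J + J.T) / 2.0
np.fill_diagonal(J, 0.0)

states = np.array(list(itertools.product([-1, 1], repeat=N)), dtype=float)
energies = -0.5 * np.einsum('si,ij,sj->s', states, J, states)

beta = 0.2                               # high temperature
mu, var = energies.mean(), energies.var()
F_exact = -np.log(np.sum(np.exp(-beta * energies))) / beta
F_gauss = -(N * np.log(2.0) - beta * mu + 0.5 * beta**2 * var) / beta
```

At high temperature the two agree closely; the approximation degrades as beta grows and higher cumulants of the energy distribution matter.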
A method for solving stochastic equations by reduced order models and local approximations
Grigoriu, M.
2012-08-01
A method is proposed for solving equations with random entries, referred to as stochastic equations (SEs). The method is based on two recent developments. The first approximates the response surface giving the solution of a stochastic equation as a function of its random parameters by a finite set of hyperplanes tangent to it at expansion points selected by geometrical arguments. The second approximates the vector of random parameters in the definition of a stochastic equation by a simple random vector, referred to as stochastic reduced order model (SROM), and uses it to construct a SROM for the solution of this equation. The proposed method is a direct extension of these two methods. It uses SROMs to select expansion points, rather than selecting these points by geometrical considerations, and represents the solution by linear and/or higher order local approximations. The implementation and the performance of the method are illustrated by numerical examples involving random eigenvalue problems and stochastic algebraic/differential equations. The method is conceptually simple, non-intrusive, efficient relative to classical Monte Carlo simulation, accurate, and guaranteed to converge to the exact solution.
Tejero, E. M.; Gatling, G.
2009-03-15
A method for approximating arbitrary axial magnetic field profiles for a given solenoidal electromagnet coil array is described. The method casts the individual contributions from each coil as a truncated orthonormal basis for the space within the array. This truncated basis allows for the linear decomposition of an arbitrary profile function, which returns the appropriate currents for each coil to best reproduce the desired profile. We present the mathematical details of the method along with a detailed example of its use. The results from the method are used in a simulation and compared with magnetic field measurements.
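The decomposition step can be sketched with an idealized on-axis loop-field basis: each coil's unit-current field is one basis function, and the currents that best reproduce a target profile come from a linear least-squares solve. Coil radius, positions, and the target gradient profile below are hypothetical.

```python
# Sketch: decompose a target axial field profile onto per-coil basis fields.
import numpy as np

def loop_field(z, z0, R=0.1):
    """On-axis field of a unit-current loop at z0, radius R (mu0/2 omitted)."""
    return R**2 / (R**2 + (z - z0)**2) ** 1.5

z = np.linspace(-0.35, 0.35, 141)              # axial sample points
coil_pos = np.linspace(-0.4, 0.4, 9)           # 9-coil array
B = np.column_stack([loop_field(z, z0) for z0 in coil_pos])  # basis matrix

target = 1.0 + 0.5 * z                         # desired linear-gradient profile
currents, *_ = np.linalg.lstsq(B, target, rcond=None)
residual = np.max(np.abs(B @ currents - target))
```

The least-squares solve plays the role of the linear decomposition in the abstract; with overlapping coil fields, smooth profiles inside the array are reproduced with small residual.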
NASA Technical Reports Server (NTRS)
Grantz, A. C.; Dejarnette, F. R.; Thompson, R. A.
1989-01-01
The approximate axisymmetric method presented for accurately calculating the surface and flowfield properties of fully viscous hypersonic flow over blunt-nosed bodies incorporates the turbulence model of Cebeci-Smith (1970) and the equilibrium air tables of Hansen (1959). The method is faster than the parabolized Navier-Stokes or viscous shock layer solvers that it could replace for preliminary design determinations. Surface heat transfer and pressure predictions for the present method are comparable with the more accurate viscous shock layer method as well as flight test and wind tunnel data. A starting solution is not required.
NASA Astrophysics Data System (ADS)
Hosen, Md. Alal; Chowdhury, M. S. H.; Ali, Mohammad Yeakub; Ismail, Ahmad Faris
In the present paper, a novel analytical approximation technique based on the energy balance method (EBM) is proposed to obtain approximate periodic solutions for generalized highly nonlinear oscillators. The expressions of the natural frequency-amplitude relationship are obtained in a novel analytical way. The accuracy of the proposed method is investigated on three benchmark oscillatory problems, namely, the simple relativistic oscillator, the stretched elastic wire oscillator (with a mass attached to its midpoint) and the Duffing-relativistic oscillator. For an initial oscillation amplitude A0 = 100, the maximal relative errors of natural frequency found in the three oscillators are 2.1637%, 0.0001% and 1.201%, respectively, which are much lower than the errors found using existing methods. Remarkably, the approximate natural frequencies remain in excellent agreement with the exact ones over the whole range of large oscillation amplitudes. The very simple solution procedure and the high accuracy found in the three benchmark problems reveal the novelty, reliability and wider applicability of the proposed analytical approximation technique.
Bishop, R. F.; Li, P. H. Y.
2011-04-15
An approximation hierarchy, called the lattice-path-based subsystem (LPSUBm) approximation scheme, is described for the coupled-cluster method (CCM). It is applicable to systems defined on a regular spatial lattice. We then apply it to two well-studied prototypical spin-1/2 Heisenberg antiferromagnetic spin-lattice models, namely, the XXZ and the XY models on the square lattice in two dimensions. Results are obtained in each case for the ground-state energy, the ground-state sublattice magnetization, and the quantum critical point. They are all in good agreement with those from such alternative methods as spin-wave theory, series expansions, quantum Monte Carlo methods, and the CCM using the alternative lattice-animal-based subsystem (LSUBm) and the distance-based subsystem (DSUBm) schemes. Each of the three CCM schemes (LSUBm, DSUBm, and LPSUBm) for use with systems defined on a regular spatial lattice is shown to have its own advantages in particular applications.
An in vitro comparison of detection methods for approximal carious lesions in primary molars.
Chawla, N; Messer, L B; Adams, G G; Manton, D J
2012-01-01
This study aimed to compare and contrast six methods in vitro to determine the most accurate for detecting approximal carious lesions in primary molars. Extracted primary molars (n = 140) were stored in 0.02% chlorhexidine solution and mounted in light-cured resin in pairs. The six carious lesion detection methods used by the three examiners to assess approximal carious lesions were visual inspection, digital radiography, two transillumination lights (SDI and NSK), and two laser fluorescence instruments (CDD and DDP). Five damaged teeth were discarded. The teeth (n = 135) were sectioned, serially ground, and examined under light microscopy using Downer's histological (HST) criteria as the gold standard. Intra- and inter-examiner reliability, agreement with HST, specificity, sensitivity, receiver operating characteristic (ROC) curves, and areas under the curve were calculated. This study found visual inspection to be the most accurate method when validated by histology. Transillumination with the NSK light had the highest specificity, and digital radiography had the highest sensitivity for detecting enamel and/or dentinal carious lesions. Combining specificity and sensitivity into the area under ROC curves, enamel plus dentinal lesions were detected most accurately by visual inspection followed by digital radiography; dentinal lesions were detected most accurately by digital radiography followed by visual inspection. None of the four newly developed methods can be recommended as suitable replacements for visual inspection and digital radiography in detecting carious lesions on approximal surfaces of primary molars, and further developmental work is needed.
Tuleau-Malot, Christine; Rouis, Amel; Grammont, Franck; Reynaud-Bouret, Patricia
2014-07-01
The unitary events (UE) method is one of the most popular and efficient methods used over the past decade to detect patterns of coincident joint spike activity among simultaneously recorded neurons. The detection of coincidences is usually based on the binned coincidence count (Grün, 1996), which is known to be subject to loss in synchrony detection (Grün, Diesmann, Grammont, Riehle, & Aertsen, 1999). This defect has been corrected by the multiple shift coincidence count (Grün et al., 1999). The statistical properties of this count have not been further investigated until this work, the formula being more difficult to deal with than the original binned count. First, we propose a new notion of coincidence count, the delayed coincidence count, which is equal to the multiple shift coincidence count when discretized point processes are involved as models for the spike trains. Moreover, it generalizes this notion to nondiscretized point processes, allowing us to propose a new Gaussian approximation of the count. Since unknown parameters are involved in the approximation, we perform a plug-in step, where unknown parameters are replaced by estimated ones, leading to a modification of the approximating distribution. Finally the method takes the multiplicity of the tests into account via a Benjamini and Hochberg approach (Benjamini & Hochberg, 1995), to guarantee a prescribed control of the false discovery rate. We compare our new method, MTGAUE (multiple tests based on a Gaussian approximation of the unitary events), and the UE method proposed in Grün et al. (1999) over various simulations, showing that MTGAUE extends the validity of the previous method. In particular, MTGAUE is able to detect both profusion and lack of coincidences with respect to the independence case and is robust to changes in the underlying model. Furthermore, MTGAUE is applied to real data.
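The Benjamini-Hochberg step used by MTGAUE can be sketched independently of the coincidence statistics: given one p-value per tested pair, reject the largest set of ordered p-values with p_(i) <= i*alpha/m, which controls the false discovery rate at level alpha. The p-values below are invented for illustration.

```python
# Sketch of the Benjamini-Hochberg (1995) step-up procedure.
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of rejected hypotheses at FDR level alpha."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest i with p_(i) <= i*alpha/m
        reject[order[:k + 1]] = True       # reject all smaller p-values too
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]
mask = benjamini_hochberg(pvals, alpha=0.05)
```

Note the step-up character: every p-value below the largest qualifying threshold is rejected, even if it individually exceeds its own threshold.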
Evaluation of approximate methods for the prediction of noise shielding by airframe components
NASA Technical Reports Server (NTRS)
Ahtye, W. F.; Mcculley, G.
1980-01-01
An evaluation of some approximate methods for the prediction of shielding of monochromatic sound and broadband noise by aircraft components is reported. Anechoic-chamber measurements of the shielding of a point source by various simple geometric shapes were made and the measured values compared with those calculated by the superposition of asymptotic closed-form solutions for the shielding by a semi-infinite plane barrier. The shields used in the measurements consisted of rectangular plates, a circular cylinder, and a rectangular plate attached to the cylinder to simulate a wing-body combination. The normalized frequency, defined as a product of the acoustic wave number and either the plate width or cylinder diameter, ranged from 4.6 to 114. Microphone traverses in front of the rectangular plates and cylinders generally showed a series of diffraction bands that matched those predicted by the approximate methods, except for differences in the magnitudes of the attenuation minima which can be attributed to experimental inaccuracies. The shielding of wing-body combinations was predicted by modifications of the approximations used for rectangular and cylindrical shielding. Although the approximations failed to predict diffraction patterns in certain regions, they did predict the average level of wing-body shielding with an average deviation of less than 3 dB.
NASA Astrophysics Data System (ADS)
Kolesnikov, V. I.; Yakovlev, V. B.; Bardushkin, V. V.; Lavrov, I. V.; Sychev, A. P.; Yakovleva, E. N.
2013-09-01
Various methods for evaluation of the effective permittivity of heterogeneous media, namely, the effective medium approximation (Bruggeman's approximation), the Maxwell-Garnett approximation, Wiener's bounds, and the Hashin-Shtrikman variational bounds (for effective static characteristics) are combined on the basis of a generalized singular approximation.
Wei, Yunxia; Chen, Yanping; Shi, Xiulian; Zhang, Yuanyuan
2016-01-01
We present in this paper the convergence properties of Jacobi spectral collocation method when used to approximate the solution of multidimensional nonlinear Volterra integral equation. The solution is sufficiently smooth while the source function and the kernel function are smooth. We choose the Jacobi-Gauss points associated with the multidimensional Jacobi weight function [Formula: see text] (d denotes the space dimensions) as the collocation points. The error analysis in [Formula: see text]-norm and [Formula: see text]-norm theoretically justifies the exponential convergence of spectral collocation method in multidimensional space. We give two numerical examples in order to illustrate the validity of the proposed Jacobi spectral collocation method.
An approximate method for calculating three-dimensional inviscid hypersonic flow fields
NASA Technical Reports Server (NTRS)
Riley, Christopher J.; Dejarnette, Fred R.
1990-01-01
An approximate solution technique was developed for 3-D inviscid, hypersonic flows. The method employs Maslen's explicit pressure equation in addition to the assumption of approximate stream surfaces in the shock layer. This approximation represents a simplification to Maslen's asymmetric method. The present method presents a tractable procedure for computing the inviscid flow over 3-D surfaces at angle of attack. The solution procedure involves iteratively changing the shock shape in the subsonic-transonic region until the correct body shape is obtained. Beyond this region, the shock surface is determined using a marching procedure. Results are presented for a spherically blunted cone, paraboloid, and elliptic cone at angle of attack. The calculated surface pressures are compared with experimental data and finite difference solutions of the Euler equations. Shock shapes and profiles of pressure are also examined. Comparisons indicate the method adequately predicts shock layer properties on blunt bodies in hypersonic flow. The speed of the calculations makes the procedure attractive for engineering design applications.
Update-based evolution control: A new fitness approximation method for evolutionary algorithms
NASA Astrophysics Data System (ADS)
Ma, Haiping; Fei, Minrui; Simon, Dan; Mo, Hongwei
2015-09-01
Evolutionary algorithms are robust optimization methods that have been used in many engineering applications. However, real-world fitness evaluations can be computationally expensive, so it may be necessary to estimate the fitness with an approximate model. This article reviews design and analysis of computer experiments (DACE) as an approximation method that combines a global polynomial with a local Gaussian model to estimate continuous fitness functions. The article incorporates DACE in various evolutionary algorithms, to test unconstrained and constrained benchmarks, both with and without fitness function evaluation noise. The article also introduces a new evolution control strategy called update-based control that estimates the fitness of certain individuals of each generation based on the exact fitness values of other individuals during that same generation. The results show that update-based evolution control outperforms other strategies on noise-free, noisy, constrained and unconstrained benchmarks. The results also show that update-based evolution control can compensate for fitness evaluation noise.
Singh, P.P.; Gonis, A. )
1993-03-15
We describe the generalized perturbation method in the atomic-sphere approximation (ASA) for calculating the effective cluster interactions. Based on our development of Korringa-Kohn-Rostoker coherent-potential approximation in the ASA [Singh et al., Phys. Rev. B 44, 8578 (1991)], the present approach is the next step towards developing a first-principles method that can be easily applied to describe substitutionally disordered alloys based on simple lattice structures as well as complex lattice structures with low symmetry. To test the accuracy of the ASA results, we have calculated the effective pair interactions (EPI) up to fourth-nearest neighbors for the substitutionally disordered Pd0.5V0.5 and Pd0.75Rh0.25 alloys. Our calculated EPIs are in good agreement with the respective muffin-tin results.
Gledhill, Jonathan D; Peach, Michael J G; Tozer, David J
2013-10-08
A range of tuning methods, for enforcing approximate energy linearity through a system-by-system optimization of a range-separated hybrid functional, are assessed. For a series of atoms, the accuracy of the frontier orbital energies, ionization potentials, electron affinities, and orbital energy gaps is quantified, and particular attention is paid to the extent to which approximate energy linearity is actually achieved. The tuning methods can yield significantly improved orbital energies and orbital energy gaps, compared to those from conventional functionals. For systems with integer M electrons, optimal results are obtained using a tuning norm based on the highest occupied orbital energy of the M and M + 1 electron systems, with deviations of just 0.1-0.2 eV in these quantities, compared to exact values. However, detailed examination for the carbon atom illustrates a subtle cancellation between errors arising from nonlinearity and errors in the computed ionization potentials and electron affinities used in the tuning.
A fourth-order Runge-Kutta method based on BDF-type Chebyshev approximations
NASA Astrophysics Data System (ADS)
Ramos, Higinio; Vigo-Aguiar, Jesus
2007-07-01
In this paper we consider a new fourth-order method of BDF-type for solving stiff initial-value problems, based on the interval approximation of the true solution by truncated Chebyshev series. It is shown that the method may be formulated in an equivalent way as a Runge-Kutta method having stage order four. The method thus obtained has good stability properties, including an unbounded stability domain and a large α-value for A(α)-stability. A strategy for changing the step size, based on a pair of methods in a way similar to the embedded pairs of Runge-Kutta schemes, is presented. The numerical examples reveal that this method is very promising for solving stiff initial-value problems.
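The step-size strategy based on a pair of methods can be sketched generically: advance with the more accurate member of the pair, estimate the local error from the difference, and rescale the step. Below, classical RK4 with step doubling stands in for the paper's BDF/Chebyshev pair; the test problem and tolerance are illustrative.

```python
# Sketch of error-controlled step-size selection using RK4 step doubling
# (Richardson factor 2^4 - 1 = 15 for a fourth-order method).
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def adaptive_rk4(f, t0, y0, t_end, tol=1e-8, h=0.1):
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)
        big = rk4_step(f, t, y, h)
        half = rk4_step(f, t + h/2, rk4_step(f, t, y, h/2), h/2)
        err = abs(half - big) / 15.0 + 1e-300   # local error estimate
        if err <= tol:
            t, y = t + h, half                  # accept the more accurate value
        # rescale: exponent 1/(order+1), with safety factor and clamps
        h *= min(5.0, max(0.2, 0.9 * (tol / err) ** 0.2))
    return y

y_end = adaptive_rk4(lambda t, y: -y, 0.0, 1.0, 2.0)   # y' = -y, y(2) = e^-2
```

An embedded pair as in the paper obtains the second solution without the extra half-steps, but the accept/rescale logic is the same.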
Approximation of Integrals via Monte Carlo Methods, with an Application to Calculating Radar Detection Probabilities
Weinberg, Graham V.; Ross
2005-03-01
Approximation of acoustic waves by explicit Newmark's schemes and spectral element methods
NASA Astrophysics Data System (ADS)
Zampieri, Elena; Pavarino, Luca F.
2006-01-01
A numerical approximation of the acoustic wave equation is presented. The spatial discretization is based on conforming spectral elements, whereas we use finite difference Newmark's explicit integration schemes for the temporal discretization. A rigorous stability analysis is developed for the discretized problem providing an upper bound for the time step Δt. We present several numerical results concerning stability and convergence properties of the proposed numerical methods.
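The explicit Newmark scheme (beta = 0, gamma = 1/2) can be sketched on the scalar test problem u'' = -omega^2 u, where it reduces to central differences and is stable for omega*Δt <= 2; the frequency and step size below are illustrative, not the spectral-element discretization of the paper.

```python
# Explicit Newmark (beta = 0, gamma = 1/2) on u'' = -omega^2 u.
import math

def newmark_explicit(omega, u0, v0, dt, n_steps):
    u, v = u0, v0
    a = -omega**2 * u
    for _ in range(n_steps):
        u = u + dt * v + 0.5 * dt**2 * a    # displacement update (beta = 0)
        a_new = -omega**2 * u
        v = v + 0.5 * dt * (a + a_new)      # velocity update (gamma = 1/2)
        a = a_new
    return u, v

omega = 2.0 * math.pi                        # period T = 1
u_end, v_end = newmark_explicit(omega, 1.0, 0.0, dt=1e-4, n_steps=10000)
```

Integrating over exactly one period returns the initial state up to a small phase error of order (omega*Δt)^2, consistent with the second-order accuracy of the scheme.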
A method for the accurate and smooth approximation of standard thermodynamic functions
NASA Astrophysics Data System (ADS)
Coufal, O.
2013-01-01
A method is proposed for the calculation of approximations of standard thermodynamic functions. The method is consistent with the physical properties of standard thermodynamic functions. This means that the approximation functions are, in contrast to the hitherto used approximations, continuous and smooth in every temperature interval in which no phase transformations take place. The calculation algorithm was implemented in the SmoothSTF program in the C++ language, which is part of this paper.
Program summary
Program title: SmoothSTF
Catalogue identifier: AENH_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENH_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 3807
No. of bytes in distributed program, including test data, etc.: 131965
Distribution format: tar.gz
Programming language: C++
Computer: Any computer with the gcc version 4.3.2 compiler
Operating system: Debian GNU Linux 6.0. The program can be run in operating systems in which the gcc compiler can be installed, see http://gcc.gnu.org/install/specific.html
RAM: 256 MB is sufficient for a table of standard thermodynamic functions with 500 lines
Classification: 4.9
Nature of problem: Standard thermodynamic functions (STF) of individual substances are given by thermal capacity at constant pressure, entropy and enthalpy. STF are continuous and smooth in every temperature interval in which no phase transformations take place. The temperature dependence of STF as expressed by the table of its values is for further application approximated by temperature functions. In the paper, a method is proposed for calculating approximation functions which, in contrast to the hitherto used approximations, are continuous and smooth in every temperature interval.
Solution method: The approximation functions are
In vitro performance of methods of approximal caries detection in primary molars.
Braga, Mariana Minatel; Morais, Caroline Carvalho; Nakama, Renata Cristina Satiko; Leamari, Victor Moreira; Siqueira, Walter Luiz; Mendes, Fausto Medeiros
2009-10-01
The aim was to compare the performance of different methods in detecting approximal caries lesions in primary molars ex vivo. One hundred thirty-one approximal surfaces were examined by 2 observers with visual inspection (VI) using the International Caries Detection and Assessment System, radiographic interpretation, and clinically using the Diagnodent pen (LFpen). To achieve a reference standard, surfaces were directly examined for the presence of white spots or cavitations, and lesion depth was determined after sectioning. The area under the receiver operating characteristic curve (A(z)), sensitivity, specificity, and accuracy were calculated, as well as the interexaminer reproducibility. Using the cavitation threshold, all methods presented similar sensitivities. Higher A(z) values were achieved with VI at the white spot threshold, and VI and LFpen had higher A(z) values at the cavitation threshold. VI presented higher accuracy and A(z) than the radiographic and LFpen methods at both enamel and dentin depth thresholds. Higher reliability values were achieved with VI. VI performs better, but both the radiographic and LFpen methods also show good performance in detecting more advanced approximal caries lesions.
Low rank approximation methods for MR fingerprinting with large scale dictionaries.
Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra
2017-08-13
This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000 times for the MRF-fast imaging with steady-state precession sequence and more than 15 times for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space.
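The randomized SVD step described above can be sketched generically: project the matrix onto a small random subspace, orthonormalize, and recover an approximate truncated SVD from the resulting small matrix. The synthetic low-rank "dictionary" and dimensions below are illustrative, not actual MRF data.

```python
# Sketch of randomized SVD (Halko-Martinsson-Tropp style range sketch).
import numpy as np

def randomized_svd(D, rank, oversample=10, seed=None):
    rng = np.random.default_rng(seed)
    m, n = D.shape
    Omega = rng.standard_normal((n, rank + oversample))
    Q, _ = np.linalg.qr(D @ Omega)        # orthonormal basis for the range sketch
    B = Q.T @ D                           # small (rank+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank]

# Synthetic nearly-low-rank matrix: rank-40 structure plus tiny noise.
rng = np.random.default_rng(0)
D = (rng.standard_normal((500, 40)) @ rng.standard_normal((40, 300))
     + 1e-6 * rng.standard_normal((500, 300)))
U, s, Vt = randomized_svd(D, rank=40, seed=1)
err = np.linalg.norm(D - (U * s) @ Vt) / np.linalg.norm(D)
```

Only the sketch Q and the small matrix B ever need to be held alongside D, which is where the memory savings for large dictionaries come from.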
NASA Astrophysics Data System (ADS)
Krochik, G. M.
1980-02-01
Stimulated Raman scattering of a randomly modulated pump is investigated by the method of successive approximations. This involves expanding solutions in terms of small parameters, which are ratios of the correlation scales of random effects to other characteristic dynamic scales of the problem. Systems of closed equations are obtained for the moments of the amplitudes of the Stokes and pump waves and of the molecular vibrations. These describe the dynamics of the process allowing for changes in the pump intensity and statistics due to a three-wave interaction. By analyzing equations in higher-order approximations, it is possible to establish the conditions of validity of the first (Markov) and second approximations. In particular, it is found that these are valid for pump intensities JL both above and below the critical value Jcr near which the gain begins to increase rapidly and reproduction of the pump spectrum by the Stokes wave is initiated. Solutions are obtained for average intensities of the Stokes wave and molecular vibrations in the first approximation in a constant pump field. It is established that, for JL ≳ Jcr, the Stokes wave undergoes rapid nonsteady-state amplification which is associated with an increase in the amplitude of the molecular vibrations. The results of the calculations show good agreement with known experimental data.
Physically weighted approximations of unsteady aerodynamic forces using the minimum-state method
NASA Technical Reports Server (NTRS)
Karpel, Mordechay; Hoadley, Sherwood Tiffany
1991-01-01
The Minimum-State Method for rational approximation of unsteady aerodynamic force coefficient matrices, modified to allow physical weighting of the tabulated aerodynamic data, is presented. The approximation formula and the associated time-domain, state-space, open-loop equations of motion are given, and the numerical procedure for calculating the approximation matrices, with weighted data and with various equality constraints, is described. Two data weighting options are presented. The first weighting normalizes the aerodynamic data to a maximum unit value of each aerodynamic coefficient. The second weighting is one in which each tabulated coefficient, at each reduced frequency value, is weighted according to the effect of an incremental error of this coefficient on the aeroelastic characteristics of the system. This weighting yields a better fit of the more important terms, at the expense of less important ones. The resulting approximation yields a relatively low number of aerodynamic lag states in the subsequent state-space model. The formulation forms the basis of the MIST computer program, which is written in FORTRAN for use on the MicroVAX computer and interfaces with NASA's Interaction of Structures, Aerodynamics and Controls (ISAC) computer program. The program structure, capabilities and interfaces are outlined in the appendices, and a numerical example which utilizes Rockwell's Active Flexible Wing (AFW) model is given and discussed.
Nikiforov, Alexander; Gamez, Jose A.; Thiel, Walter; Huix-Rotllant, Miquel; Filatov, Michael
2014-09-28
Quantum-chemical computational methods are benchmarked for their ability to describe conical intersections in a series of organic molecules and models of biological chromophores. Reference results for the geometries, relative energies, and branching planes of conical intersections are obtained using ab initio multireference configuration interaction with single and double excitations (MRCISD). They are compared with the results from more approximate methods, namely, the state-interaction state-averaged restricted ensemble-referenced Kohn-Sham method, spin-flip time-dependent density functional theory, and a semiempirical MRCISD approach using an orthogonalization-corrected model. It is demonstrated that these approximate methods reproduce the ab initio reference data very well, with root-mean-square deviations in the optimized geometries of the order of 0.1 Å or less and with reasonable agreement in the computed relative energies. A detailed analysis of the branching plane vectors shows that all currently applied methods yield similar nuclear displacements for escaping the strong non-adiabatic coupling region near the conical intersections. Our comparisons support the use of the tested quantum-chemical methods for modeling the photochemistry of large organic and biological systems.
NASA Technical Reports Server (NTRS)
Murphy, P. C.
1984-01-01
An algorithm for maximum likelihood (ML) estimation is developed primarily for multivariable dynamic systems. The algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). The method determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort compared with integrating the analytically determined sensitivity equations or using a finite-difference method. Different surface-fitting methods are discussed and demonstrated. Aircraft estimation problems are solved by using both simulated and real-flight data to compare MNRES with commonly used methods; in these solutions MNRES is found to be equally accurate and substantially faster. MNRES eliminates the need to derive sensitivity equations, thus producing a more generally applicable algorithm.
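The slope-from-surface-fit idea at the heart of MNRES can be illustrated with a minimal sketch (hypothetical function names and a toy output function, not the NASA implementation): a least-squares quadratic surface is fitted to previously computed output samples in parameter space, and its linear part is read off as the local sensitivity, replacing analytically derived sensitivity equations or finite differencing.

```python
import numpy as np

def surface_fit_sensitivity(theta_samples, y_samples, theta0):
    """Estimate d(output)/d(parameters) at theta0 by least-squares
    fitting a quadratic surface to previously computed output samples,
    instead of integrating sensitivity equations or differencing."""
    d = theta_samples - theta0                       # centre on current estimate
    ones = np.ones((d.shape[0], 1))
    X = np.hstack([ones, d, d**2])                   # constant + linear + quadratic terms
    coef, *_ = np.linalg.lstsq(X, y_samples, rcond=None)
    p = theta_samples.shape[1]
    return coef[1:1 + p]                             # linear part = local slope

# toy check: output y = 3a - 2b + 0.5a^2, whose gradient at the origin is (3, -2)
rng = np.random.default_rng(0)
pts = rng.normal(size=(20, 2))
y = 3 * pts[:, 0] - 2 * pts[:, 1] + 0.5 * pts[:, 0]**2
grad = surface_fit_sensitivity(pts, y, np.zeros(2))
```

Because the fitted surface reuses outputs already computed during the iteration, the sensitivity update costs little beyond one least-squares solve per step.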
S-curve networks and an approximate method for estimating degree distributions of complex networks
NASA Astrophysics Data System (ADS)
Guo, Jin-Li
2010-12-01
In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Using statistics on China's IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model based on the S curve (logistic curve) and forecasts the growth trend of IPv4 addresses in China. The results provide reference values for optimizing the allocation of IPv4 address resources and for the development of IPv6. Based on the observed laws of IPv4 growth, namely bulk growth and a finite growth limit, the paper proposes a finite network model with bulk growth, called an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. An approximate method is developed to predict the growth dynamics of individual nodes and is used to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with simulations, obeying an approximately power-law form. The method overcomes a shortcoming of the Barabási-Albert method commonly used in current network research.
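A minimal sketch of the kind of S-curve (logistic) forecast described above, under the simplifying assumption that the saturation level K is fixed in advance so the fit linearises (the paper's actual fitting procedure and data may differ; all numbers here are synthetic):

```python
import numpy as np

def logistic(t, K, r, t0):
    # S-curve: growth toward the finite limit K at intrinsic rate r
    return K / (1.0 + np.exp(-r * (t - t0)))

def fit_logistic(t, y, K):
    """With K assumed known, log(K/y - 1) = -r*t + r*t0 is linear in t,
    so an ordinary least-squares line yields r and t0."""
    z = np.log(K / y - 1.0)
    slope, intercept = np.polyfit(t, z, 1)
    r = -slope
    t0 = intercept / r
    return r, t0

# synthetic, noise-free "address count" data with an assumed ceiling K = 300
t = np.arange(1.0, 9.0)
y = logistic(t, 300.0, 0.9, 4.0)
r, t0 = fit_logistic(t, y, 300.0)
forecast = logistic(12.0, 300.0, r, t0)   # extrapolation remains bounded by K
```

Unlike a pure power-law or exponential extrapolation, the forecast can never exceed the assumed ceiling K, which is the point of using an S curve for a finitely growing resource.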
A goal-oriented adaptive procedure for the quasi-continuum method with cluster approximation
NASA Astrophysics Data System (ADS)
Memarnahavandi, Arash; Larsson, Fredrik; Runesson, Kenneth
2015-04-01
We present a strategy for adaptive error control for the quasi-continuum (QC) method applied to molecular statics problems. The QC-method is introduced in two steps: Firstly, introducing QC-interpolation while accounting for the exact summation of all the bond-energies, we compute goal-oriented error estimators in a straightforward fashion based on the pertinent adjoint (dual) problem. Secondly, for large QC-elements the bond energy and its derivatives are typically computed using an appropriate discrete quadrature with cluster approximations, which introduces a model error. The combined error is estimated approximately based on the same dual problem in conjunction with a hierarchical strategy for approximating the residual. As a model problem, we carry out atomistic-to-continuum homogenization of a graphene monolayer, where the carbon-carbon energy bonds are modeled via the Tersoff-Brenner potential, which involves next-nearest neighbor couplings. In particular, we are interested in computing the representative response for an imperfect lattice. Within the goal-oriented framework it becomes natural to choose the macro-scale (continuum) stress as the "quantity of interest". Two different formulations are adopted: the Basic formulation and the Global formulation. The presented numerical investigation shows the accuracy and robustness of the proposed error estimator and the pertinent adaptive algorithm.
NASA Astrophysics Data System (ADS)
Lehikoinen, A.; Huttunen, J. M.; Finsterle, S.; Kowalsky, M. B.; Kaipio, J. P.
2007-05-01
We extend the previously presented methodology for imaging the evolution of electrically conductive fluids in porous media. In that method, the nonstationary inversion problem was solved using Bayesian filtering. The method was demonstrated using a synthetically generated test case where the monitored target is a time-varying water plume in an unsaturated porous medium, and the imaging modality was electrical resistance tomography (ERT). The inverse problem was formulated as a state estimation problem, which is based on observation and evolution models. As an observation model for ERT, the complete electrode model was used, and for time-varying unsaturated flow, the Richards equation was used as an evolution model. Although the "true" evolution of water flow was simulated using a heterogeneous permeability field, in the inversion step the permeability was assumed to be homogeneous. This assumption leads to approximation errors, which have been taken into account by constructing a statistical model between the different realizations of the accurate and the approximate fluid flow models. This statistical model was constructed using an ensemble of samples from the evolution model in such a way that the construction can be carried out prior to taking observations. However, the statistics of the approximation errors actually depend on the observations (through the state). In this work we extend the previously presented method so that the statistics of the approximation error are adjusted based on the observations. The basic idea of the extension is to gather those samples from the ensemble which at the current time best represent the observed state. We then determine the statistics of the approximation error based on these collated samples. The extension of the methodology provides improved estimates of water saturation distributions compared to the previously presented approaches. The proposed methodology may be extended for imaging and estimating parameters of dynamical processes.
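The offline construction of approximation-error statistics can be sketched as follows (toy scalar models standing in for the accurate heterogeneous and approximate homogeneous flow models, not the ERT/Richards-equation setup): an ensemble of parameter samples is pushed through both models, and the error mean and covariance are estimated from the differences.

```python
import numpy as np

def approximation_error_stats(accurate, approximate, samples):
    """Offline estimate of the mean and covariance of the approximation
    error e(x) = accurate(x) - approximate(x) over an ensemble of
    parameter samples, before any observations are taken."""
    errs = np.array([accurate(x) - approximate(x) for x in samples])
    return errs.mean(axis=0), np.cov(errs, rowvar=False)

# toy model pair: the approximate model drops a nonlinearity
accurate = lambda x: np.array([np.sin(x), x**2])
approximate = lambda x: np.array([x, 0.0])

rng = np.random.default_rng(3)
mean_e, cov_e = approximation_error_stats(accurate, approximate,
                                          rng.normal(0.0, 0.5, 500))
# mean_e[1] should be near E[x^2] = 0.25 for x ~ N(0, 0.5)
```

In the filtering step these statistics enter the observation-noise model, so the filter discounts exactly those features the reduced model cannot reproduce; the extension described in the abstract additionally reweights the ensemble using the observed state.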
NASA Astrophysics Data System (ADS)
Gambacurta, D.; Grasso, M.; Engel, J.
2015-09-01
We make use of a subtraction procedure, introduced to overcome double-counting problems in beyond-mean-field theories, in the second random-phase approximation (SRPA) for the first time. This procedure guarantees the stability of the SRPA (so that all excitation energies are real). We show that the method fits perfectly into nuclear density-functional theory. We illustrate applications to the monopole and quadrupole response and to low-lying 0+ and 2+ states in the nucleus 16O. We show that the subtraction procedure leads to (i) results that are weakly cutoff dependent and (ii) a considerable reduction of the SRPA downward shift with respect to the random-phase approximation (RPA) spectra (systematically found in all previous applications). This implementation of the SRPA model will allow a reliable analysis of the effects of two particle-two hole (2p2h) configurations on the excitation spectra of medium-mass and heavy nuclei.
NASA Astrophysics Data System (ADS)
Enting, I. G.
2017-04-01
Several decades of parallel developments in the calculation and analysis of series expansions for lattice statistics have led to many new insights into critical phenomena. These studies have centered on the use of the finite lattice method for series expansions in lattice statistics and the use of differential approximants in analysing such series. One of these strands of research ultimately led to the result that a number of unsolved lattice statistics problems cannot be expressed as D-finite functions. Somewhat ironically, given the power and success of differential approximants in analysing series, neither the assumed functional form nor any finite generalisation thereof can fit such cases exactly. In honour of the 70th birthday of Professor A J Guttmann.
Domain decomposition methods for systems of conservation laws: Spectral collocation approximations
NASA Technical Reports Server (NTRS)
Quarteroni, Alfio
1989-01-01
Hyperbolic systems of conservation laws are considered which are discretized in space by spectral collocation methods and advanced in time by finite difference schemes. At any time level a domain decomposition method based on an iteration-by-subdomain procedure is introduced, yielding at each step a sequence of independent subproblems (one for each subdomain) that can be solved simultaneously. The method is set for a general nonlinear problem in several space variables. The convergence analysis, however, is carried out only for a linear one-dimensional system with continuous solutions. A precise form of the error reduction factor at each iteration is derived. Although the method is applied here to the case of spectral collocation approximation only, the idea is fairly general and can be used in a different context as well. For instance, its application to space discretization by finite differences is straightforward.
Approximate method for calculating heating rates on three-dimensional vehicles
NASA Astrophysics Data System (ADS)
Hamilton, H. Harris; Greene, Francis A.; Dejarnette, F. R.
1994-05-01
An approximate method for calculating heating rates on three-dimensional vehicles at angle of attack is presented. The method is based on the axisymmetric analog for three-dimensional boundary layers and uses a generalized body-fitted coordinate system. Edge conditions for the boundary-layer solution are obtained from an inviscid flowfield solution, and because of the coordinate system used, the method is applicable to any blunt body geometry for which an inviscid flowfield solution can be obtained. The method is validated by comparing with experimental heating data and with thin-layer Navier-Stokes calculations on the shuttle orbiter at both wind-tunnel and flight conditions and with thin-layer Navier-Stokes calculations on the HL-20 at wind-tunnel conditions.
Approximate method for calculating heating rates on three-dimensional vehicles
NASA Technical Reports Server (NTRS)
Hamilton, H. Harris; Greene, Francis A.; Dejarnette, F. R.
1994-01-01
An approximate method for calculating heating rates on three-dimensional vehicles at angle of attack is presented. The method is based on the axisymmetric analog for three-dimensional boundary layers and uses a generalized body-fitted coordinate system. Edge conditions for the boundary-layer solution are obtained from an inviscid flowfield solution, and because of the coordinate system used, the method is applicable to any blunt body geometry for which an inviscid flowfield solution can be obtained. The method is validated by comparing with experimental heating data and with thin-layer Navier-Stokes calculations on the shuttle orbiter at both wind-tunnel and flight conditions and with thin-layer Navier-Stokes calculations on the HL-20 at wind-tunnel conditions.
Chemical physics without the Born-Oppenheimer approximation: The molecular coupled-cluster method
NASA Astrophysics Data System (ADS)
Monkhorst, Hendrik J.
1987-08-01
The Born-Oppenheimer (BO) and Born-Huang (BH) treatments of molecular eigenstates are reexamined. It is argued that in application of the BO approximation to nonrigid molecules and chemical dynamics involving single potential-energy surfaces (PES's), errors on the order of tens of percents can easily occur in many computed properties. Introduction of a BH expansion (in BO states) will always lead to poor convergence when the BO approximation fails; its diagonal (or adiabatic) approximation will not change this situation. The main problem in the above applications is the absence of well-developed, well-separated minima in the PES (or no minima at all). Inspired by a non-BO view of a molecule by Essén [Int. J. Quantum Chem. 12, 721 (1977)], a molecular coupled-cluster (MCC) method is formulated. An Essén molecule consists of neutral subunits ("atoms"), weakly interacting ("bonds") in some spatial arrangement ("structure"). The quasiseparation in collective and individual motions within the molecule comes about by virtue of the virial theorem, not the smallness of the electron-to-nuclear mass ratio. The MCC method not only should converge well in the cluster sizes, but it also is capable of describing electronic shell and molecular geometric structures. It can be viewed as the workable formalism for Essén's physical picture of a molecule. The time-independent and time-dependent versions are described. The latter one is useful for scattering, chemical dynamics, laser chemistry, half-collisions, and any other phenomena that can be described as the time evolution of many-particle wave packets. Close relationship to time-dependent Hartree-Fock theory exists. A few implementational aspects are discussed, such as symmetry, conservation laws, approximations, numerical techniques, as well as a possible relation with a non-BO PES. Appendixes contain mathematical details.
Approximate-model based estimation method for dynamic response of forging processes
NASA Astrophysics Data System (ADS)
Lei, Jie; Lu, Xinjiang; Li, Yibo; Huang, Minghui; Zou, Wei
2015-03-01
Many high-quality forging productions require the large-sized hydraulic press machine (HPM) to have a desirable dynamic response. Since the forging process under low velocity is complex, its response is difficult to estimate, which often makes the desirable low-velocity forging condition difficult to obtain. So far, little work has addressed estimating the dynamic response of the forging process under low velocity. In this paper, an approximate-model based estimation method is proposed to estimate the dynamic response of the forging process under low velocity. First, an approximate model is developed to represent the forging process of this complex HPM around the low-velocity working point. While guaranteeing modeling accuracy, the model greatly eases the complexity of the subsequent estimation of the dynamic response because it has a good linear structure. On this basis, the dynamic response is estimated and the conditions for stability, vibration, and creep are derived from the solution for the velocity. All these analytical results are further verified by both simulations and experiment. In the simulation verification of the modeling, the original movement model and the derived approximate model always have the same dynamic responses with very small approximation error. The simulations and experiment demonstrate the effectiveness of the derived conditions for stability, vibration, and creep; these conditions will benefit both the prediction of the dynamic response of the forging process and the design of the controller for high-quality forging. The proposed method is an effective solution for achieving the desirable low-velocity forging condition.
Wu, Fuke; Tian, Tianhai; Rawlings, James B; Yin, George
2016-05-07
The frequently used reduction technique is based on the chemical master equation for stochastic chemical kinetics with two-time scales, which yields the modified stochastic simulation algorithm (SSA). For the chemical reaction processes involving a large number of molecular species and reactions, the collection of slow reactions may still include a large number of molecular species and reactions. Consequently, the SSA is still computationally expensive. Because the chemical Langevin equations (CLEs) can effectively work for a large number of molecular species and reactions, this paper develops a reduction method based on the CLE by the stochastic averaging principle developed in the work of Khasminskii and Yin [SIAM J. Appl. Math. 56, 1766-1793 (1996); ibid. 56, 1794-1819 (1996)] to average out the fast-reacting variables. This reduction method leads to a limit averaging system, which is an approximation of the slow reactions. Because in the stochastic chemical kinetics, the CLE is seen as the approximation of the SSA, the limit averaging system can be treated as the approximation of the slow reactions. As an application, we examine the reduction of computation complexity for the gene regulatory networks with two-time scales driven by intrinsic noise. For linear and nonlinear protein production functions, the simulations show that the sample average (expectation) of the limit averaging system is close to that of the slow-reaction process based on the SSA. It demonstrates that the limit averaging system is an efficient approximation of the slow-reaction process in the sense of the weak convergence.
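As a rough illustration of the CLE that the reduction method starts from, here is an Euler-Maruyama integration of the chemical Langevin equation for a toy birth-death network (hypothetical rate constants; not the gene-regulatory example or the averaging step of the paper):

```python
import numpy as np

def cle_step(x, nu, propensities, dt, rng):
    """One Euler-Maruyama step of the chemical Langevin equation:
    dX = nu^T a(X) dt + nu^T diag(sqrt(a(X))) dW."""
    a = propensities(x)
    drift = nu.T @ a
    noise = nu.T @ (np.sqrt(a * dt) * rng.normal(size=a.size))
    return np.maximum(x + drift * dt + noise, 0.0)   # keep counts non-negative

# toy birth-death network: 0 -> X at rate k1, X -> 0 at rate k2*x
k1, k2 = 10.0, 0.1
nu = np.array([[1.0], [-1.0]])                       # stoichiometry (reactions x species)
propensities = lambda x: np.array([k1, k2 * x[0]])

rng = np.random.default_rng(0)
x = np.array([100.0])                                # start at the deterministic mean k1/k2
traj = []
for _ in range(60000):
    x = cle_step(x, nu, propensities, 1e-3, rng)
    traj.append(x[0])
mean_x = np.mean(traj[20000:])                       # hovers near k1/k2 = 100
```

In a two-time-scale system the fast species would equilibrate quickly under such dynamics, and the averaging principle replaces them by their quasi-stationary statistics in the slow equations.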
NASA Technical Reports Server (NTRS)
Monchick, L.; Green, S.
1977-01-01
Two dimensionality-reducing approximations, the j_z-conserving coupled states (sometimes called the centrifugal decoupling) method and the effective potential method, were applied to collision calculations of He with CO and with HCl. The coupled states method was found to be sensitive to the interpretation of the centrifugal angular momentum quantum number in the body-fixed frame, but the choice leading to the original McGuire-Kouri expression for the scattering amplitude - and to the simplest formulas - proved to be quite successful in reproducing differential and gas kinetic cross sections. The computationally cheaper effective potential method was much less accurate.
NASA Technical Reports Server (NTRS)
Karpel, M.
1994-01-01
Various control analysis, design, and simulation techniques of aeroservoelastic systems require the equations of motion to be cast in a linear, time-invariant state-space form. In order to account for unsteady aerodynamics, rational function approximations must be obtained to represent them in the first order equations of the state-space formulation. A computer program, MIST, has been developed which determines minimum-state approximations of the coefficient matrices of the unsteady aerodynamic forces. The Minimum-State Method facilitates the design of lower-order control systems, analysis of control system performance, and near real-time simulation of aeroservoelastic phenomena such as the outboard-wing acceleration response to gust velocity. Engineers using this program will be able to calculate minimum-state rational approximations of the generalized unsteady aerodynamic forces. Using the Minimum-State formulation of the state-space equations, they will be able to obtain state-space models with good open-loop characteristics while reducing the number of aerodynamic equations by an order of magnitude more than traditional approaches. These low-order state-space mathematical models are good for design and simulation of aeroservoelastic systems. The computer program, MIST, accepts tabular values of the generalized aerodynamic forces over a set of reduced frequencies. It then determines approximations to these tabular data in the Laplace domain using rational functions. MIST provides the capability to select the denominator coefficients in the rational approximations, to selectively constrain the approximations without increasing the problem size, and to determine and emphasize critical frequency ranges in determining the approximations. MIST has been written to allow two data weighting options. The first weighting is a traditional normalization of the aerodynamic data to the maximum unit value of each aerodynamic coefficient. The second allows weighting each tabulated coefficient, at each reduced frequency value, according to the effect of an incremental error in that coefficient on the aeroelastic characteristics of the system.
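A minimal sketch of rational-function approximation of tabulated aerodynamic coefficients, using the simpler Roger form with fixed lag roots rather than MIST's minimum-state form (synthetic data and hypothetical coefficient values):

```python
import numpy as np

def roger_fit(k, Q, betas):
    """Least-squares rational-function fit of tabulated aerodynamic
    coefficient samples Q at reduced frequencies k (Roger's form):
      Q(ik) ~ A0 + A1*(ik) + A2*(ik)^2 + sum_j B_j * ik/(ik + beta_j)
    with the lag roots beta_j fixed in advance."""
    s = 1j * k
    cols = [np.ones_like(s), s, s**2] + [s / (s + b) for b in betas]
    X = np.column_stack(cols)
    # stack real and imaginary parts so the fitted coefficients come out real
    Xri = np.vstack([X.real, X.imag])
    yri = np.concatenate([Q.real, Q.imag])
    coef, *_ = np.linalg.lstsq(Xri, yri, rcond=None)
    return coef

# synthetic "tabulated" data generated from the same model form
k = np.linspace(0.05, 1.0, 12)
Q = 1.0 + 0.3 * (1j * k) - 0.1 * (1j * k)**2 + 0.5 * (1j * k) / (1j * k + 0.2)
coef = roger_fit(k, Q, betas=[0.2])   # recovers [1.0, 0.3, -0.1, 0.5]
```

Each lag term ik/(ik + beta_j) becomes one first-order aerodynamic lag state in the state-space model; the minimum-state method's advantage is that it shares a small common set of such states across the whole coefficient matrix instead of one set per entry.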
Shu, Yu-Chen; Chern, I-Liang; Chang, Chien C.
2014-10-15
Most elliptic interface solvers become complicated for complex interface problems at those “exceptional points” where there are not enough neighboring interior points for high order interpolation. Such complications increase especially in three dimensions. Usually, the solvers are thus reduced to low order accuracy. In this paper, we classify these exceptional points and propose two recipes to maintain the order of accuracy there, aiming at improving the previous coupling interface method [26]. Yet the idea is also applicable to other interface solvers. The main idea is to have at least first order approximations for second order derivatives at those exceptional points. Recipe 1 is to use the finite difference approximation for the second order derivatives at a nearby interior grid point, whenever this is possible. Recipe 2 is to flip domain signatures and introduce a ghost state so that a second-order method can be applied. This ghost state is a smooth extension of the solution at the exceptional point from the other side of the interface. The original state is recovered by post-processing using nearby states and jump conditions. The choice of recipes is determined by a classification scheme of the exceptional points. The method renders the solution and its gradient uniformly second-order accurate in the entire computed domain. Numerical examples are provided to illustrate the second order accuracy of the proposed method in approximating the gradients of the original states for some complex interfaces which we had previously tested in two and three dimensions, and for a real molecule (1D63), which is double-helix shaped and composed of hundreds of atoms.
NASA Astrophysics Data System (ADS)
Liu, Jie; Sun, Xingsheng; Han, Xu; Jiang, Chao; Yu, Dejie
2015-05-01
Based on the Gegenbauer polynomial expansion theory and a regularization method, an analytical method is proposed to identify dynamic loads acting on stochastic structures. Dynamic loads are expressed as functions of time and random parameters in the time domain, and the forward model of dynamic load identification is established through the discretized convolution integral of the loads and the corresponding unit-pulse response functions of the system. Random parameters are approximated through random variables with λ-probability density functions (λ-PDFs) or their derivative PDFs. For this kind of random variable, Gegenbauer polynomial expansion is the unique correct choice to transform the problem of load identification for a stochastic structure into an equivalent deterministic system. Via this equivalent deterministic system, the load identification problem of a stochastic structure can be solved by any available deterministic method. With measured responses containing noise, the improved regularization operator is adopted to overcome the ill-posedness of load reconstruction and to obtain stable, approximate solutions of the inverse problem and valid assessments of the statistics of the identified loads. Numerical simulations demonstrate that, for stochastic structures, the identification and assessment of dynamic loads are achieved steadily and effectively by the presented method.
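For the deterministic core of such a method, the forward model is a discretized convolution y = H f, and a Tikhonov-regularized least-squares inversion recovers the load history. A minimal sketch with a toy unit-pulse response (not the Gegenbauer expansion or the paper's improved regularization operator):

```python
import numpy as np

def identify_load(H, y, lam):
    """Tikhonov-regularised least squares: minimise ||H f - y||^2 + lam*||f||^2.
    H is the discretised convolution matrix of unit-pulse responses."""
    A = H.T @ H + lam * np.eye(H.shape[1])
    return np.linalg.solve(A, H.T @ y)

# toy deterministic system: exponentially decaying unit-pulse response
n, dt = 100, 0.01
h = np.exp(-5.0 * np.arange(n) * dt) * dt
H = np.array([[h[i - j] if i >= j else 0.0 for j in range(n)]
              for i in range(n)])                     # lower-triangular Toeplitz

f_true = np.sin(2 * np.pi * np.arange(n) * dt)        # load history to recover
y = H @ f_true + 1e-4 * np.random.default_rng(2).normal(size=n)  # noisy response
f_est = identify_load(H, y, lam=1e-6)
```

Without the lam term the deconvolution amplifies measurement noise; the regularization parameter trades a small bias for stability of the reconstructed load.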
James, Kevin R; Dowling, David R
2008-09-01
In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L¹ error-norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L¹ values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations, and up to 10^N when N>1.
All-electron self-consistent GW approximation based on full-potential LMTO method
NASA Astrophysics Data System (ADS)
Faleev, Sergey; van Schilfgaarde, Mark; Kotani, Takao
2003-03-01
We present a new all-electron self-consistent implementation of the GW approximation based on the full-potential LMTO method. The dynamically screened Coulomb interaction W is expanded in a mixed basis which consists of two contributions: local atom-centered functions confined to muffin-tin spheres, and plane waves with their overlap with the local functions projected out. The former can include any of the core states; thus the core and valence states can be treated on an equal footing. Self-consistency is achieved by the following iteration cycle: using eigenfunctions of the LDA Hamiltonian with an added self-energy term, the next-iteration self-energy is calculated in the GW approximation. The non-local and energy-dependent self-energy term is then added to the LDA Hamiltonian, and the next-iteration wave functions and energies are obtained by diagonalization. The CPU time of otherwise numerically prohibitive self-consistent GW simulations has been reduced by an order of magnitude by utilizing the dispersion relations for the polarization operator. The results obtained for the band gaps of Si and MnO are in good agreement with the experimental values, noticeably better than the results obtained in the non-self-consistent GW and LDA approximations.
Efficient time-sampling method in Coulomb-corrected strong-field approximation.
Xiao, Xiang-Ru; Wang, Mu-Xue; Xiong, Wei-Hao; Peng, Liang-You
2016-11-01
One of the main goals of strong-field physics is to understand the complex structures formed in the momentum plane of the photoelectron. For this purpose, different semiclassical methods have been developed to seek an intuitive picture of the underlying mechanism. The most popular ones are the quantum trajectory Monte Carlo (QTMC) method and the Coulomb-corrected strong-field approximation (CCSFA), both of which take the classical action into consideration and can describe the interference effect. The CCSFA is more widely applicable in a large range of laser parameters due to its nonadiabatic nature in treating the initial tunneling dynamics. However, the CCSFA is much more time consuming than the QTMC method because of the numerical solution to the saddle-point equations. In the present work, we present a time-sampling method to overcome this disadvantage. Our method is as efficient as the fast QTMC method and as accurate as the original treatment in CCSFA. The performance of our method is verified by comparing the results of these methods with that of the exact solution to the time-dependent Schrödinger equation.
Simplified method for including spatial correlations in mean-field approximations
NASA Astrophysics Data System (ADS)
Markham, Deborah C.; Simpson, Matthew J.; Baker, Ruth E.
2013-06-01
Biological systems involving proliferation, migration, and death are observed across all scales. For example, they govern cellular processes such as wound healing, as well as the population dynamics of groups of organisms. In this paper, we provide a simplified method for correcting mean-field approximations of volume-excluding birth-death-movement processes on a regular lattice. An initially uniform distribution of agents on the lattice may give rise to spatial heterogeneity, depending on the relative rates of proliferation, migration, and death. Many frameworks chosen to model these systems neglect spatial correlations, which can lead to inaccurate predictions of their behavior. For example, the logistic model is frequently chosen, which is the mean-field approximation in this case. This mean-field description can be corrected by including a system of ordinary differential equations for pairwise correlations between lattice site occupancies at various lattice distances. In this work we discuss difficulties with this method and provide a simplification in the form of a partial differential equation description for the evolution of pairwise spatial correlations over time. We test our simplified model against the more complex corrected mean-field model, finding excellent agreement. We show how our model successfully predicts system behavior in regions where the mean-field approximation shows large discrepancies. Additionally, we investigate regions of parameter space where migration is reduced relative to proliferation, which has not been examined in detail before, and we find our method successfully corrects the deviations observed in the mean-field model in these parameter regimes.
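The logistic model that serves as the mean-field approximation here can be sketched in a few lines. The sketch below integrates dC/dt = p*C*(1 - C) - d*C for the average lattice occupancy C with forward Euler; the parameter names are illustrative, not the paper's notation:

```python
def logistic_mean_field(c0, prolif, death, dt=0.01, t_end=50.0):
    """Forward-Euler integration of the mean-field (logistic) ODE
    dC/dt = prolif*C*(1 - C) - death*C for the average occupancy C."""
    c, t = c0, 0.0
    while t < t_end:
        c += dt * (prolif * c * (1.0 - c) - death * c)
        t += dt
    return c
```

When proliferation dominates death, the occupancy settles at the steady state C* = 1 - death/prolif; the pairwise-correlation corrections discussed in the abstract quantify how the true lattice dynamics deviate from this curve.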
Photovoltaic Generation Data Cleaning Method Based on Approximately Periodic Time Series
NASA Astrophysics Data System (ADS)
Zhang, J.; Zhang, Sh; Liang, J.; Tian, B.; Hou, Z.; Liu, B. Zh
2017-05-01
Data cleaning of photovoltaic (PV) power generation data is an important step in preprocessing for further utilization, such as PV power generation forecasting. The PV power generation data can be treated as a time series. An improved data cleaning method based on approximately periodic time series is proposed. First, the abnormal data in the PV time series are classified into three types of outliers. These three types of outliers are then quantified based on the physical characteristics of PV power generation, and effective cleaning implementations for each type are described, taking into account the rated capacity of the PV station and the period of the PV data time series. Finally, the data cleaning method is tested on PV generation data from a real power grid. The results show that this data cleaning method can effectively improve PV data quality and provide an effective support tool for further application of PV data.
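The abstract does not enumerate its three outlier types, so the sketch below uses hypothetical but physically plausible categories (negative readings, readings above rated capacity, and nonzero output outside daylight hours) to illustrate what a rule-based PV cleaning pass of this kind can look like:

```python
def flag_pv_outliers(power, rated_capacity, daylight):
    """Flag implausible PV samples. The three categories here are
    illustrative, not the paper's classification: negative readings,
    readings above the station's rated capacity, and nonzero output
    outside daylight hours."""
    flags = []
    for p, is_day in zip(power, daylight):
        if p < 0:
            flags.append("negative")
        elif p > rated_capacity:
            flags.append("over_capacity")
        elif not is_day and p > 0:
            flags.append("night_nonzero")
        else:
            flags.append("ok")
    return flags
```

Flagged samples would then be repaired using values from neighbouring periods of the approximately periodic series, which is the step the paper's method refines.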
NASA Astrophysics Data System (ADS)
Barreiro, Andrea K.; Ly, Cheng
2017-08-01
Rapid experimental advances now enable simultaneous electrophysiological recording of neural activity at single-cell resolution across large regions of the nervous system. Models of this neural network activity will necessarily increase in size and complexity, thus increasing the computational cost of simulating them and the challenge of analyzing them. Here we present a method to approximate the activity and firing statistics of a general firing rate network model (of the Wilson-Cowan type) subject to noisy correlated background inputs. The method requires solving a system of transcendental equations and is fast compared to Monte Carlo simulations of coupled stochastic differential equations. We implement the method with several examples of coupled neural networks and show that the results are quantitatively accurate even with moderate coupling strengths and an appreciable amount of heterogeneity in many parameters. This work should be useful for investigating how various neural attributes qualitatively affect the spiking statistics of coupled neural networks.
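A minimal sketch of solving steady-state firing-rate equations of the Wilson-Cowan type, r_i = f(sum_j W_ij r_j + b_i), by damped fixed-point iteration is shown below. This is a simplification of the abstract's method, which additionally accounts for noisy correlated background inputs and spiking statistics; the sigmoid transfer function and starting point are illustrative choices:

```python
import math

def solve_firing_rates(W, b, damping=0.5, tol=1e-10, max_iter=10000):
    """Damped fixed-point iteration for the coupled transcendental
    steady-state equations r_i = f(sum_j W_ij r_j + b_i), with a
    sigmoid transfer function f."""
    f = lambda x: 1.0 / (1.0 + math.exp(-x))
    n = len(b)
    r = [0.5] * n
    for _ in range(max_iter):
        r_new = [f(sum(W[i][j] * r[j] for j in range(n)) + b[i]) for i in range(n)]
        if max(abs(new - old) for new, old in zip(r_new, r)) < tol:
            return r_new
        # Damping stabilizes the iteration for stronger couplings.
        r = [(1 - damping) * old + damping * new for old, new in zip(r, r_new)]
    return r
```

Solving such a system is far cheaper than Monte Carlo simulation of the corresponding coupled stochastic differential equations, which is the speedup the abstract emphasizes.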
Sensitivity and Approximation of Coupled Fluid-Structure Equations by Virtual Control Method
Murea, Cornel Marius; Vazquez, Carlos
2005-08-15
The formulation of a particular fluid-structure interaction as an optimal control problem is the departure point of this work. The control is the vertical component of the force acting on the interface, and the observation is the vertical component of the velocity of the fluid on the interface. This approach permits us to solve the coupled fluid-structure problem by partitioned procedures. The analytic expression for the gradient of the cost function is obtained in order to devise accurate numerical methods for the minimization problem. Numerical results arising from blood flow in arteries are presented. To solve the optimal control problem numerically, we use a quasi-Newton method which employs the analytic gradient of the cost function; the approximation of the inverse Hessian is updated by the Broyden-Fletcher-Goldfarb-Shanno (BFGS) scheme. This algorithm is faster than fixed point with relaxation or block Newton methods.
An approximate method for calculating heating rates on three-dimensional vehicles
NASA Technical Reports Server (NTRS)
Hamilton, H. H., II; Greene, Francis A.; Dejarnette, Fred R.
1993-01-01
An approximate method for calculating heating rates on three-dimensional vehicles at angle of attack is presented. The method is based on the axisymmetric analog for three-dimensional boundary layers and uses a generalized body fitted coordinate system. Edge conditions for the boundary layer solution are obtained from an inviscid flowfield solution, and because of the coordinate system used the method is applicable to any blunt body geometry for which an inviscid flowfield solution can be obtained. It is validated by comparing with experimental heating data and with Navier-Stokes calculations on the Shuttle orbiter at both wind tunnel and flight conditions and with Navier-Stokes calculations on the HL-20 at wind tunnel conditions.
Car-Parrinello treatment for an approximate density-functional theory method
NASA Astrophysics Data System (ADS)
Rapacioli, Mathias; Barthel, Robert; Heine, Thomas; Seifert, Gotthard
2007-03-01
The authors formulate a Car-Parrinello treatment for the density-functional-based tight-binding method with and without self-consistent charge corrections. This method avoids the numerical solution of the secular equations, the principal drawback for large systems if the linear combination of atomic orbital ansatz is used. The formalism is applicable to finite systems and for supercells using periodic boundary conditions within the Γ-point approximation. They show that the methodology allows the application of modern computational techniques such as sparse matrix storage and massive parallelization in a straightforward way. All present bottlenecks concerning computer time and consumption of memory and memory bandwidth can be removed. They illustrate the performance of the method by direct comparison with Born-Oppenheimer molecular dynamics calculations. Water molecules, benzene, the C60 fullerene, and liquid water have been selected as benchmark systems.
NASA Astrophysics Data System (ADS)
Sweilam, N. H.; Abou Hasan, M. M.
2016-08-01
This paper reports a new spectral algorithm for obtaining an approximate solution for the Lévy-Feller diffusion equation depending on Legendre polynomials and Chebyshev collocation points. The Lévy-Feller diffusion equation is obtained from the standard diffusion equation by replacing the second-order space derivative with a Riesz-Feller derivative. A new formula expressing explicitly any fractional-order derivatives, in the sense of the Riesz-Feller operator, of Legendre polynomials of any degree in terms of Jacobi polynomials is proved. Moreover, the Chebyshev-Legendre collocation method, together with the implicit Euler method, is used to reduce these types of differential equations to a system of algebraic equations which can be solved numerically. Numerical results with comparisons are given to confirm the reliability of the proposed method for the Lévy-Feller diffusion equation.
An approximate factorization method for inverse medium scattering with unknown buried objects
NASA Astrophysics Data System (ADS)
Qu, Fenglong; Yang, Jiaqing; Zhang, Bo
2017-03-01
This paper is concerned with the inverse problem of scattering of time-harmonic acoustic waves by an inhomogeneous medium with different kinds of unknown buried objects inside. By constructing a sequence of operators which are small perturbations of the far-field operator in a suitable way, we prove that each operator in this sequence has a factorization satisfying the Range Identity. We then develop an approximate factorization method for recovering the support of the inhomogeneous medium from the far-field data. Finally, numerical examples are provided to illustrate the practicability of the inversion algorithm.
Gu, M G; Kong, F H
1998-06-23
We propose a general procedure for solving incomplete data estimation problems. The procedure can be used to find the maximum likelihood estimate or to solve estimating equations in difficult cases such as estimation with the censored or truncated regression model, the nonlinear structural measurement error model, and the random effects model. The procedure is based on the general principle of stochastic approximation and the Markov chain Monte Carlo method. Applying the theory of adaptive algorithms, we derive conditions under which the proposed procedure converges. Simulation studies also indicate that the proposed procedure consistently converges to the maximum likelihood estimate for the structural measurement error logistic regression model.
Approximate direct reduction method: infinite series reductions to the perturbed mKdV equation
NASA Astrophysics Data System (ADS)
Jiao, Xiao-Yu; Lou, Sen-Yue
2009-09-01
The approximate direct reduction method is applied to the perturbed mKdV equation with weak fourth order dispersion and weak dissipation. The similarity reduction solutions of different orders conform to formal coherence, accounting for infinite series reduction solutions to the original equation and general formulas of similarity reduction equations. Painlevé II type equations, hyperbolic secant and Jacobi elliptic function solutions are obtained for zero-order similarity reduction equations. Higher order similarity reduction equations are linear variable coefficient ordinary differential equations.
An Approximate Method for Analysis of Solitary Waves in Nonlinear Elastic Materials
NASA Astrophysics Data System (ADS)
Rushchitsky, J. J.; Yurchuk, V. N.
2016-05-01
Two types of solitary elastic waves are considered: a longitudinal plane displacement wave (longitudinal displacements along the abscissa axis of a Cartesian coordinate system) and a radial cylindrical displacement wave (displacements in the radial direction of a cylindrical coordinate system). The basic innovation is the use of nonlinear wave equations similar in form to describe these waves and the use of the same approximate method to analyze these equations. The distortion of the wave profile described by Whittaker (plane wave) or Macdonald (cylindrical wave) functions is described theoretically.
Approximate method for calculating free vibrations of a large-wind-turbine tower structure
NASA Technical Reports Server (NTRS)
Das, S. C.; Linscott, B. S.
1977-01-01
A set of ordinary differential equations was derived for a simplified structural dynamic lumped-mass model of a typical large-wind-turbine tower structure. Dunkerley's equation was used to arrive at a solution for the fundamental natural frequencies of the tower in bending and torsion. The ERDA-NASA 100-kW wind turbine tower structure was modeled, and the fundamental frequencies were determined by the simplified method described. The approximate fundamental natural frequencies for the tower agree within 18 percent with test data and previously analyzed predictions.
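Dunkerley's equation, which the abstract applies to the tower model, combines the natural frequencies f_i obtained with each mass acting alone via 1/f^2 ~= sum_i 1/f_i^2, and is known to give a lower bound on the true fundamental frequency. A minimal sketch:

```python
import math

def dunkerley_fundamental_frequency(component_freqs_hz):
    """Dunkerley's approximation for the fundamental natural frequency
    of a combined system: 1/f^2 ~= sum_i 1/f_i^2, where f_i is the
    natural frequency with only the i-th mass present. The result
    underestimates (bounds from below) the true fundamental frequency."""
    return 1.0 / math.sqrt(sum(1.0 / f**2 for f in component_freqs_hz))
```

For example, component frequencies of 3 Hz and 4 Hz combine to 2.4 Hz, always below the smallest individual frequency.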
Ghosh, Debashree
2014-03-07
Hybrid quantum mechanics/molecular mechanics (QM/MM) methods provide an attractive way to closely retain the accuracy of the QM method with the favorable computational scaling of the MM method. Therefore, it is not surprising that QM/MM methods are being increasingly used for large chemical/biological systems. Hybrid equation-of-motion coupled cluster singles and doubles/effective fragment potential (EOM-CCSD/EFP) methods have been developed over the last few years to understand the effect of solvents and other condensed phases on the electronic spectra of chromophores. However, the computational cost of this approach is still dominated by the steep scaling of the EOM-CCSD method. In this work, we propose and implement perturbative approximations to the EOM-CCSD method in this hybrid scheme to reduce the cost of EOM-CCSD/EFP. The timings and accuracy of this hybrid approach are tested for the calculation of ionization energies, excitation energies, and electron affinities of microsolvated nucleic acid bases (thymine and cytosine), phenol, and phenolate.
Fine Mapping Causal Variants with an Approximate Bayesian Method Using Marginal Test Statistics.
Chen, Wenan; Larrabee, Beth R; Ovsyannikova, Inna G; Kennedy, Richard B; Haralambieva, Iana H; Poland, Gregory A; Schaid, Daniel J
2015-07-01
Two recently developed fine-mapping methods, CAVIAR and PAINTOR, demonstrate better performance over other fine-mapping methods. They also have the advantage of using only the marginal test statistics and the correlation among SNPs. Both methods leverage the fact that the marginal test statistics asymptotically follow a multivariate normal distribution and are likelihood based. However, their relationship with Bayesian fine mapping, such as BIMBAM, is not clear. In this study, we first show that CAVIAR and BIMBAM are actually approximately equivalent to each other. This leads to a fine-mapping method using marginal test statistics in the Bayesian framework, which we call CAVIAR Bayes factor (CAVIARBF). Another advantage of the Bayesian framework is that it can answer both association and fine-mapping questions. We also used simulations to compare CAVIARBF with other methods under different numbers of causal variants. The results showed that both CAVIARBF and BIMBAM have better performance than PAINTOR and other methods. Compared to BIMBAM, CAVIARBF has the advantage of using only marginal test statistics and takes about one-quarter to one-fifth of the running time. We applied different methods on two independent cohorts of the same phenotype. Results showed that CAVIARBF, BIMBAM, and PAINTOR selected the same top 3 SNPs; however, CAVIARBF and BIMBAM had better consistency in selecting the top 10 ranked SNPs between the two cohorts. Software is available at https://bitbucket.org/Wenan/caviarbf.
NASA Astrophysics Data System (ADS)
Liu, Jie; Sun, Xingsheng; Li, Kun; Jiang, Chao; Han, Xu
2015-11-01
Aiming at structures containing random parameters with multi-peak probability density functions (PDFs) or large coefficients of variation, an analytical method of probability density function discretization and approximation (PDFDA) is proposed for dynamic load identification. Dynamic loads are expressed as functions of time and random parameters in the time domain, and the forward model is established through the discretized convolution integral of the loads and the corresponding unit-pulse response functions. The PDF of each random parameter is discretized into several subintervals, and in each subinterval the original PDF curve is approximated via a uniform distribution PDF with equal probability value. Then the joint distribution model is built and the equivalent deterministic equations are solved to identify the unknown loads. Inverse analysis is performed separately for each variable in the joint distribution model through regularization, because the measured responses are noise-contaminated. In order to assess the accuracy of the identified results, PDF curves and statistical properties of the loads are obtained based on the specially assumed distributions of the identified loads. Numerical simulations demonstrate the efficiency and superiority of the presented method.
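The discretization step described above, splitting each parameter's PDF into subintervals of equal probability mass and approximating each by a uniform density, can be sketched for a single Gaussian parameter using the quantile function. The normal distribution and `n_bins` are illustrative choices, not the paper's (which also handles multi-peak PDFs):

```python
from statistics import NormalDist

def equal_probability_bins(dist, n_bins):
    """Return the inner bin edges (quantiles) that split a continuous
    distribution into n_bins subintervals of equal probability 1/n_bins.
    Each subinterval is then approximated by a uniform density in the
    PDFDA scheme; the outermost intervals need truncation to finite
    support before that approximation is applied."""
    return [dist.inv_cdf(k / n_bins) for k in range(1, n_bins)]
```

For a standard normal split into four equal-probability bins, the inner edges are simply the quartiles.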
Estimating the Bias of Local Polynomial Approximation Methods Using the Peano Kernel
Blair, J.; Machorro, E.; Luttman, A.
2013-03-01
The determination of uncertainty of an estimate requires both the variance and the bias of the estimate. Calculating the variance of local polynomial approximation (LPA) estimates is straightforward. We present a method, using the Peano Kernel Theorem, to estimate the bias of LPA estimates and show how this can be used to optimize the LPA parameters in terms of the bias-variance tradeoff. Figures of merit are derived and values calculated for several common methods. The results in the literature are expanded by giving bias error bounds that are valid for all lengths of the smoothing interval, generalizing the currently available asymptotic results that are only valid in the limit as the length of this interval goes to zero.
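A concrete instance of the bias being estimated: for a degree-0 local polynomial (a plain moving average) applied to the test signal f(x) = x^2, the bias at the window centre is exactly h^2 * m*(m+1)/3 for a (2m+1)-point window of spacing h. The sketch below computes it empirically; the paper's Peano-kernel machinery generalizes this to arbitrary LPA orders and gives bounds valid for all window lengths:

```python
def moving_average_bias_demo(m, h):
    """Bias of a degree-0 local polynomial (moving average) estimate at
    the centre of a (2m+1)-point window of spacing h, for f(x) = x^2.
    The result equals h**2 * m * (m + 1) / 3 exactly."""
    f = lambda x: x * x
    estimate = sum(f(i * h) for i in range(-m, m + 1)) / (2 * m + 1)
    return estimate - f(0.0)
```

Widening the window (larger m) reduces the variance of the estimate but grows this bias quadratically, which is the bias-variance tradeoff the figures of merit quantify.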
Approximate method for solving relaxation problems in terms of material's damagability under creep
Nikitenko, A.F.; Sukhorukov, I.V.
1995-03-01
The technology of thermoforming under creep and superplasticity conditions is finding increasing application in machine building for producing articles of a preset shape. After a part is made there are residual stresses in it, which lead to its warping. To remove residual stresses, moulded articles are usually exposed to thermal fixation, i.e., the part is held in a compressed state at a certain temperature. Thermal fixation is simply the process of residual stress relaxation, followed by accumulation of total creep in the material. Therefore the necessity to develop engineering methods for calculating the time of thermal fixation and relaxation of residual stresses to a safe level, not resulting in warping, becomes evident. The authors present an approximate method for calculating the stress-strain rate of a body during relaxation. They use a system of equations which describes a material's creep, simultaneously taking into account the accumulation of damage in it.
Mariño, Inés P; Míguez, Joaquín
2005-11-01
We introduce a numerical approximation method for estimating an unknown parameter of a (primary) chaotic system which is partially observed through a scalar time series. Specifically, we show that the recursive minimization of a suitably designed cost function that involves the dynamic state of a fully observed (secondary) system and the observed time series can lead to the identical synchronization of the two systems and the accurate estimation of the unknown parameter. The salient feature of the proposed technique is that the only external input to the secondary system is the unknown parameter which needs to be adjusted. We present numerical examples for the Lorenz system which show how our algorithm can be considerably faster than some previously proposed methods.
Novaes, T F; Matos, R; Raggio, D P; Imparato, J C P; Braga, M M; Mendes, F M
2010-01-01
This in vivo study aimed to evaluate the performance of methods of approximal caries detection in primary molars and to assess the influence of the discomfort caused by these methods on their performance. Two examiners evaluated 76 children (4-12 years old) using visual inspection (ICDAS), radiography and a laser fluorescence device (DIAGNOdent pen, LFpen). The reference standard was visual inspection after temporary separation with orthodontic rubbers. Surfaces were classified as sound, noncavitated (NC) or cavitated (Cav), and performance was assessed at both NC and Cav thresholds. Wong-Baker faces scale was employed to assess the discomfort. Multilevel analysis was performed to verify the influence of discomfort on performance, considering the number of false-positives and false-negatives as outcome. At NC threshold, visual inspection achieved better performance (sensitivities and accuracies around 0.67) than other methods (sensitivities around 0.25 and accuracies around 0.35). At Cav threshold, visual inspection presented lower sensitivity (0.23 and 0.19), and LFpen (0.52 and 0.42) and radiography (0.52) presented similar sensitivities. Concerning the influence of the discomfort, at NC threshold, when discomfort was present, the number of false-negative results was lower with LFpen and the number of false-positive results was higher with visual inspection. At Cav threshold, the number of false-positive results was higher with LFpen. In conclusion, radiography and LFpen achieved similar performance in detecting approximal caries lesions in primary teeth and the discomfort caused by visual inspection and LFpen can influence the performance of these methods, since a higher number of false-positive or false-negative results occurred in children who reported discomfort. Copyright © 2010 S. Karger AG, Basel.
NASA Astrophysics Data System (ADS)
Zhang, Ji; Ding, Mingyue; Yuchi, Ming; Hou, Wenguang; Ye, Huashan; Qiu, Wu
2010-03-01
Factor analysis is an efficient technique for the analysis of dynamic structures in medical image sequences and recently has been used in contrast-enhanced ultrasound (CEUS) of hepatic perfusion. Time-intensity curves (TICs) extracted by factor analysis can provide much more diagnostic information for radiologists and improve the diagnostic rate of focal liver lesions (FLLs). However, one of the major drawbacks of factor analysis of dynamic structures (FADS) is the nonuniqueness of the result when only the non-negativity criterion is used. In this paper, we propose a new method of replace-approximation based on apex-seeking for ambiguous FADS solutions. Due to a partial overlap of different structures, factor curves are assumed to be approximately replaceable by curves existing in the medical image sequences. Therefore, how to find optimal curves is the key point of the technique. No matter how many structures are assumed, our method always starts to seek apexes from the one-dimensional space onto which the original high-dimensional data are mapped. By finding two stable apexes in one-dimensional space, the method can ascertain the third one. The process can be continued until all structures are found. This technique was tested on two phantoms of blood perfusion and compared to two variants of the apex-seeking method. The results showed that the technique outperformed the two variants in comparisons of region-of-interest measurements from phantom data. It can be applied to the estimation of TICs derived from CEUS images and the separation of different physiological regions in hepatic perfusion.
Delving Into Dissipative Quantum Dynamics: From Approximate to Numerically Exact Approaches
NASA Astrophysics Data System (ADS)
Chen, Hsing-Ta
In this thesis, I explore dissipative quantum dynamics of several prototypical model systems via various approaches, ranging from approximate to numerically exact schemes. In particular, in the realm of the approximate I explore the accuracy of Pade-resummed master equations and the fewest switches surface hopping (FSSH) algorithm for the spin-boson model, and non-crossing approximations (NCA) for the Anderson-Holstein model. Next, I develop new and exact Monte Carlo approaches and test them on the spin-boson model. I propose well-defined criteria for assessing the accuracy of Pade-resummed quantum master equations, which correctly demarcate the regions of parameter space where the Pade approximation is reliable. I continue the investigation of spin-boson dynamics by benchmark comparisons of the semiclassical FSSH algorithm to exact dynamics over a wide range of parameters. Despite small deviations from golden-rule scaling in the Marcus regime, standard surface hopping algorithm is found to be accurate over a large portion of parameter space. The inclusion of decoherence corrections via the augmented FSSH algorithm improves the accuracy of dynamical behavior compared to exact simulations, but the effects are generally not dramatic for the cases I consider. Next, I introduce new methods for numerically exact real-time simulation based on real-time diagrammatic Quantum Monte Carlo (dQMC) and the inchworm algorithm. These methods optimally recycle Monte Carlo information from earlier times to greatly suppress the dynamical sign problem. In the context of the spin-boson model, I formulate the inchworm expansion in two distinct ways: the first with respect to an expansion in the system-bath coupling and the second as an expansion in the diabatic coupling. In addition, a cumulant version of the inchworm Monte Carlo method is motivated by the latter expansion, which allows for further suppression of the growth of the sign error. I provide a comprehensive comparison of the
NASA Astrophysics Data System (ADS)
Kaporin, I. E.
2012-02-01
In order to precondition a sparse symmetric positive definite matrix, its approximate inverse is examined, which is represented as the product of two sparse mutually adjoint triangular matrices. In this way, the solution of the corresponding system of linear algebraic equations (SLAE) by applying the preconditioned conjugate gradient method (CGM) is reduced to performing only elementary vector operations and calculating sparse matrix-vector products. A method for constructing the above preconditioner is described and analyzed. The triangular factor has a fixed sparsity pattern and is optimal in the sense that the preconditioned matrix has a minimum K-condition number. The use of polynomial preconditioning based on Chebyshev polynomials makes it possible to considerably reduce the amount of scalar product operations (at the cost of an insignificant increase in the total number of arithmetic operations). The possibility of an efficient massively parallel implementation of the resulting method for solving SLAEs is discussed. For a sequential version of this method, the results obtained by solving 56 test problems from the Florida sparse matrix collection (which are large-scale and ill-conditioned) are presented. These results show that the method is highly reliable and has low computational costs.
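The PCG iteration referred to above reduces the solve to sparse matrix-vector products and elementary vector operations. The sketch below uses a simple Jacobi (diagonal) preconditioner in place of the paper's factored sparse approximate inverse, purely to show that skeleton; dense lists stand in for sparse storage:

```python
def pcg(A, b, tol=1e-12, max_iter=1000):
    """Preconditioned conjugate gradients for an SPD matrix A with a
    Jacobi preconditioner M^-1 = diag(1/a_ii). The paper instead applies
    a factored sparse approximate inverse G^T G as the preconditioner;
    only the application step below would change."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    x = [0.0] * n
    r = b[:]                                  # residual b - A x for x = 0
    z = [r[i] / A[i][i] for i in range(n)]    # apply the preconditioner
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / dot(p, Ap)
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if max(abs(v) for v in r) < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = dot(r, z)
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```

Each iteration needs one matrix-vector product, one preconditioner application, and two scalar products, which is why reducing scalar products via polynomial (Chebyshev) preconditioning pays off in a massively parallel setting.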
NASA Astrophysics Data System (ADS)
Roudi, Yasser; Tyrcha, Joanna; Hertz, John
2009-05-01
We study pairwise Ising models for describing the statistics of multineuron spike trains, using data from a simulated cortical network. We explore efficient ways of finding the optimal couplings in these models and examine their statistical properties. To do this, we extract the optimal couplings for subsets of size up to 200 neurons, essentially exactly, using Boltzmann learning. We then study the quality of several approximate methods for finding the couplings by comparing their results with those found from Boltzmann learning. Two of these methods—inversion of the Thouless-Anderson-Palmer equations and an approximation proposed by Sessak and Monasson—are remarkably accurate. Using these approximations for larger subsets of neurons, we find that extracting couplings using data from a subset smaller than the full network tends systematically to overestimate their magnitude. This effect is described qualitatively by infinite-range spin-glass theory for the normal phase. We also show that a globally correlated input to the neurons in the network leads to a small increase in the average coupling. However, the pair-to-pair variation in the couplings is much larger than this and reflects intrinsic properties of the network. Finally, we study the quality of these models by comparing their entropies with that of the data. We find that they perform well for small subsets of the neurons in the network, but the fit quality starts to deteriorate as the subset size grows, signaling the need to include higher-order correlations to describe the statistics of large networks.
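The simplest member of the family of approximate inversion methods benchmarked here is naive mean-field inversion, which reads couplings off the inverse of the connected correlation matrix as J_ij ~= -(C^-1)_ij for i != j; the TAP and Sessak-Monasson formulas the abstract highlights add correction terms to this. A minimal sketch (Gauss-Jordan without pivoting, so only suitable for small, well-conditioned correlation matrices):

```python
def nmf_couplings(C):
    """Naive mean-field inference of pairwise Ising couplings from a
    connected correlation matrix C: J_ij ~= -(C^-1)_ij for i != j,
    with zeros on the diagonal."""
    n = len(C)
    # Gauss-Jordan inverse of C (no pivoting; fine for small SPD C).
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)] for i, row in enumerate(C)]
    for col in range(n):
        pivot = aug[col][col]
        aug[col] = [v / pivot for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0.0:
                factor = aug[r][col]
                aug[r] = [v - factor * p for v, p in zip(aug[r], aug[col])]
    Cinv = [row[n:] for row in aug]
    return [[0.0 if i == j else -Cinv[i][j] for j in range(n)] for i in range(n)]
```

All methods in this family share the appeal that they need only the measured pairwise statistics, avoiding the iterative sampling of Boltzmann learning.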
NASA Astrophysics Data System (ADS)
Frantz, Eric Randall
Elongation and shaping of the tokamak plasma cross-section can allow increased beta and other favorable improvements. As the cross-section is made non-circular, however, the plasma can become unstable against axisymmetric motions, the most predominant one being a nearly uniform displacement in the direction of elongation. Without additional stabilizing mechanisms, this instability has growth rates typically ~10^6 sec^-1. With passive and active feedback from external conductors, the plasma can be significantly slowed down and controlled. In this work, a mathematical formalism for analyzing the vertical instability is developed in which the external conductors are treated (or broken up) as discrete coils. The circuit equations for the plasma-induced currents can be included within the same mathematical framework. The plasma equation of motion and the circuit equations are combined and manipulated into a diagonalized form that can be graphically analyzed to determine the growth rate. An effective mode approximation (EMA) to the dispersion relation is introduced to simplify and approximate the growth rate of the more exact case. Controller voltage equations for active feedback are generalized to include position and velocity feedback and time delay. A position cut-off displacement is added to model finite spatial resolution of the position detectors or a dead-band voltage level. Stability criteria are studied for EMA and the more exact case. The time-dependent responses for plasma position, controller voltages, and currents are determined from the Laplace transformations. Slow responses are separated from the fast ones (dependent on plasma inertia) using a typical tokamak ordering approximation. The methods developed are applied in numerous examples for the machine geometry and plasma of TNS, an inside-D configuration plasma resembling JET, INTOR, or FED.
NASA Astrophysics Data System (ADS)
Sabatier, Romuald; Fossati, Caroline; Bourennane, Salah; Di Giacomo, Antonio
2008-10-01
Model-Based Optical Proximity Correction (MBOPC) has for a decade been a widely used technique that makes it possible to achieve resolutions on silicon layouts smaller than the wavelength used by commercially available photolithography tools. This is an important point, because mask dimensions are continuously shrinking. For current masks, several billion segments have to be moved, and several iterations are needed to reach convergence. Therefore, fast and accurate algorithms are mandatory to perform OPC on a mask in a reasonably short time for industrial purposes. As imaging with an optical lithography system is similar to microscopy, the theory used in MBOPC is drawn from work originally conducted for the theory of microscopy. Fourier optics was first developed by Abbe to describe the image formed by a microscope and is often referred to as the Abbe formulation. It is one of the best methods for optimizing illumination and is used in most commercially available lithography simulation packages. The Hopkins method, developed later in 1951, is the best method for mask optimization. Consequently, the Hopkins formulation, widely used for partially coherent illumination and thus for lithography, is present in most commercially available OPC tools. This formulation has the advantage of a four-way transmission function independent of the mask layout. The values of this function, called the Transfer Cross Coefficients (TCC), describe the illumination and projection pupils. Commonly used algorithms that involve the TCC of the Hopkins formulation to compute aerial images during MBOPC treatment are based on decomposing the TCC into its eigenvectors using matricization and the well-known Singular Value Decomposition (SVD). These techniques, which use numerical approximation and empirical determination of the number of eigenvectors taken into account, may fail to match reality and lead to information loss. They also remain highly runtime-consuming. We propose an
A force evaluation free method to N-body problems: Binary interaction approximation
NASA Astrophysics Data System (ADS)
Oikawa, S.
2016-03-01
We recently proposed the binary interaction approximation (BIA) for N-body problems, which, in principle, avoids interparticle force evaluation when exact solutions are known for the corresponding two-body problems, such as Coulombic and gravitational interactions. In this article, a detailed introduction to the BIA is given, including an error analysis giving expressions for the approximation error in the total angular momentum and the total energy of the entire system. It is shown that, although the energy conservation of the BIA scheme is worse than that of the 4th-order Hermite integrator (HMT4) for similar elapsed (wall-clock) times, the individual errors in position and in velocity are much better than those of HMT4. An energy-error correction scheme for the BIA is also introduced that does not deteriorate the individual errors in position and in velocity. It is suggested that the BIA scheme is applicable to the tree method, the particle-mesh (PM) scheme, and the particle-particle-particle-mesh (PPPM) scheme simply by replacing the force evaluation and the conventional time integrator with the BIA scheme.
NASA Astrophysics Data System (ADS)
Werner, Hans-Joachim
2016-11-01
The accuracy of multipole approximations for distant pair energies in local second-order Møller-Plesset perturbation theory (LMP2) as introduced by Hetzer et al. [Chem. Phys. Lett. 290, 143 (1998)] is investigated for three chemical reactions involving molecules with up to 92 atoms. Various iterative and non-iterative approaches are compared, using different energy thresholds for distant pair selection. It is demonstrated that the simple non-iterative dipole-dipole approximation, which has been used in several recent pair natural orbitals (PNO)-LMP2 and PNO-LCCSD (local coupled-cluster with singles and doubles) methods, may underestimate the distant pair energies by up to 50% and can lead to significant errors in relative energies, unless very tight thresholds are used. The accuracy can be much improved by including higher multipole orders and by optimizing the distant pair amplitudes iteratively along with all other amplitudes. A new approach is presented in which very small special PNO domains for distant pairs are used in the iterative approach. This reduces the number of distant pair amplitudes by 3 orders of magnitude and keeps the additional computational effort for the iterative optimization of distant pair amplitudes minimal.
A novel method of automated skull registration for forensic facial approximation.
Turner, W D; Brown, R E B; Kelliher, T P; Tu, P H; Taister, M A; Miller, K W P
2005-11-25
Modern forensic facial reconstruction techniques are based on an understanding of skeletal variation and tissue depths. These techniques rely upon a skilled practitioner interpreting limited data. To (i) increase the amount of data available and (ii) lessen the subjective interpretation, we use medical imaging and statistical techniques. We introduce a software tool, reality enhancement/facial approximation by computational estimation (RE/FACE) for computer-based forensic facial reconstruction. The tool applies innovative computer-based techniques to a database of human head computed tomography (CT) scans in order to derive a statistical approximation of the soft tissue structure of a questioned skull. A core component of this tool is an algorithm for removing the variation in facial structure due to skeletal variation. This method uses models derived from the CT scans and does not require manual measurement or placement of landmarks. It does not require tissue-depth tables, can be tailored to specific racial categories by adding CT scans, and removes much of the subjectivity of manual reconstructions.
Model-independent mean-field theory as a local method for approximate propagation of information.
Haft, M; Hofmann, R; Tresp, V
1999-02-01
We present a systematic approach to mean-field theory (MFT) in a general probabilistic setting without assuming a particular model. The mean-field equations derived here may serve as a local, and thus very simple, method for approximate inference in probabilistic models such as Boltzmann machines or Bayesian networks. Our approach is 'model-independent' in the sense that we do not assume a particular type of dependences; in a Bayesian network, for example, we allow arbitrary tables to specify conditional dependences. In general, there are multiple solutions to the mean-field equations. We show that improved estimates can be obtained by forming a weighted mixture of the multiple mean-field solutions. Simple approximate expressions for the mixture weights are given. The general formalism derived so far is evaluated for the special case of Bayesian networks. The benefits of taking into account multiple solutions are demonstrated by using MFT for inference in a small and in a very large Bayesian network. The results are compared with the exact results.
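As a concrete illustration of local mean-field equations of this kind, here is a minimal sketch for a Boltzmann machine, where the self-consistent equations m_i = tanh(h_i + sum_j J_ij m_j) are iterated to a fixed point with damping. The parameter values in the usage are invented for illustration; the abstract's model-independent formulation generalizes this beyond pairwise couplings.

```python
import numpy as np

def mean_field(J, h, n_iter=200, damping=0.5):
    """Iterate the naive mean-field equations m_i = tanh(h_i + sum_j J_ij m_j)
    for a Boltzmann machine with couplings J and fields h, with damping to
    stabilise convergence. Returns the approximate magnetisations m."""
    m = np.zeros(len(h))
    for _ in range(n_iter):
        m_new = np.tanh(h + J @ m)
        m = damping * m + (1.0 - damping) * m_new
    return m
```

At convergence the returned vector satisfies the mean-field equations to numerical precision, which is easy to verify directly.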
An improved approximate-Bayesian model-choice method for estimating shared evolutionary history
2014-01-01
Background To understand biological diversification, it is important to account for large-scale processes that affect the evolutionary history of groups of co-distributed populations of organisms. Such events predict temporally clustered divergence times, a pattern that can be estimated using genetic data from co-distributed species. I introduce a new approximate-Bayesian method for comparative phylogeographical model choice that estimates the temporal distribution of divergences across taxa from multi-locus DNA sequence data. The model is an extension of that implemented in msBayes. Results By reparameterizing the model, introducing more flexible priors on demographic and divergence-time parameters, and implementing a non-parametric Dirichlet-process prior over divergence models, I improved the robustness, accuracy, and power of the method for estimating shared evolutionary history across taxa. Conclusions The results demonstrate that the improved performance of the new method is due to (1) more appropriate priors on divergence-time and demographic parameters that avoid prohibitively small marginal likelihoods for models with more divergence events, and (2) the Dirichlet process providing a flexible prior on divergence histories that does not strongly disfavor models with intermediate numbers of divergence events. The new method yields more robust estimates of posterior uncertainty, and thus greatly reduces the tendency to incorrectly estimate models of shared evolutionary history with strong support. PMID:24992937
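The approximate-Bayesian machinery underlying msBayes and this extension can be illustrated, in much simplified form, by plain rejection ABC: draw parameters from the prior, simulate data, and keep draws whose summary statistic falls close to the observed one. Everything in the usage below (a one-parameter normal model with a uniform prior) is an invented toy, not the paper's phylogeographical model.

```python
import numpy as np

def abc_rejection(observed, simulate, prior_sample, distance, n_draws, eps, rng):
    """Rejection-sampling ABC: accept a prior draw theta when the summary
    statistic of data simulated under theta lies within eps of the observed
    summary. Returns the accepted draws (an approximate posterior sample)."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if distance(simulate(theta, rng), observed) < eps:
            accepted.append(theta)
    return np.array(accepted)
```

With a flat prior and an informative statistic, the accepted draws concentrate around the data-generating parameter.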
Approximate analytic method for high-apogee twelve-hour orbits of artificial Earth's satellites
NASA Astrophysics Data System (ADS)
Vashkovyaka, M. A.; Zaslavskii, G. S.
2016-09-01
We propose an approach to studying the evolution of high-apogee twelve-hour orbits of artificial Earth satellites. We describe the parameters of the motion model used for the satellite, in which the principal gravitational perturbations of the Moon and Sun, the nonsphericity of the Earth, and perturbations from the light-pressure force are approximately taken into account. To solve the system of averaged equations describing the evolution of the orbit parameters of an artificial satellite, we use both numerical and analytic methods. To select the initial parameters of the twelve-hour orbit, we assume that the ground track of the satellite along the surface of the Earth is stable. Results obtained by the analytic method and by numerical integration of the evolution system are compared. For intervals of several years, we obtain estimates of the oscillation periods and amplitudes of the orbital elements. To verify the results and estimate the precision of the method, we use numerical integration of the rigorous (non-averaged) equations of motion of the satellite, which take into account the forces acting on the satellite substantially more completely and precisely. The described method can be applied not only to investigating the orbit evolution of artificial Earth satellites; it can also be applied to investigating orbit evolution for satellites of other planets of the Solar system, should the corresponding research problem arise in the future and this special class of resonant satellite orbits be used for that purpose.
NASA Astrophysics Data System (ADS)
Heßelmann, Andreas
2017-05-01
A random-phase approximation electron correlation method including exchange interactions has been developed which reduces the scaling behaviour of the standard approach by two to four orders of magnitude, effectively leading to a linear scaling performance if the local structures of the underlying quantities are fully exploited in the calculations. This has been achieved by a transformation of the integrals and amplitudes from the canonical orbital basis into a local orbital basis and a subsequent dyadic screening approach. The performance of the method is demonstrated for a range of tripeptide molecules as well as for two conformers of the polyglycine molecule using up to 40 glycine units. While a reasonable agreement with the corresponding canonical method is obtained if long-range Coulomb interactions are not screened by the local method, a significant improvement in the performance is achieved for larger systems beyond 20 glycine units. Furthermore, the control of the Coulomb screening threshold allows for a quantification of intramolecular dispersion interactions, as will be exemplified for the polyglycine conformers as well as a highly branched hexaphenylethane derivative which is stabilised by steric crowding effects.
A new embedded-atom method approach based on the pth moment approximation
NASA Astrophysics Data System (ADS)
Wang, Kun; Zhu, Wenjun; Xiao, Shifang; Chen, Jun; Hu, Wangyu
2016-12-01
Large-scale atomistic simulations with suitable interatomic potentials are widely employed by scientists and engineers in different areas. The quick generation of high-quality interatomic potentials is urgently needed, and largely relies on developments in potential-construction methods and algorithms. Many interatomic potential models have been proposed and parameterized with various methods, such as the analytic method, the force-matching approach, and the multi-objective optimization method, in order to make the potentials more transferable. Without appreciably lowering the precision for describing the target system, potentials with fewer fitting parameters (FPs) are somewhat more physically reasonable. Thus, studying methods to reduce the number of FPs is helpful for understanding the underlying physics of simulated systems and improving the precision of potential models. In this work, we propose an embedded-atom method (EAM) potential model consisting of a new many-body term based on the pth moment approximation to tight-binding theory and the general transformation invariance of EAM potentials, together with an energy-modification term represented by pairwise interactions. The pairwise interactions are evaluated by an analytic-numerical scheme without the need to know their functional forms a priori. By constructing three potentials for aluminum and comparing them with a commonly used EAM potential model, several notable results are obtained. First, without losing precision, our aluminum potential has fewer parameters and a smaller cutoff distance than some commonly used aluminum potentials. This is because several physical quantities, usually serving as target quantities to match in other potentials, appear to be uniquely determined by quantities contained in our basic reference database within the new potential model. Second, a key empirical parameter in the embedding term of the commonly used EAM model is
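For readers unfamiliar with the EAM form referred to above, a minimal sketch of the generic EAM total energy (a pair term plus embedding of the host electron density) is given below. The particular functional forms used in the test are arbitrary toy choices, not the authors' pth-moment model.

```python
import numpy as np

def eam_energy(positions, phi, f, F, cutoff):
    """Generic EAM total energy: E = sum_i F(rho_i) + (1/2) sum_{i!=j} phi(r_ij),
    where the host electron density at atom i is rho_i = sum_{j!=i} f(r_ij).
    phi, f, F are user-supplied scalar functions; interactions beyond `cutoff`
    are ignored."""
    n = len(positions)
    e_pair = 0.0
    rho = np.zeros(n)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            if r < cutoff:
                e_pair += phi(r)   # each pair counted once
                rho[i] += f(r)
                rho[j] += f(r)
    return e_pair + sum(F(d) for d in rho)
```

For a dimer this reduces to phi(r) + 2 F(f(r)), which makes the sketch easy to check by hand.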
Rational trigonometric approximations using Fourier series partial sums
NASA Technical Reports Server (NTRS)
Geer, James F.
1993-01-01
A class of approximations {S_{N,M}} to a periodic function f, which uses the ideas of Padé, or rational-function, approximation based on the Fourier series representation of f rather than on its Taylor series representation, is introduced and studied. Each approximation S_{N,M} is the quotient of a trigonometric polynomial of degree N and a trigonometric polynomial of degree M. The coefficients in these polynomials are determined by requiring that an appropriate number of the Fourier coefficients of S_{N,M} agree with those of f. Explicit expressions are derived for these coefficients in terms of the Fourier coefficients of f. It is proven that these 'Fourier-Padé' approximations converge point-wise to (f(x+) + f(x-))/2 more rapidly (in some cases by a factor of 1/k^(2M)) than the Fourier series partial sums on which they are based. The approximations are illustrated by several examples, and an application to the solution of an initial-boundary value problem for the simple heat equation is presented.
An approximate method for analyzing transient condensation on spray in HYLIFE-II
Bai, R.Y.; Schrock, V.E. . Dept. of Nuclear Engineering)
1990-01-01
The HYLIFE-II conceptual design calls for analysis of highly transient condensation on droplets to achieve a rapidly decaying pressure field. Drops exposed to the required transient vapor pressure field are first heated by condensation but later begin to reevaporate after the vapor temperature falls below the drop surface temperature. An approximate method of analysis has been developed based on the assumption that the thermal resistance is concentrated in the liquid. The time dependent boundary condition is treated via the Duhamel integral for the pure conduction model. The resulting Nusselt number is enhanced to account for convection within the drop and then used to predict the drop mean temperature history. Many histories are considered to determine the spray rate necessary to achieve the required complete condensation.
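The Duhamel-integral treatment of the time-dependent boundary condition can be sketched generically: the response to a varying boundary value is superposed from unit-step responses. The first-order-lag response in the test is a toy stand-in for the drop's conduction step response, and a zero initial boundary value is assumed.

```python
import numpy as np

def duhamel(ts, boundary, unit_response):
    """Discrete Duhamel superposition: the response to a time-varying boundary
    value is the sum of unit-step responses weighted by the boundary increments.
    `ts` is a grid of times, `boundary` the boundary value on that grid
    (assumed zero before ts[0]), `unit_response(tau)` the response at elapsed
    time tau to a unit step applied at tau = 0."""
    out = np.zeros_like(ts)
    prev = 0.0
    for k, t in enumerate(ts):
        db = boundary[k] - prev          # boundary increment at this instant
        out[k:] += db * unit_response(ts[k:] - t)
        prev = boundary[k]
    return out
```

For a single step of height 2 the output is just twice the unit-step response, which gives a direct check.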
Relaxation and approximate factorization methods for the unsteady full potential equation
NASA Technical Reports Server (NTRS)
Shankar, V.; Ide, H.; Gorski, J.
1984-01-01
The unsteady form of the full potential equation is solved in conservation form using implicit methods based on approximate-factorization and relaxation schemes. A local time linearization for density is introduced to enable the equation to be solved in terms of the velocity potential phi. A novel flux-biasing technique is applied to generate proper forms of the artificial viscosity for treating hyperbolic regions with shocks and sonic lines present. The wake is properly modeled by accounting not only for jumps in phi, but also for jumps in higher derivatives of phi, obtained from requirements of density continuity. The far field is modeled using the Riemann invariants to simulate nonreflecting boundary conditions. Results are presented for flows over airfoils, cylinders, and spheres. Comparisons are made with available Euler and full potential results.
Improving the Accuracy of the Chebyshev Rational Approximation Method Using Substeps
Isotalo, Aarno; Pusa, Maria
2016-05-01
The Chebyshev Rational Approximation Method (CRAM) for solving the decay and depletion of nuclides is shown to have a remarkable decrease in error when advancing the system with the same time step and microscopic reaction rates as the previous step. This property is exploited here to achieve high accuracy in any end-of-step solution by dividing a step into equidistant substeps. The computational cost of identical substeps can be reduced significantly below that of an equal number of regular steps, as the LU decompositions for the linear solves required in CRAM only need to be formed on the first substep. The improved accuracy provided by substeps is most relevant in decay calculations, where there have previously been concerns about the accuracy and generality of CRAM. Lastly, with substeps, CRAM can solve any decay or depletion problem with constant microscopic reaction rates to an extremely high accuracy for all nuclides with concentrations above an arbitrary limit.
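The cost saving from identical substeps comes from factorising the step matrix once and reusing it. The sketch below shows that structure with backward Euler standing in for the CRAM rational approximation (whose tabulated pole/residue coefficients are omitted here); each CRAM pole likewise yields one fixed matrix per substep size, so its factorisation can be reused the same way.

```python
import numpy as np

def substep_solve(A, n0, t, substeps):
    """Advance dn/dt = A n over time t with `substeps` identical implicit-Euler
    substeps. The step matrix is factorised (here, for brevity, inverted) once
    and reused on every substep, mirroring how identical substeps let CRAM
    reuse its LU decompositions after the first substep."""
    h = t / substeps
    step = np.linalg.inv(np.eye(len(n0)) - h * A)  # form once
    n = n0.copy()
    for _ in range(substeps):
        n = step @ n                               # reuse on every substep
    return n
```

In production code one would keep an LU factorisation rather than an explicit inverse; the reuse pattern is identical. For a single decaying nuclide the result converges to exp(A t) n0 as the substep count grows.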
Atomistic modelling of nanostructures via the Bozzolo-Ferrante-Smith quantum approximate method
NASA Astrophysics Data System (ADS)
Bozzolo, Guillermo; Garcés, Jorge E.; Noebe, Ronald D.; Farías, Daniel
2003-09-01
Ideally, computational modelling techniques for nanoscopic physics would be able to perform free of limitations on the type and number of elements, while providing comparable accuracy when dealing with bulk or surface problems. Computational efficiency is also desirable, if not mandatory, for properly dealing with the complexity of typical nanostructured systems. A quantum approximate technique, the Bozzolo-Ferrante-Smith method for alloys, which attempts to meet these demands, is introduced for calculation of the energetics of nanostructures. The versatility of the technique is demonstrated through analysis of diverse systems, including multiphase precipitation in a five-element Ni-Al-Ti-Cr-Cu alloy and the formation of mixed composition Co-Cu islands on a metallic Cu(111) substrate.
Approximation methods of European option pricing in multiscale stochastic volatility model
NASA Astrophysics Data System (ADS)
Ni, Ying; Canhanga, Betuel; Malyarenko, Anatoliy; Silvestrov, Sergei
2017-01-01
In the classical Black-Scholes model for financial option pricing, the asset price follows a geometric Brownian motion with constant volatility. Empirical findings such as the volatility smile/skew and fat-tailed asset return distributions suggest that the constant-volatility assumption is not realistic. General stochastic volatility models, e.g. the Heston, GARCH, and SABR models, in which the variance/volatility itself typically follows a mean-reverting stochastic process, have been shown to be superior in capturing these empirical facts. However, in order to capture more features of the volatility smile, a two-factor stochastic volatility model of double-Heston type is more useful, as shown in Christoffersen, Heston and Jacobs [12]. We consider a modified form of such two-factor volatility models in which the volatility has multiscale mean-reversion rates. Our model contains two mean-reverting volatility processes with a fast and a slow reverting rate, respectively. We consider the European option pricing problem under one type of multiscale stochastic volatility model where the two volatility processes act as independent factors in the asset price process. The novelty of this paper is an approximate analytical solution using an asymptotic expansion method, which extends the authors' earlier research in Canhanga et al. [5, 6]. In addition we propose an approximate numerical solution using Monte-Carlo simulation. For completeness and comparison we also implement the semi-analytical solution of Chiarella and Ziveyi [11] using the method of characteristics and Fourier and bivariate Laplace transforms.
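A minimal Monte-Carlo pricer for a model of this type (two independent mean-reverting variance factors, one fast and one slow) might look as follows. All parameter values in the test are invented, and the discretisation is plain Euler-Maruyama rather than the paper's asymptotic expansion.

```python
import numpy as np

def mc_call_two_factor(S0, K, r, T, v0, kappa, theta, xi,
                       n_steps, n_paths, seed=0):
    """Euler-Maruyama Monte-Carlo price of a European call when the total
    variance is v1 + v2, each v_i a CIR-type mean-reverting process with its
    own (fast or slow) reversion rate kappa_i, independent of the asset noise.
    kappa, theta, xi are length-2 arrays; v0 is the pair of initial variances."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, float(S0))
    v = np.tile(np.asarray(v0, float), (n_paths, 1))        # shape (paths, 2)
    for _ in range(n_steps):
        dW = rng.standard_normal(n_paths) * np.sqrt(dt)
        dB = rng.standard_normal((n_paths, 2)) * np.sqrt(dt)
        vol = np.sqrt(np.clip(v.sum(axis=1), 0.0, None))    # total volatility
        S = S * (1.0 + r * dt + vol * dW)
        v = v + kappa * (theta - v) * dt \
              + xi * np.sqrt(np.clip(v, 0.0, None)) * dB    # full truncation
    return np.exp(-r * T) * np.maximum(S - K, 0.0).mean()
```

Two sanity checks: with strike zero the discounted payoff must recover the spot price, and the price must decrease with the strike.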
Diffusion approximation-based simulation of stochastic ion channels: which method to use?
Pezo, Danilo; Soudry, Daniel; Orio, Patricio
2014-01-01
To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov chain (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of a high number of channels. Many recent works aim to reduce simulation time using the Langevin-based diffusion approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties, such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Goldwyn et al., 2011; Linaro et al., 2011; Dangerfield et al., 2012; Orio and Soudry, 2012; Schmandt and Galán, 2012; Güler, 2013; Huang et al., 2013a), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: (1) the original Hodgkin and Huxley model, (2) a model with faster sodium channels, and (3) a multi-compartmental model inspired by granular cells. We conclude that for a low number of channels (usually below 1000 per simulated compartment) one should use MC, which is the fastest and most accurate method. For a high number of channels, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modeling may be the best method for detailed multicompartment neuron models, in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels each. PMID:25404914
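Gillespie's exact method, the MC baseline recommended above for small channel counts, is compact for a population of identical two-state channels; the rates and channel count below are illustrative, not taken from the compared models.

```python
import numpy as np

def gillespie_two_state(n_channels, alpha, beta, t_end, rng):
    """Exact (Gillespie) simulation of N independent two-state channels,
    closed --alpha--> open and open --beta--> closed, starting all closed.
    Returns the time-averaged open fraction over [0, t_end]."""
    n_open, t, acc = 0, 0.0, 0.0
    while t < t_end:
        rate_open = alpha * (n_channels - n_open)   # closed -> open propensity
        rate_close = beta * n_open                  # open -> closed propensity
        total = rate_open + rate_close
        dt = rng.exponential(1.0 / total)           # waiting time to next event
        acc += n_open * min(dt, t_end - t)          # accumulate open time
        if rng.random() < rate_open / total:
            n_open += 1
        else:
            n_open -= 1
        t += dt
    return acc / (n_channels * t_end)
```

Over a long window the open fraction should approach the equilibrium value alpha / (alpha + beta).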
A linearly approximated iterative Gaussian decomposition method for waveform LiDAR processing
NASA Astrophysics Data System (ADS)
Mountrakis, Giorgos; Li, Yuguang
2017-07-01
Full-waveform LiDAR (FWL) decomposition results often act as the basis for key LiDAR-derived products, for example canopy height, biomass and carbon pool estimation, leaf area index calculation, and under-canopy detection. To date, the prevailing method for FWL product creation is Gaussian Decomposition (GD) based on a non-linear Levenberg-Marquardt (LM) optimization for Gaussian node parameter estimation. GD follows a "greedy" approach that may leave weak nodes undetected, merge multiple nodes into one, or separate a noisy single node into multiple ones. In this manuscript, we propose an alternative decomposition method called Linearly Approximated Iterative Gaussian Decomposition (LAIGD). The novelty of LAIGD is that it follows a multi-step "slow-and-steady" iterative structure, in which new Gaussian nodes are quickly discovered and adjusted using a linear fitting technique before they are forwarded for non-linear optimization. Two experiments were conducted, one using real full-waveform data from NASA's Land, Vegetation, and Ice Sensor (LVIS) and another using synthetic data containing different numbers of nodes and degrees of overlap to assess performance at variable signal complexity. LVIS data revealed considerable improvements in RMSE (44.8% lower), RSE (56.3% lower), and rRMSE (74.3% lower) values compared to the benchmark GD method. These results were further confirmed with the synthetic data. Furthermore, the proposed multi-step method cuts execution times in half, an important consideration as there are plans for global coverage with the upcoming Global Ecosystem Dynamics Investigation LiDAR sensor on the International Space Station.
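The linear-fitting step that seeds each Gaussian node can be illustrated with the classic log-quadratic (Caruana-style) fit, which recovers a node's amplitude, centre, and width by ordinary least squares on the log of the samples. This is a generic stand-in for illustration, not the authors' exact LAIGD update.

```python
import numpy as np

def linear_gaussian_fit(x, y):
    """Estimate (A, mu, sigma) of y ~ A * exp(-(x - mu)^2 / (2 sigma^2)) by a
    linear least-squares fit of log(y) to a quadratic in x: with
    log y = c2 x^2 + c1 x + c0, one has sigma^2 = -1/(2 c2), mu = -c1/(2 c2),
    and log A = c0 - c1^2/(4 c2)."""
    mask = y > 0                                   # log requires positive samples
    c2, c1, c0 = np.polyfit(x[mask], np.log(y[mask]), 2)
    sigma = np.sqrt(-1.0 / (2.0 * c2))
    mu = -c1 / (2.0 * c2)
    A = np.exp(c0 - c1 ** 2 / (4.0 * c2))
    return A, mu, sigma
```

On noiseless data the parameters are recovered essentially exactly, which is why such a fit is a cheap initializer before non-linear refinement.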
Thermodynamic potential of the periodic Anderson model with the X-boson method: chain approximation
NASA Astrophysics Data System (ADS)
Franco, R.; Figueira, M. S.; Foglio, M. E.
2002-05-01
The periodic Anderson model (PAM) in the U→∞ limit has been studied in a previous work employing the cumulant expansion with the hybridization as perturbation (Figueira et al., Phys. Rev. B 50 (1994) 17933). When the total number of electrons Nt is calculated as a function of the chemical potential μ in the “chain approximation” (CHA), there are three values of the chemical potential μ for each Nt in a small interval of Nt at low T (Physica A 208 (1994) 279). We have recently introduced the “X-boson” method, inspired by the slave-boson technique of Coleman, which solves the problem of nonconservation of probability (completeness) in the CHA and also removes the spurious phase transitions that appear with the slave-boson method in the mean-field approximation. In the present paper, we show that the X-boson method also solves the problem of the multiple roots of Nt(μ) that appear in the CHA.
NASA Astrophysics Data System (ADS)
Hartikainen, Markus E.; Ojalehto, Vesa; Sahlstedt, Kristian
2015-03-01
Using an interactive multiobjective optimization method called NIMBUS and an approximation method called PAINT, preferable solutions to a five-objective problem of operating a wastewater treatment plant are found. The decision maker giving preference information is an expert in wastewater treatment plant design at the engineering company Pöyry Finland Ltd. The wastewater treatment problem is computationally expensive and requires running a simulator to evaluate the values of the objective functions. This often leads to problems with interactive methods as the decision maker may get frustrated while waiting for new solutions to be computed. Thus, a newly developed PAINT method is used to speed up the iterations of the NIMBUS method. The PAINT method interpolates between a given set of Pareto optimal outcomes and constructs a computationally inexpensive mixed integer linear surrogate problem for the original wastewater treatment problem. With the mixed integer surrogate problem, the time required from the decision maker is comparatively short. In addition, a new IND-NIMBUS® PAINT module is developed to allow the smooth interoperability of the NIMBUS method and the PAINT method.
NASA Astrophysics Data System (ADS)
Wu, Kun; Zhang, Feng; Min, Jinzhong; Yu, Qiu-Run; Wang, Xin-Yue; Ma, Leiming
2016-09-01
The adding method, which can calculate infrared radiative transfer (IRT) in an inhomogeneous atmosphere with multiple layers, has been applied to the δ-four-stream discrete-ordinates method (DOM); this scheme is referred to as δ-4DDA. However, the adding method has not previously been applied to the δ-four-stream spherical harmonic expansion approximation (SHM) for solving infrared radiative transfer through multiple layers. In this paper, the adding method for the δ-four-stream SHM (δ-4SDA) is derived and its accuracy is evaluated. The result of δ-4SDA in an idealized medium with homogeneous optical properties is significantly more accurate than that of the adding method for the δ-two-stream DOM (δ-2DDA). The relative errors of δ-2DDA can exceed 15% at thin optical depths for downward emissivity, while the errors of δ-4SDA are bounded by 2%. However, the result of δ-4SDA is slightly less accurate than that of δ-4DDA. In a radiation model with a realistic atmospheric profile including gaseous transmission, the heating-rate accuracy of δ-4SDA is significantly superior to that of δ-2DDA, especially for cloudy skies. The heating-rate accuracy of δ-4SDA is slightly lower than that of δ-4DDA under water-cloud conditions, while it is superior to that of δ-4DDA in ice-cloud cases. Besides, the computational efficiency of δ-4SDA is higher than that of δ-4DDA.
Improved locality-sensitive hashing method for the approximate nearest neighbor problem
NASA Astrophysics Data System (ADS)
Lu, Ying-Hua; Ma, Ting-Huai; Zhong, Shui-Ming; Cao, Jie; Wang, Xin; Abdullah, Al-Dhelaan
2014-08-01
In recent years, the nearest neighbor search (NNS) problem has been widely used in various interesting applications. Locality-sensitive hashing (LSH), a popular algorithm for the approximate nearest neighbor problem, has proved to be an efficient way to solve the NNS problem in high-dimensional and large-scale databases. Based on the scheme of p-stable LSH, this paper introduces an improved algorithm called randomness-based locality-sensitive hashing (RLSH). The proposed algorithm modifies the query strategy: instead of mapping the query point into all hash tables, it randomly selects a single hash table to project the query point during the nearest neighbor query and reconstructs the candidate points for finding the nearest neighbors. This strategy ensures that RLSH spends less time searching for the nearest neighbors than the p-stable LSH algorithm while keeping a high recall, and it is shown to promote the diversity of the candidate points even with fewer hash tables. Experiments are executed on a synthetic dataset and an open dataset. The results show that our method requires less time and less space than p-stable LSH while achieving the same recall.
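A minimal sketch of the p-stable hashing scheme the abstract builds on, together with the RLSH-style single-table query, may clarify the idea. This is an illustration, not the authors' implementation: the hash functions h(v) = floor((a·v + b)/w) with Gaussian a are the standard 2-stable construction, and all parameter values are arbitrary.

```python
import math
import random
from collections import defaultdict

def make_hash(dim, w, rng):
    # h(v) = floor((a . v + b) / w) with a drawn from a Gaussian (2-stable) law
    a = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    b = rng.uniform(0.0, w)
    return lambda v: math.floor((sum(x * y for x, y in zip(a, v)) + b) / w)

class StableLSH:
    def __init__(self, dim, n_tables=4, n_hashes=3, w=2.0, seed=0):
        rng = random.Random(seed)
        self.tables = []
        for _ in range(n_tables):
            funcs = [make_hash(dim, w, rng) for _ in range(n_hashes)]
            self.tables.append((funcs, defaultdict(list)))
        self.rng = rng

    def insert(self, idx, v):
        # index the point in every hash table
        for funcs, table in self.tables:
            key = tuple(h(v) for h in funcs)
            table[key].append(idx)

    def query_one_table(self, v):
        # RLSH-style query: probe a single randomly selected table
        # instead of all of them
        funcs, table = self.rng.choice(self.tables)
        key = tuple(h(v) for h in funcs)
        return table[key]

pts = [[0.0, 0.0], [0.1, 0.1], [5.0, 5.0]]
index = StableLSH(dim=2)
for i, p in enumerate(pts):
    index.insert(i, p)
cands = index.query_one_table([0.05, 0.05])
```

The returned bucket is only a candidate set; exact distances to the query would still be computed over `cands` to pick the nearest neighbor.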
NASA Astrophysics Data System (ADS)
Chtioui, Younes; Panigrahi, Suranjan; Marsh, Ronald A.
1998-11-01
The probabilistic neural network (PNN) is based on the estimation of the probability density functions. The estimation of these density functions uses smoothing parameters that represent the width of the activation functions. A two-step numerical procedure is developed for the optimization of the smoothing parameters of the PNN: a rough optimization by the conjugate gradient method and a fine optimization by the approximate Newton method. The thrust is to compare the classification performances of the improved PNN and the standard back-propagation neural network (BPNN). Comparisons are performed on a food quality problem: french fry classification into three different color classes (light, normal, and dark). The optimized PNN correctly classifies 96.19% of the test data, whereas the BPNN classifies only 93.27% of the same data. Moreover, the PNN is more stable than the BPNN with regard to the random initialization. The optimized PNN requires 1464 s for training compared to only 71 s required by the BPNN.
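A minimal sketch of the PNN classification rule described above: each class score is a Parzen-window density estimate built from Gaussian kernels centered on the training samples, and the class with the largest score wins. A single shared smoothing parameter sigma is assumed here for simplicity (the paper optimizes the smoothing parameters numerically), and the toy data are invented.

```python
import math

def pnn_classify(x, train, sigma=0.5):
    # train: {class_label: list of feature vectors}
    scores = {}
    for label, samples in train.items():
        total = 0.0
        for s in samples:
            d2 = sum((a - b) ** 2 for a, b in zip(x, s))
            total += math.exp(-d2 / (2.0 * sigma ** 2))  # Gaussian kernel
        scores[label] = total / len(samples)  # Parzen density estimate
    return max(scores, key=scores.get)

train = {"light": [[0.9, 0.8], [0.8, 0.9]],
         "dark": [[0.1, 0.2], [0.2, 0.1]]}
print(pnn_classify([0.85, 0.85], train))  # → light
```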
Albrecht, Andreas A; Day, Luke; Abdelhadi Ep Souki, Ouala; Steinhöfel, Kathleen
2016-02-01
The analysis of energy landscapes plays an important role in mathematical modelling, simulation and optimisation. Among the main features of interest are the number and distribution of local minima within the energy landscape. Granier and Kallel proposed in 2002 a new sampling procedure for estimating the number of local minima. In the present paper, we focus on improved heuristic implementations of the general framework devised by Granier and Kallel with regard to run-time behaviour and accuracy of predictions. The new heuristic method is demonstrated for the case of partial energy landscapes induced by RNA secondary structures. While the computation of minimum free energy RNA secondary structures has been studied for a long time, the analysis of folding landscapes has gained momentum over the past years in the context of co-transcriptional folding and deeper insights into cell processes. The new approach has been applied to ten RNA instances of length between 99 nt and 504 nt and their respective partial energy landscapes defined by secondary structures within an energy offset ΔE above the minimum free energy conformation. The number of local minima within the partial energy landscapes ranges from 1440 to 3441. Our heuristic method produces for the best approximations on average a deviation below 3.0% from the true number of local minima.
NASA Astrophysics Data System (ADS)
Lotov, A. V.; Maiskaya, T. S.
2012-01-01
For multicriteria convex optimization problems, new nonadaptive methods are proposed for polyhedral approximation of the multidimensional Edgeworth-Pareto hull (EPH), which is a maximal set having the same Pareto frontier as the set of feasible criteria vectors. The methods are based on evaluating the support function of the EPH for a collection of directions generated by a suboptimal covering on the unit sphere. Such directions are constructed in advance by applying an asymptotically effective adaptive method for the polyhedral approximation of convex compact bodies, namely, by the estimate refinement method. Due to the a priori definition of the directions, the proposed EPH approximation procedure can easily be implemented with parallel computations. Moreover, the use of nonadaptive methods considerably simplifies the organization of EPH approximation on the Internet. Experiments with an applied problem (from 3 to 5 criteria) showed that the methods are fairly similar in characteristics to adaptive methods. Therefore, they can be used in parallel computations and on the Internet.
NASA Technical Reports Server (NTRS)
Pratt, D. T.
1984-01-01
Conventional algorithms for the numerical integration of ordinary differential equations (ODEs) are based on the use of polynomial functions as interpolants. However, the exact solutions of stiff ODEs behave like decaying exponential functions, which are poorly approximated by polynomials. An obvious choice of interpolant is the exponential functions themselves, or their low-order diagonal Pade (rational function) approximants. A number of explicit, A-stable integration algorithms were derived from the use of a three-parameter exponential function as interpolant, and their relationship to low-order, polynomial-based and rational-function-based implicit and explicit methods was shown by examining their low-order diagonal Pade approximants. A robust implicit formula was derived by exponentially fitting the trapezoidal rule. Application of these algorithms to the integration of the ODEs governing homogeneous, gas-phase chemical kinetics was demonstrated in a developmental code, CREK1D, which compares favorably with the Gear-Hindmarsh code LSODE in spite of the use of a primitive stepsize control strategy.
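The link between diagonal Pade approximants of the exponential and low-order implicit methods can be made concrete: the [1/1] diagonal Pade approximant of exp(x), R(x) = (1 + x/2)/(1 - x/2), is exactly the stability function of the trapezoidal rule. The snippet below is a standard illustration of this fact, not code from the report.

```python
import math

def pade_1_1(x):
    # [1/1] diagonal Pade approximant of exp(x)
    return (1.0 + 0.5 * x) / (1.0 - 0.5 * x)

# For the scalar stiff test problem y' = l*y, one step of size h of the
# trapezoidal rule multiplies y by pade_1_1(l*h); for decaying modes
# (l*h < 0) the factor stays bounded by 1 in magnitude (A-stability).
for x in (-0.1, -0.5, -1.0):
    print(x, math.exp(x), pade_1_1(x))
```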
NASA Astrophysics Data System (ADS)
Miura, Shinichi; Okazaki, Susumu
2001-09-01
In this paper, the path integral molecular dynamics (PIMD) method has been extended to employ an efficient approximation of the path action referred to as the pair density matrix approximation. Configurations of the isomorphic classical systems were dynamically sampled by introducing fictitious momenta, as in the PIMD based on the standard primitive approximation. The indistinguishability of the particles was handled by a pseudopotential of particle permutation that is an extension of our previous one [J. Chem. Phys. 112, 10116 (2000)]. As a test of our methodology for Boltzmann statistics, calculations have been performed for liquid helium-4 at 4 K. We found that the PIMD with the pair density matrix approximation dramatically reduced the computational cost needed to obtain the structural as well as dynamical (using the centroid molecular dynamics approximation) properties at the same level of accuracy as the primitive approximation. With respect to identical particles, we performed the calculation of a bosonic triatomic cluster. Unlike the primitive approximation, the pseudopotential scheme based on the pair density matrix approximation described well the bosonic correlation among the interacting atoms. Convergence with a small number of path discretizations achieved by this approximation enables us to construct a method that avoids the problem of the vanishing pseudopotential encountered in calculations with the primitive approximation.
Garvie, Marcus R; Burkardt, John; Morgan, Jeff
2015-03-01
We describe simple finite element schemes for approximating spatially extended predator-prey dynamics with the Holling type II functional response and logistic growth of the prey. The finite element schemes generalize 'Scheme 1' in the paper by Garvie (Bull Math Biol 69(3):931-956, 2007). We present user-friendly, open-source MATLAB code for implementing the finite element methods on arbitrary-shaped two-dimensional domains with Dirichlet, Neumann, Robin, mixed Robin-Neumann, mixed Dirichlet-Neumann, and Periodic boundary conditions. Users can download, edit, and run the codes from http://www.uoguelph.ca/~mgarvie/ . In addition to discussing the well posedness of the model equations, the results of numerical experiments are presented and demonstrate the crucial role that habitat shape, initial data, and the boundary conditions play in determining the spatiotemporal dynamics of predator-prey interactions. As most previous works on this problem have focussed on square domains with standard boundary conditions, our paper makes a significant contribution to the area.
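The authors supply MATLAB finite element code for their schemes. As a language-neutral illustration of the same reaction kinetics (logistic prey growth coupled to a Holling type II functional response), here is a crude explicit finite-difference step on a 1D interval with zero-flux (Neumann) boundaries; the discretization and all parameter values are arbitrary stand-ins, not the paper's FEM schemes.

```python
def step(u, v, dx, dt, d1=1.0, d2=1.0, a=0.4, b=2.0, c=0.6):
    # One explicit Euler step of a predator-prey reaction-diffusion
    # system: u = prey, v = predator.
    n = len(u)
    un, vn = u[:], v[:]
    for i in range(n):
        il, ir = max(i - 1, 0), min(i + 1, n - 1)  # zero-flux BCs
        lap_u = (u[il] - 2 * u[i] + u[ir]) / dx ** 2
        lap_v = (v[il] - 2 * v[i] + v[ir]) / dx ** 2
        holling = u[i] * v[i] / (u[i] + a)  # Holling type II response
        un[i] = u[i] + dt * (d1 * lap_u + u[i] * (1 - u[i]) - holling)
        vn[i] = v[i] + dt * (d2 * lap_v + b * holling - c * v[i])
    return un, vn

u = [0.5] * 20   # prey density
v = [0.25] * 20  # predator density
for _ in range(100):
    u, v = step(u, v, dx=1.0, dt=0.01)
```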
NASA Astrophysics Data System (ADS)
Wang, S.; Zhang, X. N.; Gao, D. D.; Liu, H. X.; Ye, J.; Li, L. R.
2016-08-01
As solar photovoltaic (PV) power is applied extensively, more attention is paid to the maintenance and fault diagnosis of PV power plants. Based on an analysis of the structure of a PV power station, the global partitioned gradually approximation method is proposed as a fault diagnosis algorithm to determine and locate faults in PV panels. The PV array is divided into 16×16 blocks and numbered. On the basis of modular processing of the PV array, the current values of each block are analyzed. The mean current value of each block is used to calculate the fault weight factor. A fault threshold is defined to determine the fault, and shading is considered to reduce the probability of misjudgements. A fault diagnosis system is designed and implemented with LabVIEW, with functions including real-time data display, online checking, statistics, real-time prediction and fault diagnosis. The algorithm is verified with data from PV plants. The results show that the fault diagnosis results are accurate and the system works well, confirming the validity and feasibility of the system. The developed system will benefit the maintenance and management of large-scale PV arrays.
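The block-mean fault test can be sketched as follows. The weight formula and threshold below are hypothetical illustrations of the idea (flag blocks whose mean current falls sufficiently below the array-wide mean), not the paper's exact algorithm.

```python
def diagnose(block_currents, threshold=0.3):
    # block_currents: {block_id: mean current of that block}
    overall = sum(block_currents.values()) / len(block_currents)
    faults = []
    for block_id, current in block_currents.items():
        # illustrative fault weight factor: relative current deficit
        weight = (overall - current) / overall
        if weight > threshold:
            faults.append(block_id)
    return faults

# toy 2x2 array of block mean currents; block (1, 0) is degraded
currents = {(0, 0): 5.1, (0, 1): 5.0, (1, 0): 2.0, (1, 1): 4.9}
print(diagnose(currents))  # → [(1, 0)]
```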
NASA Astrophysics Data System (ADS)
Skorupski, Krzysztof
2015-06-01
Black carbon particles interact with organic and inorganic matter soon after emission. The primary goal of this work was to estimate the accuracy of the DDA method in determining the optical properties of such composites. For the light scattering simulations the ADDA code was selected, and the superposition T-matrix code by Mackowski was used as the reference algorithm. The first part of the study compared alternative models of a single primary particle. When only one material is considered, the largest averaged relative extinction error is associated with black carbon (δCext ≈ 2.8%). However, for inorganic and organic matter it is lowered to δCext ≈ 0.75%. There is no significant difference between spheres and ellipsoids with the same volume, and therefore both can be used interchangeably. The next step was to investigate aggregates composed of Np = 50 primary particles. When the coating is omitted, the averaged relative extinction error is δCext ≈ 2.6%. Otherwise, it can be lower than δCext < 0.2%.
On the enhancement of the approximation order of triangular Shepard method
NASA Astrophysics Data System (ADS)
Dell'Accio, Francesco; Di Tommaso, Filomena; Hormann, Kai
2016-10-01
Shepard's method is a well-known technique for interpolating large sets of scattered data. The classical Shepard operator reconstructs an unknown function as a normalized blend of the function values at the scattered points, using the inverse distances to the scattered points as weight functions. Based on the general idea of defining interpolants by convex combinations, Little suggested to extend the bivariate Shepard operator in two ways. On the one hand, he considers a triangulation of the scattered points and substitutes function values with linear polynomials which locally interpolate the given data at the vertices of each triangle. On the other hand, he modifies the classical point-based weight functions and defines instead a normalized blend of the locally interpolating polynomials with triangle-based weight functions which depend on the product of inverse distances to the three vertices of the corresponding triangle. The resulting triangular Shepard operator interpolates all data required for its definition and reproduces polynomials up to degree 1, whereas the classical Shepard operator reproduces only constants, and has quadratic approximation order. In this paper we discuss an improvement of the triangular Shepard operator.
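The classical Shepard operator referenced above is straightforward to state in code: a normalized inverse-distance-weighted blend of the function values at the scattered points. The sketch below implements that classical operator only (the triangular variant blends locally interpolating linear polynomials with triangle-based weights instead); the data are invented.

```python
def shepard(x, y, points, values, mu=2.0):
    # Classical Shepard interpolation at (x, y) from scattered data.
    num, den = 0.0, 0.0
    for (px, py), f in zip(points, values):
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0.0:
            return f           # exact interpolation at a data site
        w = d2 ** (-mu / 2.0)  # inverse-distance weight 1/d^mu
        num += w * f
        den += w
    return num / den

pts = [(0, 0), (1, 0), (0, 1)]
vals = [1.0, 2.0, 3.0]
print(shepard(0, 0, pts, vals))      # → 1.0 (data site)
print(shepard(0.5, 0.5, pts, vals))  # equidistant from all three sites
```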
Heats of Segregation of BCC Metals Using Ab Initio and Quantum Approximate Methods
NASA Technical Reports Server (NTRS)
Good, Brian; Chaka, Anne; Bozzolo, Guillermo
2003-01-01
Many multicomponent alloys exhibit surface segregation, in which the composition at or near a surface may be substantially different from that of the bulk. A number of phenomenological explanations for this tendency have been suggested, involving, among other things, differences among the components' surface energies, molar volumes, and heats of solution. From a theoretical standpoint, the complexity of the problem has precluded a simple, unified explanation, thus preventing the development of computational tools that would enable the identification of the driving mechanisms for segregation. In that context, we investigate the problem of surface segregation in a variety of bcc metal alloys by computing dilute-limit heats of segregation using both the quantum-approximate energy method of Bozzolo, Ferrante and Smith (BFS), and all-electron density functional theory. In addition, the composition dependence of the heats of segregation is investigated using a BFS-based Monte Carlo procedure, and, for selected cases of interest, density functional calculations. Results are discussed in the context of a simple picture that describes segregation behavior as the result of a competition between size mismatch and alloying effects.
Improving the Accuracy of the Chebyshev Rational Approximation Method Using Substeps
Isotalo, Aarno; Pusa, Maria
2016-05-01
The Chebyshev Rational Approximation Method (CRAM) for solving the decay and depletion of nuclides is shown to have a remarkable decrease in error when advancing the system with the same time step and microscopic reaction rates as the previous step. This property is exploited here to achieve high accuracy in any end-of-step solution by dividing a step into equidistant substeps. The computational cost of identical substeps can be reduced significantly below that of an equal number of regular steps, as the LU decompositions for the linear solves required in CRAM only need to be formed on the first substep. The improved accuracy provided by substeps is most relevant in decay calculations, where there have previously been concerns about the accuracy and generality of CRAM. Lastly, with substeps, CRAM can solve any decay or depletion problem with constant microscopic reaction rates to an extremely high accuracy for all nuclides with concentrations above an arbitrary limit.
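The factorization-reuse idea behind identical substeps can be illustrated without CRAM itself: when a step is split into k equal substeps with constant reaction rates, every substep solves a linear system with the same matrix, so one factorization serves all k solves. Below, backward Euler on a two-nuclide decay chain stands in for CRAM's rational approximation; the reuse pattern, not the integrator, is the point, and all rates are invented.

```python
def factor_2x2(m):
    # trivially "factor" a 2x2 by precomputing its inverse
    # (stands in for the LU decomposition formed once in CRAM)
    a, b, c, d = m[0][0], m[0][1], m[1][0], m[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def solve(inv, v):
    return [inv[0][0] * v[0] + inv[0][1] * v[1],
            inv[1][0] * v[0] + inv[1][1] * v[1]]

l1, l2 = 1.0, 0.5   # decay constants; nuclide 1 feeds nuclide 2
h, k = 1.0, 100     # step length and number of identical substeps
dt = h / k
# backward Euler substep: (I - dt*A) n_new = n_old
m = [[1 + l1 * dt, 0.0], [-l1 * dt, 1 + l2 * dt]]
inv = factor_2x2(m)  # formed once, reused for every substep
n = [1.0, 0.0]
for _ in range(k):
    n = solve(inv, n)
print(n)  # close to the analytic solution [e^-1, 2(e^-0.5 - e^-1)]
```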
A Simple, Approximate Method for Analysis of Kerr-Newman Black Hole Dynamics and Thermodynamics
NASA Astrophysics Data System (ADS)
Pankovic, V.; Ciganovic, S.; Glavatovic, R.
2009-06-01
In this work we present a simple approximate method for analysing the basic dynamical and thermodynamical characteristics of the Kerr-Newman black hole. Instead of the complete dynamics of the black hole self-interaction, we consider only the stable (stationary) dynamical situations determined by the condition that the black hole (outer) horizon "circumference" holds an integer number of the reduced Compton wavelengths corresponding to the mass spectrum of a small quantum system (representing the quantum of the black hole self-interaction). We then show that the Kerr-Newman black hole entropy is simply the ratio of the sum of the static and rotational parts of the black hole mass to the ground mass of the small quantum system. We also show that the Kerr-Newman black hole temperature equals the negative of the classical potential energy of the gravitational interaction between a part of the black hole with reduced mass and a small quantum system in the ground mass quantum state. Finally, we suggest a bosonic grand canonical distribution of the statistical ensemble of the given small quantum systems in thermodynamical equilibrium with the (macroscopic) black hole as a thermal reservoir. We suggest that, practically, only the ground mass quantum state is significantly degenerate, while all other, excited mass quantum states are non-degenerate; the Kerr-Newman black hole entropy is then practically equivalent to the degeneracy of the ground mass quantum state. The given statistical distribution also admits a rough (qualitative) but simple model of the Hawking radiation of the black hole.
NASA Astrophysics Data System (ADS)
Kopka, P.; Wawrzynczak, A.; Borysiewicz, M.
2015-09-01
In many areas of application, a central task is the solution of an inverse problem, especially the estimation of unknown model parameters so that the underlying dynamics of a physical system are modelled precisely. In this situation, Bayesian inference is a powerful tool for combining observed data with prior knowledge to obtain the probability distribution of the searched parameters. We have applied the modern methodology named Sequential Approximate Bayesian Computation (S-ABC) to the problem of tracing an atmospheric contaminant source. ABC is a technique commonly used in the Bayesian analysis of complex models and dynamical systems, and sequential methods can significantly increase its efficiency. In the presented algorithm, the input data are the online arriving concentrations of the released substance registered by a distributed sensor network from the OVER-LAND ATMOSPHERIC DISPERSION (OLAD) experiment. The algorithm outputs are the probability distributions of the contamination source parameters, i.e., its location, release rate, speed and direction of movement, start time and duration. The stochastic approach presented in this paper is completely general and can be used in other fields where the model parameters best fitting the observable data must be found.
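The building block of sequential ABC schemes is ABC by rejection: draw candidate parameters from the prior, simulate the forward model, and keep candidates whose simulated sensor readings lie close to the observations. The one-parameter "release rate" model below is purely illustrative (the study estimates several source parameters with a real dispersion model), and all names and values are invented.

```python
import random

def forward_model(release_rate, rng):
    # stand-in dispersion model: three sensor readings that decay
    # with distance, plus measurement noise
    return [release_rate * f + rng.gauss(0.0, 0.05) for f in (1.0, 0.5, 0.25)]

def abc_rejection(observed, n_draws=5000, tol=0.15, seed=1):
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        rate = rng.uniform(0.0, 5.0)  # draw from the prior
        sim = forward_model(rate, rng)
        dist = sum((s - o) ** 2 for s, o in zip(sim, observed)) ** 0.5
        if dist < tol:                # keep candidates close to the data
            accepted.append(rate)
    return accepted

obs = [2.0, 1.0, 0.5]  # pretend sensor data generated by rate = 2.0
post = abc_rejection(obs)
```

The accepted draws approximate the posterior; their mean should sit near the true rate of 2.0. Sequential variants refine the proposal distribution over successive rounds instead of always sampling the prior.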
NASA Astrophysics Data System (ADS)
Xu, Chuanju; Lin, Yumin
2000-03-01
Based on a new global variational formulation, a spectral element approximation of the incompressible Navier-Stokes/Euler coupled problem gives rise to a global discrete saddle-point problem. The classical Uzawa algorithm decouples the original saddle-point problem into two positive definite symmetric systems. Iterative solutions of such systems are feasible and attractive for large problems. It is shown that, provided an appropriate preconditioner is chosen for the pressure system, nested conjugate gradient methods can be applied to obtain rapid convergence rates. Detailed numerical examples are given to demonstrate the quality of the preconditioner. Thanks to the rapid iterative convergence, the global Uzawa algorithm compares favorably with the classical iteration-by-subdomain procedures. Furthermore, a generalization of the preconditioned iterative algorithm to flow simulation is carried out. Comparisons of computational complexity between the Navier-Stokes/Euler coupled solution and the full Navier-Stokes solution show that the gain obtained by using the coupled solution is generally considerable.
NASA Technical Reports Server (NTRS)
Jones, Frank C.
1991-01-01
The weighted-slab method is modified so that, although it is still not exact, it gives the 'best' approximation in a minimization sense when energy loss cannot be neglected. In this approximation the species-dependent energy-change term that operates on the path-length distribution is 'averaged' over the slab-model solution for that particular energy and path length.
An angularly refineable phase space finite element method with approximate sweeping procedure
Kophazi, J.; Lathouwers, D.
2013-07-01
An angularly refineable phase space finite element method is proposed to solve the neutron transport equation. The method combines the advantages of two recently published schemes. The angular domain is discretized into small patches and patch-wise discontinuous angular basis functions are restricted to these patches, i.e. there is no overlap between basis functions corresponding to different patches. This approach yields block diagonal Jacobians with small block size and retains the possibility for Sn-like approximate sweeping of the spatially discontinuous elements in order to provide efficient preconditioners for the solution procedure. On the other hand, the preservation of the full FEM framework (as opposed to collocation into a high-order Sn scheme) retains the possibility of the Galerkin interpolated connection between phase space elements at arbitrary levels of discretization. Since the basis vectors are not orthonormal, a generalization of the Riemann procedure is introduced to separate the incoming and outgoing contributions in case of unstructured meshes. However, due to the properties of the angular discretization, the Riemann procedure can be avoided at a large fraction of the faces and this fraction rapidly increases as the level of refinement increases, contributing to the computational efficiency. In this paper the properties of the discretization scheme are studied with uniform refinement using an iterative solver based on the S2 sweep order of the spatial elements. The fourth order convergence of the scalar flux is shown as anticipated from earlier schemes and the rapidly decreasing fraction of required Riemann faces is illustrated. (authors)
Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.
2010-01-01
The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications. PMID:20577570
NASA Astrophysics Data System (ADS)
Tibi, R.; Young, C. J.; Gonzales, A.; Ballard, S.; Encarnacao, A. V.
2016-12-01
The matched filtering technique involving the cross-correlation of a waveform of interest with archived signals from a template library has proven to be a powerful tool for detecting events in regions with repeating seismicity. However, waveform correlation is computationally expensive, and therefore impractical for large template sets unless dedicated distributed computing hardware and software are used. In this study, we introduce an Approximate Nearest Neighbor (ANN) approach that enables the use of very large template libraries for waveform correlation without requiring a complex distributed computing system. Our method begins with a projection into a reduced dimensionality space based on correlation with a randomized subset of the full template archive. Searching for a specified number of nearest neighbors is accomplished by using randomized K-dimensional trees. We used the approach to search for matches to each of 2700 analyst-reviewed signal detections reported for May 2010 for the IMS station MKAR. The template library in this case consists of a dataset of more than 200,000 analyst-reviewed signal detections for the same station from 2002-2014 (excluding May 2010). Of these signal detections, 60% are teleseismic first P and 15% regional phases (Pn, Pg, Sn, and Lg). The analyses, performed on a standard desktop computer, show that the proposed approach performs the search of the large template libraries about 20 times faster than the standard full linear search, while achieving recall rates greater than 80%, with the recall rate increasing for higher correlation values. To decide whether to confirm a match, we use a hybrid method involving a cluster approach for queries with two or more matches, and the correlation score for single matches. Of the signal detections that passed our confirmation process, 52% were teleseismic first P and 30% were regional phases.
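The reduced-dimensionality projection described above can be sketched simply: represent each waveform by its correlations with a small random subset of the archive, then search among those short correlation vectors. The sketch below uses a plain linear scan over the reduced vectors (the study uses randomized k-d trees for this search), zero-lag normalized cross-correlation, and synthetic waveforms; it is an illustration of the projection idea, not the authors' pipeline.

```python
import random

def ncc(a, b):
    # zero-lag normalized cross-correlation of two waveforms
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def project(w, ref_templates):
    # reduced-dimension representation: correlations with a random
    # subset of the archive
    return [ncc(w, r) for r in ref_templates]

def nearest(query, archive, ref_templates):
    q = project(query, ref_templates)
    best, best_d = None, float("inf")
    for idx, w in enumerate(archive):
        v = project(w, ref_templates)
        d = sum((x - y) ** 2 for x, y in zip(q, v))
        if d < best_d:
            best, best_d = idx, d
    return best

rng = random.Random(0)
archive = [[rng.gauss(0, 1) for _ in range(64)] for _ in range(50)]
refs = rng.sample(archive, 8)  # random projection basis
query = [x + rng.gauss(0, 0.1) for x in archive[17]]  # noisy copy of #17
print(nearest(query, archive, refs))  # → 17
```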
NASA Astrophysics Data System (ADS)
Abbasbandy, S.
2007-10-01
In this article, an application of He's variational iteration method is proposed to approximate the solution of a nonlinear fractional differential equation with Riemann-Liouville's fractional derivatives. Also, the results are compared with those obtained by Adomian's decomposition method and truncated series method. The results reveal that the method is very effective and simple.
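For reference, He's variational iteration method builds successive approximations from a correction functional; in the standard form for an equation $Lu + Nu = g(t)$ with linear part $L$ and nonlinear part $N$ (a general statement from the VIM literature, not the specific fractional formulation of this article), it reads

```latex
u_{n+1}(t) = u_n(t) + \int_0^t \lambda(s)\,\bigl( L u_n(s) + N \tilde{u}_n(s) - g(s) \bigr)\, \mathrm{d}s ,
```

where $\lambda$ is a general Lagrange multiplier determined via variational theory and $\tilde{u}_n$ denotes a restricted variation.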
NASA Astrophysics Data System (ADS)
Imanara, Yuuki; Kawaratani, Keisuke; Samejima, Masaki; Akiyoshi, Masanori; Sasaki, Ryoichi
This paper addresses the problem of quickly deciding a combination of risk-reducing plans. The combinatorial problem is formulated as a 0-1 integer program and solved by branch and bound. However, the Simplex method executed within branch and bound takes much time. Our proposed method decides the optimal combination using approximation algorithms, a greedy algorithm and single-constraint selection, in addition to the Simplex method. Only if bounding by the approximate algorithms could lead to an incorrect optimal solution is the Simplex method executed to verify the bounding. Evaluation experiments show that the proposed method reduces the computational time by 71% in comparison with the existing method.
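A cheap bound of the kind the abstract describes can be illustrated with a greedy value-density bound for a 0-1 knapsack-style plan selection: sorting plans by value per unit cost and allowing the last plan to be taken fractionally yields an upper bound on the optimum, usable for bounding before any LP (Simplex) relaxation is solved. The model and numbers below are a generic stand-in, not the paper's risk-reduction formulation.

```python
def greedy_bound(values, costs, budget):
    # Upper bound for 0-1 selection: take plans greedily by value
    # density; the fractional last plan makes this the LP-relaxation
    # optimum for a single budget constraint.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / costs[i], reverse=True)
    bound, remaining = 0.0, budget
    for i in order:
        if costs[i] <= remaining:
            bound += values[i]
            remaining -= costs[i]
        else:
            bound += values[i] * remaining / costs[i]  # fractional part
            break
    return bound

values = [60, 100, 120]  # risk reduction of each plan
costs = [10, 20, 30]     # cost of each plan
print(greedy_bound(values, costs, 50))  # → 240.0
```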
A novel window based method for approximating the Hausdorff in 3D range imagery.
Koch, Mark William
2004-10-01
Matching a set of 3D points to another set of 3D points is an important part of any 3D object recognition system. The Hausdorff distance is known for its robustness in the face of obscuration, clutter, and noise. We show how to approximate the 3D Hausdorff fraction with linear time complexity and quadratic space complexity. We empirically demonstrate that the approximation is very good when compared to actual Hausdorff distances.
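The Hausdorff fraction the abstract refers to, the fraction of model points that find a scene point within a tolerance, can be sketched with a k-d tree; the point sets and tolerance below are made up for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def hausdorff_fraction(model, scene, tol):
    """Fraction of model points with a scene point within `tol`
    (a match score tolerant of obscuration and clutter)."""
    dists, _ = cKDTree(scene).query(model)
    return float(np.mean(dists <= tol))

rng = np.random.default_rng(1)
scene = rng.uniform(0.0, 1.0, size=(1000, 3))
model = scene[:200] + 0.001          # a slightly shifted partial view
frac = hausdorff_fraction(model, scene, tol=0.01)
print(frac)  # every model point has a scene point nearby
```

Unlike the full Hausdorff distance, the fraction degrades gracefully when part of the model is occluded: missing points simply lower the score instead of dominating it.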
High-order-harmonic spectra from atoms in intense laser fields: Exact versus approximate methods
NASA Astrophysics Data System (ADS)
Pugliese, S. N.; Simonsen, A. S.; Førre, M.; Hansen, J. P.
2015-08-01
We compare harmonic spectra from hydrogen based on the numerical solution of the time-dependent Schrödinger equation and three approximate models: (i) the strong field approximation (SFA), (ii) the Coulomb-Volkov modified strong field approximation (CVA), and (iii) the strong field approximation with the stationary phase approximation applied to the momentum integrals (SPSFA). At laser intensities in the range of (1-3)×10¹⁴ W/cm² we find good agreement when comparing the SFA and CVA with exact results. In general the CVA displays an overall better agreement with ab initio results, which reflects the role of the Coulomb field in the ionization as well as in the recombination process. Furthermore, it is found that the widely used SPSFA breaks down for low-order harmonic generation; i.e., the approximation turns out to be accurate only in the outer part of the harmonic plateau region as well as in the cutoff region. We trace this deficiency to the singularity of the SPSFA associated with short trajectories, i.e., short return times. When removing these, we obtain a version of the SPSFA which works rather well for the entire harmonic spectrum.
Tibi, Rigobert; Young, Christopher; Gonzales, Antonio; ...
2017-07-04
The matched filtering technique that uses the cross correlation of a waveform of interest with archived signals from a template library has proven to be a powerful tool for detecting events in regions with repeating seismicity. However, waveform correlation is computationally expensive and therefore impractical for large template sets unless dedicated distributed computing hardware and software are used. In this paper, we introduce an approximate nearest neighbor (ANN) approach that enables the use of very large template libraries for waveform correlation. Our method begins with a projection into a reduced dimensionality space, based on correlation with a randomized subset of the full template archive. Searching for a specified number of nearest neighbors for a query waveform is accomplished by iteratively comparing it with the neighbors of its immediate neighbors. We used the approach to search for matches to each of ~2300 analyst-reviewed signal detections reported in May 2010 for the International Monitoring System station MKAR. The template library in this case consists of a data set of more than 200,000 analyst-reviewed signal detections for the same station from February 2002 to July 2016 (excluding May 2010). Of these signal detections, 73% are teleseismic first P and 17% regional phases (Pn, Pg, Sn, and Lg). Finally, the analyses performed on a standard desktop computer show that the proposed ANN approach performs a search of the large template libraries about 25 times faster than the standard full linear search and achieves recall rates greater than 80%, with the recall rate increasing for higher correlation thresholds.
Stochastic approximation methods for fusion-rule estimation in multiple sensor systems
Rao, N.S.V.
1994-06-01
A system of N sensors S_1, S_2, …, S_N is considered; corresponding to an object with parameter x ∈ R^d, sensor S_i yields output y^(i) ∈ R^d according to an unknown probability distribution p_i(y^(i) | x). A training l-sample (x_1, y_1), (x_2, y_2), …, (x_l, y_l) is given, where y_i = (y_i^(1), y_i^(2), …, y_i^(N)) and y_i^(j) is the output of S_j in response to input x_i. The problem is to estimate a fusion rule f : R^(Nd) → R^d, based on the sample, such that the expected square error I(f) = ∫ [x − f(y^(1), y^(2), …, y^(N))]² p(y^(1), y^(2), …, y^(N) | x) p(x) dy^(1) dy^(2) … dy^(N) dx is minimized over a family of fusion rules Λ based on the given l-sample. Let f* ∈ Λ minimize I(f); f* cannot be computed since the underlying probability distributions are unknown. Three stochastic approximation methods are presented to compute an estimator f̂ such that, under suitable conditions and for a sufficiently large sample, P[I(f̂) − I(f*) > ε] < δ for arbitrarily specified ε > 0 and δ, 0 < δ < 1. The three methods are based on Robbins-Monro style algorithms, empirical risk minimization, and regression estimation algorithms.
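Of the three methods listed, empirical risk minimization is the easiest to sketch: over a linear family of fusion rules, minimizing the empirical squared error is ordinary least squares. The toy problem below (three sensors reporting a scalar x plus Gaussian noise of different strengths) is illustrative only, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setting: N = 3 sensors, scalar parameter x, each sensor reports
# x corrupted by Gaussian noise of a different (unknown) strength.
l = 5000
x = rng.uniform(-1, 1, size=l)
sigmas = np.array([0.1, 0.5, 1.0])
y = x[:, None] + rng.standard_normal((l, 3)) * sigmas

# Empirical risk minimization over linear fusion rules f(y) = y @ w:
# minimizing the empirical squared error over the l-sample.
w = np.linalg.lstsq(y, x, rcond=None)[0]
print(w)  # the most precise sensor receives the largest weight
```

The fitted rule approaches the ideal fusion rule as l grows, which is exactly the P[I(f̂) − I(f*) > ε] < δ guarantee the abstract describes.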
Optimal approximation method to characterize the resource trade-off functions for media servers
NASA Astrophysics Data System (ADS)
Chang, Ray-I.
1999-08-01
We have previously proposed an algorithm to smooth the transmission of pre-recorded VBR media streams. Because it takes O(n) time and n is large, this algorithm is not suitable for online resource management and admission control in media servers. To resolve this drawback, we have explored the optimal tradeoff among resources by an O(n log n) algorithm. Based on the pre-computed resource tradeoff function, the resource management and admission control procedure is as simple as table hashing. However, this approach requires O(n) space to store and maintain the resource tradeoff function. In this paper, given some extra resources, a linear-time algorithm is proposed to approximate the resource tradeoff function by piecewise line segments. We prove that the number of line segments in the obtained approximation function is minimized for the given extra resources. The proposed algorithm has been applied to approximate the bandwidth-buffer tradeoff function of the real-world Star Wars movie. When an extra 0.1 Mbps of bandwidth is given, the storage space required for the approximation function is over 2000 times smaller than that required for the original function. When an extra 10 KB buffer is given, the storage space for the approximation function is over 2200 times smaller than that required for the original function. The proposed algorithm is thus useful for resource management and admission control in real-world media servers.
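A greedy stand-in for the piecewise-linear idea can be sketched directly: extend each line segment as far as the tolerance (the "extra resource") allows, so a looser tolerance needs fewer segments. This is a generic sketch of tolerance-constrained piecewise-linear approximation, not the paper's minimal-segment algorithm; the curve is made up.

```python
import numpy as np

def greedy_segments(y, tol):
    """Cover points (i, y[i]) with straight segments so every point lies
    within `tol` of its segment, greedily extending each segment."""
    n, segs, start = len(y), [], 0
    while start < n - 1:
        end = start + 1
        while end + 1 < n:
            cand = end + 1
            xs = np.arange(start, cand + 1)
            chord = y[start] + (y[cand] - y[start]) * (xs - start) / (cand - start)
            if np.max(np.abs(chord - y[start:cand + 1])) > tol:
                break
            end = cand
        segs.append((start, end))
        start = end
    return segs

# A convex curve needs few segments at loose tolerance, more at tight.
y = np.arange(100) ** 2 / 100.0
loose = greedy_segments(y, tol=5.0)
tight = greedy_segments(y, tol=0.1)
print(len(loose), len(tight))
```

Storing only the segment breakpoints instead of all n samples is what yields the thousand-fold space savings the abstract reports.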
NASA Astrophysics Data System (ADS)
Lin, Xue-lei; Lu, Xin; Ng, Michael K.; Sun, Hai-Wei
2016-10-01
A fast accurate approximation method with multigrid solver is proposed to solve a two-dimensional fractional sub-diffusion equation. Using the finite difference discretization of fractional time derivative, a block lower triangular Toeplitz matrix is obtained where each main diagonal block contains a two-dimensional matrix for the Laplacian operator. Our idea is to make use of the block ɛ-circulant approximation via fast Fourier transforms, so that the resulting task is to solve a block diagonal system, where each diagonal block matrix is the sum of a complex scalar times the identity matrix and a Laplacian matrix. We show that the accuracy of the approximation scheme is of O (ɛ). Because of the special diagonal block structure, we employ the multigrid method to solve the resulting linear systems. The convergence of the multigrid method is studied. Numerical examples are presented to illustrate the accuracy of the proposed approximation scheme and the efficiency of the proposed solver.
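The scalar analogue of the block ε-circulant trick is easy to demonstrate: a circulant approximation of a Toeplitz matrix is diagonalized by the FFT, so the system solve costs O(n log n) instead of O(n²). The sketch below uses a plain circulant (the ε-free case) with made-up data, not the paper's sub-diffusion discretization.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256
c = np.zeros(n)
c[0], c[1] = 2.0, -1.0          # first column of the circulant matrix
b = rng.standard_normal(n)

# A circulant matrix acts by circular convolution, so its eigenvalues
# are fft(c): solve C x = b entirely in the Fourier domain.
x = np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)).real

# Verify against the dense circulant matrix C[i, j] = c[(i - j) mod n].
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
print(np.allclose(C @ x, b))
```

In the paper's block setting, the same diagonalization decouples the time levels, leaving one spatial (Laplacian-plus-shift) system per Fourier mode, which the multigrid solver then handles.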
NASA Astrophysics Data System (ADS)
Kováč, Michal
2015-03-01
Thin-walled centrically compressed members with non-symmetrical or mono-symmetrical cross-sections can buckle in a torsional-flexural buckling mode. Vlasov developed a system of governing differential equations for the stability of such members. Solving these coupled equations analytically is only possible in simple cases. Therefore, Goľdenvejzer introduced an approximate method for the solution of this system to calculate the critical axial force of torsional-flexural buckling; it can also be used for members with various boundary conditions in bending and torsion. This approximate method for the calculation of the critical force has been adopted into norms. Nowadays, we can also solve the governing differential equations by numerical methods, such as the finite element method (FEM). Therefore, in this paper, the results of the approximate method and the FEM were compared to each other, with the FEM taken as the reference method. This comparison reveals the discrepancies of the approximate method. Attention was also paid to when and why discrepancies occur. The approximate method can be used in practice provided some simplifications, which ensure safe results, are taken into account.
A Novel Method of the Generalized Interval-Valued Fuzzy Rough Approximation Operators
Xue, Tianyu; Xue, Zhan'ao; Cheng, Huiru; Liu, Jie; Zhu, Tailong
2014-01-01
Rough set theory is a suitable tool for dealing with the imprecision, uncertainty, incompleteness, and vagueness of knowledge. In this paper, new lower and upper approximation operators for generalized fuzzy rough sets are constructed, and their definitions are expanded to the interval-valued environment. Furthermore, the properties of this type of rough sets are analyzed. These operators are shown to be equivalent to the generalized interval fuzzy rough approximation operators introduced by Dubois, which are determined by any interval-valued fuzzy binary relation expressed in a generalized approximation space. Main properties of these operators are discussed under different interval-valued fuzzy binary relations, and illustrative examples are given to demonstrate the main features of the proposed operators. PMID:25162065
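The classical Dubois-Prade fuzzy rough approximation operators that this work generalizes can be sketched directly; the relation R and fuzzy set A below are illustrative. For a reflexive R, the lower approximation sits pointwise below A and the upper approximation above it.

```python
import numpy as np

def lower_approx(R, A):
    """Fuzzy rough lower approximation:
    (R_A)(x) = min_y max(1 - R(x, y), A(y))."""
    return np.min(np.maximum(1.0 - R, A[None, :]), axis=1)

def upper_approx(R, A):
    """Fuzzy rough upper approximation:
    (R^A)(x) = max_y min(R(x, y), A(y))."""
    return np.max(np.minimum(R, A[None, :]), axis=1)

# A reflexive fuzzy similarity relation on 4 objects, and a fuzzy set A.
R = np.array([[1.0, 0.8, 0.1, 0.0],
              [0.8, 1.0, 0.2, 0.1],
              [0.1, 0.2, 1.0, 0.7],
              [0.0, 0.1, 0.7, 1.0]])
A = np.array([0.9, 0.7, 0.3, 0.1])

lo, up = lower_approx(R, A), upper_approx(R, A)
print(lo, up)  # reflexive R gives lower <= A <= upper elementwise
```

The interval-valued operators of the paper replace the scalar memberships R(x, y) and A(y) with intervals, applying the same max/min structure to both endpoints.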
Approximation methods for inverse problems involving the vibration of beams with tip bodies
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1984-01-01
Two cubic spline based approximation schemes for the estimation of structural parameters associated with the transverse vibration of flexible beams with tip appendages are outlined. The identification problem is formulated as a least squares fit to data subject to the system dynamics which are given by a hybrid system of coupled ordinary and partial differential equations. The first approximation scheme is based upon an abstract semigroup formulation of the state equation while a weak/variational form is the basis for the second. Cubic spline based subspaces together with a Rayleigh-Ritz-Galerkin approach were used to construct sequences of easily solved finite dimensional approximating identification problems. Convergence results are briefly discussed and a numerical example demonstrating the feasibility of the schemes and exhibiting their relative performance for purposes of comparison is provided.
The Investigation of Optimal Discrete Approximations for Real Time Flight Simulations
NASA Technical Reports Server (NTRS)
Parrish, E. A.; Mcvey, E. S.; Cook, G.; Henderson, K. C.
1976-01-01
The results are presented of an investigation of discrete approximations for real time flight simulation. Major topics discussed include: (1) consideration of the particular problem of approximation of continuous autopilots by digital autopilots; (2) use of Bode plots and synthesis of transfer functions by asymptotic fits in a warped frequency domain; (3) an investigation of the various substitution formulas, including the effects of nonlinearities; (4) use of pade approximation to the solution of the matrix exponential arising from the discrete state equations; and (5) an analytical integration of the state equation using interpolated input.
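Point (4), the Padé approximation to the matrix exponential arising from the discrete state equations, can be illustrated with the (1,1) Padé approximant (Tustin's rule); the state matrix below is a made-up lightly damped oscillator, not from the report.

```python
import numpy as np
from scipy.linalg import expm

# Exact discrete transition matrix: exp(A*T). The (1,1) Pade approximant
#   Ad = (I - A*T/2)^{-1} (I + A*T/2)
# replaces it with one linear solve per step -- attractive for real-time
# simulation because it is cheap and A-stable.
A = np.array([[0.0, 1.0],
              [-4.0, -0.5]])      # illustrative lightly damped oscillator
T = 0.01
I = np.eye(2)

Ad_pade = np.linalg.solve(I - A * T / 2, I + A * T / 2)
Ad_exact = expm(A * T)
err = np.max(np.abs(Ad_pade - Ad_exact))
print(err)  # local error is O((A*T)^3)
```

Higher-order Padé approximants trade more linear algebra per step for smaller local error, which is the tradeoff the investigation examines for real-time use.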
Strömberg, Eric A; Nyberg, Joakim; Hooker, Andrew C
2016-12-01
With the increasing popularity of optimal design in drug development it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs based on simulation/estimations was investigated by computing bias of the parameters as well as through the use of an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the Full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block diagonal FIM optimal designs when assuming true parameter values. However, the FO approximated block-reduced FIM designs had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO Full FIM optimal design was superior to the FO block-diagonal FIM design in both of the examples.
Saitoh, T.S.; Hoshi, Akira
1999-07-01
numerical methods (e.g. Saitoh and Kato, 1994). In addition, close-contact melting heat transfer characteristics including melt flow in the liquid film under inner wall temperature distribution were analyzed and simple approximate equations were already presented by Saitoh and Hoshi (1997). In this paper, the authors will propose an analytical solution on combined close-contact and natural convection melting in horizontal cylindrical and spherical capsules, which is useful for the practical capsule bed LHTES system.
Closure to new results for an approximate method for calculating two-dimensional furrow infiltration
USDA-ARS?s Scientific Manuscript database
In a discussion paper, Ebrahimian and Noury (2015) raised several concerns about an approximate solution to the two-dimensional Richards equation presented by Bautista et al (2014). The solution is based on a procedure originally proposed by Warrick et al. (2007). Such a solution is of practical i...
New results for an approximate method for calculating two-dimensional furrow infiltration
USDA-ARS?s Scientific Manuscript database
Warrick et al. (2007) proposed an approximate solution to the two-dimensional Richards equation, which can be used to estimate furrow infiltration based on soil physical properties. The equation computes infiltration as the sum of one-dimensional infiltration and a term labeled the edge effe...
ERIC Educational Resources Information Center
von Davier, Matthias; Sinharay, Sandip
2009-01-01
This paper presents an application of a stochastic approximation EM-algorithm using a Metropolis-Hastings sampler to estimate the parameters of an item response latent regression model. Latent regression models are extensions of item response theory (IRT) to a 2-level latent variable model in which covariates serve as predictors of the…
Existence and uniqueness results for neural network approximations.
Williamson, R C; Helmke, U
1995-01-01
Some approximation theoretic questions concerning a certain class of neural networks are considered. The networks considered are single input, single output, single hidden layer, feedforward neural networks with continuous sigmoidal activation functions, no input weights but with hidden layer thresholds and output layer weights. Specifically, questions of existence and uniqueness of best approximations on a closed interval of the real line under mean-square and uniform approximation error measures are studied. A by-product of this study is a reparametrization of the class of networks considered in terms of rational functions of a single variable. This rational reparametrization is used to apply the theory of Pade approximation to the class of networks considered. In addition, a question related to the number of local minima arising in gradient algorithms for learning is examined.
NASA Astrophysics Data System (ADS)
Erhard, Jannis; Bleiziffer, Patrick; Görling, Andreas
2016-09-01
A power series approximation for the correlation kernel of time-dependent density-functional theory is presented. Using this approximation in the adiabatic-connection fluctuation-dissipation (ACFD) theorem leads to a new family of Kohn-Sham methods. The new methods yield reaction energies and barriers of unprecedented accuracy and enable a treatment of static (strong) correlation with an accuracy of high-level multireference configuration interaction methods but are single-reference methods allowing for a black-box-like handling of static correlation. The new methods exhibit a better scaling of the computational effort with the system size than rivaling wave-function-based electronic structure methods. Moreover, the new methods do not suffer from the problem of singularities in response functions plaguing previous ACFD methods and therefore are applicable to any type of electronic system.
A Diffusion Approximation and Numerical Methods for Adaptive Neuron Models with Stochastic Inputs
Rosenbaum, Robert
2016-01-01
Characterizing the spiking statistics of neurons receiving noisy synaptic input is a central problem in computational neuroscience. Monte Carlo approaches to this problem are computationally expensive and often fail to provide mechanistic insight. Thus, the field has seen the development of mathematical and numerical approaches, often relying on a Fokker-Planck formalism. These approaches force a compromise between biological realism, accuracy and computational efficiency. In this article we develop an extension of existing diffusion approximations to more accurately approximate the response of neurons with adaptation currents and noisy synaptic currents. The implementation refines existing numerical schemes for solving the associated Fokker-Planck equations to improve computationally efficiency and accuracy. Computer code implementing the developed algorithms is made available to the public. PMID:27148036
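The Monte Carlo baseline that the diffusion approximation is meant to replace can be sketched as an Euler-Maruyama simulation of an adaptive leaky integrate-and-fire neuron with white-noise input. All parameters below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative parameters (ms for time constants, arbitrary voltage units).
dt, t_end = 0.1, 20000.0
mu, sigma = 1.2, 1.0                 # input mean and noise intensity
tau_m, tau_a, beta = 10.0, 100.0, 0.5
v_th, v_reset = 1.0, 0.0

n_steps = int(t_end / dt)
noise = sigma * np.sqrt(dt) * rng.standard_normal(n_steps)

v, a, spikes = 0.0, 0.0, 0
for k in range(n_steps):
    # Euler-Maruyama step for membrane potential and adaptation current.
    v += dt * (-v / tau_m + mu - a) + noise[k]
    a -= dt * a / tau_a
    if v >= v_th:                    # threshold crossing: spike and reset
        v = v_reset
        a += beta                    # spike-triggered adaptation increment
        spikes += 1

rate = 1000.0 * spikes / t_end       # spikes per second
print(rate)
```

Estimating the firing rate this way requires long simulations and gives little mechanistic insight, which is why the paper instead solves the associated Fokker-Planck equations for the membrane potential density.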
NASA Technical Reports Server (NTRS)
Banks, H. T.; Smith, Ralph C.; Wang, Yun
1994-01-01
Based on a distributed parameter model for vibrations, an approximate finite dimensional dynamic compensator is designed to suppress vibrations (multiple modes with a broad band of frequencies) of a circular plate with Kelvin-Voigt damping and clamped boundary conditions. The control is realized via piezoceramic patches bonded to the plate and is calculated from information available from several pointwise observed state variables. Examples from computational studies as well as use in laboratory experiments are presented to demonstrate the effectiveness of this design.
Rational approximations, software and test methods for sine and cosine integrals
NASA Astrophysics Data System (ADS)
MacLeod, Allan
1996-09-01
Rational approximations to the sine integral Si(x) and cosine integral Ci(x) are developed which give an accuracy of 20 significant figures. The robust construction of software for these functions is discussed, together with a test procedure for assessing the performance of such codes. Use of the tests discovers a major error in the netlib library FN codes for Si. Fortran versions of the codes and tests are available by electronic mail.
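A quick cross-check for any rational approximation of Si(x) is its Taylor series, which converges rapidly for moderate arguments; a minimal sketch:

```python
from math import factorial

def si_series(x, terms=20):
    """Taylor series of the sine integral:
    Si(x) = sum_{k>=0} (-1)^k x^(2k+1) / ((2k+1) * (2k+1)!)."""
    total = 0.0
    for k in range(terms):
        n = 2 * k + 1
        total += (-1) ** k * x ** n / (n * factorial(n))
    return total

print(si_series(1.0))  # Si(1) = 0.946083070367183...
```

Series like this are exactly what test procedures compare library codes against on the interval where the series is numerically well behaved.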
NASA Astrophysics Data System (ADS)
Kuzenov, V. V.; Ryzhkov, S. V.
2017-02-01
This paper formulates an engineering physical-mathematical model for the aerothermodynamics of a hypersonic flight vehicle (HFV) in laminar and turbulent boundary layers (the model is designed for an approximate estimate of the convective heat flow in the range of speeds M = 6-28 and heights H = 20-80 km). 2D calculations of convective heat flows for bodies of simple geometric form (individual elements of the HFV design) are presented.
Extended proton-neutron quasiparticle random-phase approximation in a boson expansion method
NASA Astrophysics Data System (ADS)
Civitarese, O.; Montani, F.; Reboiro, M.
1999-08-01
The proton-neutron quasiparticle random phase approximation (pn-QRPA) is extended to include next to leading order terms of the QRPA harmonic expansion. The procedure is tested for the case of a separable Hamiltonian in the SO(5) symmetry representation. The pn-QRPA equation of motion is solved by using a boson expansion technique adapted to the treatment of proton-neutron correlations. The resulting wave functions are used to calculate the matrix elements of double-Fermi transitions.
NASA Astrophysics Data System (ADS)
Vasil'Ev, I. A.; Treibsho, E. I.; Korkhov, A. D.; Petrov, V. M.; Orlova, N. G.; Balakina, M. M.
1981-06-01
The article describes the experimental method and presents results of an investigation of the heat capacity of liquid n-alcohols and esters. It examines the method of group approximation of the temperature dependence, using n-alkanes and n-alkenes as examples.
Nakajima, Nobuharu
2013-03-01
Previously, we proposed a lensless coherent imaging method using a nonholographic and noniterative phase-retrieval technique that allows the reconstruction of a complex-valued object from a single diffraction intensity measured with an aperture-array filter. A proof-of-concept experiment for this method was demonstrated under the Fresnel diffraction approximation. In applications to microscopy, however, measurement of the diffraction intensity at high numerical aperture, beyond the Fresnel approximation, is required to obtain object information at high spatial resolution. Thus we have also presented, by means of computer simulations, an extension procedure to apply the method to cases beyond the Fresnel approximation. Here the effectiveness of the procedure is demonstrated experimentally: a reconstruction with about 10 times the resolution of our previous experiment has been achieved, and object information in the depth direction has been retrieved.
NASA Astrophysics Data System (ADS)
Farsakoglu, O. F.; Inal Atik, Ipek; Kocabas, Hikmet
2014-07-01
The effect of Coddington factors on aberration functions is analysed using the thin-lens approximation with optical glass parameters. The dependence of spherical aberration on the Coddington shape factor for various optical glasses in real lens design is discussed using exact ray tracing, for optics education and training purposes. The thin-lens approximation and thick-lens design are generally taught through lectures alone, yet thick-lens design is closely tied to real life. Hence, it is more appropriate to teach the thin-lens approximation and thick-lens design with a real-life, context-based approach. Context-based teaching can be effective when a subject strikes students as difficult or irrelevant, and there is extensive evidence in optics education that students are generally unable to correctly apply the concepts of lens design to optical instruments currently in use. Therefore, an outline of real-life, context-based thick-lens design lessons is proposed and explained in detail, taking the thin-lens approximation into account.
Liang, Xiao; Khaliq, Abdul Q. M.; Xing, Yulong
2015-01-23
In this paper, we study a local discontinuous Galerkin method combined with fourth order exponential time differencing Runge-Kutta time discretization and a fourth order conservative method for solving the nonlinear Schrödinger equations. Based on different choices of numerical fluxes, we propose both energy-conserving and energy-dissipative local discontinuous Galerkin methods, and have proven the error estimates for the semi-discrete methods applied to linear Schrödinger equation. The numerical methods are proven to be highly efficient and stable for long-range soliton computations. Finally, extensive numerical examples are provided to illustrate the accuracy, efficiency and reliability of the proposed methods.
NASA Astrophysics Data System (ADS)
Bisetti, Fabrizio
2012-06-01
Recent trends in hydrocarbon fuel research indicate that the number of species and reactions in chemical kinetic mechanisms is rapidly increasing in an effort to provide predictive capabilities for fuels of practical interest. In order to cope with the computational cost associated with the time integration of stiff, large chemical systems, a novel approach is proposed. The approach combines an exponential integrator and Krylov subspace approximations to the exponential function of the Jacobian matrix. The components of the approach are described in detail and applied to the ignition of stoichiometric methane-air and iso-octane-air mixtures, here described by two widely adopted chemical kinetic mechanisms. The approach is found to be robust even at relatively large time steps and the global error displays a nominal third-order convergence. The performance of the approach is improved by utilising an adaptive algorithm for the selection of the Krylov subspace size, which guarantees an approximation to the matrix exponential within user-defined error tolerance. The Krylov projection of the Jacobian matrix onto a low-dimensional space is interpreted as a local model reduction with a well-defined error control strategy. Finally, the performance of the approach is discussed with regard to the optimal selection of the parameters governing the accuracy of its individual components.
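The Krylov-subspace approximation of exp(A)v at the heart of the approach can be sketched with a plain Arnoldi iteration: project A onto an m-dimensional Krylov space and exponentiate only the small Hessenberg matrix. The matrix below is random and well scaled, not a chemical Jacobian.

```python
import numpy as np
from scipy.linalg import expm

def expv_krylov(A, v, m=20):
    """Approximate exp(A) @ v from an m-dimensional Krylov subspace:
    Arnoldi gives A V_m ~ V_m H_m, so exp(A) v ~ beta V_m expm(H_m) e1."""
    n = len(v)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):        # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:       # happy breakdown: exact subspace
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (expm(H[:m, :m]) @ e1)

rng = np.random.default_rng(5)
A = rng.standard_normal((100, 100)) / 10.0
v = rng.standard_normal(100)
err = np.linalg.norm(expv_krylov(A, v, m=30) - expm(A) @ v)
print(err)
```

Only the m-by-m matrix is exponentiated densely, which is what makes the approach viable for the large stiff systems the abstract describes; adapting m on the fly gives the error control the paper discusses.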
An approximate reasoning-based method for screening high-level-waste tanks for flammable gas
Eisenhawer, S.W.; Bott, T.F.; Smith, R.E.
2000-06-01
The in situ retention of flammable gas produced by radiolysis and thermal decomposition in high-level waste can pose a safety problem if the gases are released episodically into the dome space of a storage tank. Screening efforts at the Hanford site have been directed at identifying tanks in which this situation could exist. Problems encountered in screening motivated an effort to develop an improved screening methodology. Approximate reasoning (AR) is a formalism designed to emulate the kinds of complex judgments made by subject matter experts. It uses inductive logic structures to build a sequence of forward-chaining inferences about a subject. Approximate-reasoning models incorporate natural language expressions known as linguistic variables to represent evidence. The use of fuzzy sets to represent these variables mathematically makes it practical to evaluate quantitative and qualitative information consistently. In a pilot study to investigate the utility of AR for flammable gas screening, the effort to implement such a model was found to be acceptable, and computational requirements were found to be reasonable. The preliminary results showed that important judgments about the validity of observational data and the predictive power of models could be made. These results give new insights into the problems observed in previous screening efforts.
A simple approximate method for obtaining spanwise lift distributions over swept wings
NASA Technical Reports Server (NTRS)
Diederich, Franklin W
1948-01-01
It is shown how Schrenk's empirical method of estimating the lift distribution over straight wings can be adapted to swept wings by replacing the elliptical distribution by a new "ideal" distribution which varies with sweep. The application of the method is discussed in detail and several comparisons are made to show the agreement of the proposed method with more rigorous ones. It is shown how first-order compressibility corrections applicable to subcritical speeds may be included in this method.
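Schrenk's original recipe, which the report adapts to swept wings, averages the planform chord distribution with an elliptical distribution of equal area; a sketch for a made-up trapezoidal wing (the report's sweep-dependent "ideal" distribution would replace the ellipse):

```python
import numpy as np

b = 10.0                                    # span (illustrative numbers)
y = np.linspace(-b / 2, b / 2, 201)
dy = y[1] - y[0]
c_root, c_tip = 2.0, 1.0
chord = c_root + (c_tip - c_root) * np.abs(2 * y / b)   # trapezoidal planform

S = np.sum(chord) * dy                      # wing area (rectangle rule)
ellipse = (4 * S / (np.pi * b)) * np.sqrt(1 - (2 * y / b) ** 2)

# Schrenk: the lift distribution is the average of planform and ellipse.
schrenk = 0.5 * (chord + ellipse)
area_schrenk = np.sum(schrenk) * dy
print(area_schrenk, S)                      # areas agree by construction
```

Because the ellipse is scaled to the wing area, the averaged distribution carries the same total lift while shifting load toward the elliptical ideal.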
A Method to Approximate and Statistically Model the Shape of Triggered Landslides
NASA Astrophysics Data System (ADS)
Taylor, F. E.; Malamud, B. D.
2014-12-01
The planimetric shape of an individual landslide area is controlled by factors such as terrain morphology, material involved and speed, with landslide shapes varying in total area (AL), type of shape, and their length-to-width (L/W) ratios. Here, we abstract landslide shapes to ellipses, and examine how the corresponding L/W ratios vary as a function of AL in two substantially complete triggered landslide inventories: (i) 11,111 landslides triggered by the 1994 (M = 6.7) Northridge Earthquake, USA, and (ii) 9,594 landslides triggered by heavy rain during the 1998 Hurricane Mitch in Guatemala. For each landslide, an ellipse with equivalent area (AL) and perimeter (PL) of the original shape was created and a non-dimensional value of the ratio of the ellipse length-to-width (L/W) then calculated. Using Maximum Likelihood Estimation, the statistical distribution of landslide L/W ratio values was then considered for ten landslide area (AL in m²) categories: 0-99, 100-199, 200-399, 400-799, 800-1599, 1600-3199, 3200-6399, 6400-12,799, 12,800-25,599, and ≥25,600 m². We find that, for each of the landslide area categories considered separately, the probability density function p(L/W) as a function of L/W approximately follows a three-parameter inverse gamma distribution, which has a power-law decay for medium and large L/W values and an exponential rollover for small L/W values. The 'rollover' value where p(L/W) is at its maximum tends to increase with increasing AL category, from approximately L/W = 1.7 for landslides in the smallest AL category (0 < AL < 99 m²) to L/W = 7.5 for landslides in the largest AL category (AL ≥ 25,600 m²). Broadly, this suggests that as AL increases, L/W increases, i.e. as landslide areas increase, the probability of observing a more elongated shape increases. There is generally good agreement between the two inventories' statistical distributions in spite of differences in location, triggering mechanism and geology. This work will aid in
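The ellipse-abstraction step can be sketched by inverting Ramanujan's perimeter approximation to recover the L/W ratio of the ellipse matching a shape's area and perimeter; the test shape below is a made-up 20 m by 5 m ellipse, not inventory data.

```python
import numpy as np
from scipy.optimize import brentq

def ellipse_ratio(area, perimeter):
    """L/W ratio r = a/b of the ellipse with the given area and perimeter,
    using Ramanujan's approximation P ~ pi*(3(a+b) - sqrt((3a+b)(a+3b)))."""
    def perim(r):
        a = np.sqrt(area * r / np.pi)          # semi-major axis for ratio r
        b = np.sqrt(area / (np.pi * r))        # semi-minor axis
        return np.pi * (3 * (a + b) - np.sqrt((3 * a + b) * (a + 3 * b)))
    # perim(r) grows monotonically with elongation, so bracket and solve.
    return brentq(lambda r: perim(r) - perimeter, 1.0, 1e4)

# Round-trip check: an ellipse with a = 20, b = 5 should give L/W = 4.
a_true, b_true = 20.0, 5.0
area = np.pi * a_true * b_true
per = np.pi * (3 * (a_true + b_true)
               - np.sqrt((3 * a_true + b_true) * (a_true + 3 * b_true)))
ratio = ellipse_ratio(area, per)
print(ratio)  # ~ 4.0
```

Applying this to each landslide polygon's (AL, PL) pair yields the non-dimensional L/W values whose distribution the inverse gamma fit describes.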
Evidence of iridescence in TiO2 nanostructures: An approximation in plane wave expansion method
NASA Astrophysics Data System (ADS)
Quiroz, Heiddy P.; Barrera-Patiño, C. P.; Rey-González, R. R.; Dussan, A.
2016-11-01
Titanium dioxide nanotubes (TiO2 NTs) can be obtained by electrochemical anodization of titanium sheets. After the nanotubes are removed by mechanical stress, residual structures or traces can be observed on the surface of the titanium sheets. These traces show iridescent effects. In this paper we carry out both an experimental and a theoretical study of these interesting and novel optical properties. For the experimental analysis we use angle-resolved UV-vis spectroscopy, while in the theoretical study the photonic spectra are evaluated using numerical simulations in the frequency domain within the framework of the plane-wave approximation. The iridescent effect is robust and independent of the sample. This behavior can be important for designing new materials or compounds for several applications, such as the cosmetics industry, optoelectronic devices, photocatalysis, and sensors, among others.
NASA Technical Reports Server (NTRS)
Bailey, Harry E.; Beam, Richard M.
1991-01-01
Finite-difference approximations for steady-state compressible Navier-Stokes equations, whose two spatial dimensions are written in generalized curvilinear coordinates and strong conservation-law form, are presently solved by means of Newton's method in order to obtain a lifting-airfoil flow field under subsonic and transonic conditions. In addition to ascertaining the computational requirements of an initial guess ensuring convergence and the degree of computational efficiency obtainable via the approximate Newton method's freezing of the Jacobian matrices, attention is given to the need for auxiliary methods for assessing the temporal stability of steady-state solutions. It is demonstrated that nonunique solutions of the finite-difference equations are obtainable by Newton's method in conjunction with a continuation method.
Integral approximants for functions of higher monodromic dimension
Baker, G.A. Jr.
1987-01-01
In addition to the description of multiform, locally analytic functions as covering a many-sheeted version of the complex plane, Riemann also introduced the notion of considering them as describing a space whose "monodromic" dimension is the number of linearly independent coverings by the monogenic analytic function at each point of the complex plane. I suggest that this latter concept is natural for integral approximants (a sub-class of Hermite-Pade approximants) and discuss results for both "horizontal" and "diagonal" sequences of approximants. Some theorems are now available in both cases and make clear that the natural domain of convergence of the horizontal sequences is a disk centered on the origin, and that of the diagonal sequences is a suitably cut complex plane together with its identically cut pendant Riemann sheets. Some numerical examples have also been computed.
Kim, S.
1994-12-31
Parallel iterative procedures based on domain decomposition techniques are defined and analyzed for the numerical solution of wave propagation by finite element and finite difference methods. For finite element methods, in a Lagrangian framework, an efficient way for choosing the algorithm parameter as well as the algorithm convergence are indicated. Some heuristic arguments for finding the algorithm parameter for finite difference schemes are addressed. Numerical results are presented to indicate the effectiveness of the methods.
NASA Astrophysics Data System (ADS)
Rosolen, A.; Peco, C.; Arroyo, M.
2013-09-01
We present an adaptive meshfree method to approximate phase-field models of biomembranes. In such models, the Helfrich curvature elastic energy, the surface area, and the enclosed volume of a vesicle are written as functionals of a continuous phase-field, which describes the interface in a smeared manner. Such functionals involve up to second-order spatial derivatives of the phase-field, leading to fourth-order Euler-Lagrange partial differential equations (PDE). The solutions develop sharp internal layers in the vicinity of the putative interface, and are nearly constant elsewhere. Thanks to the smoothness of the local maximum-entropy (max-ent) meshfree basis functions, we approximate numerically this high-order phase-field model with a direct Ritz-Galerkin method. The flexibility of the meshfree method allows us to easily adapt the grid to resolve the sharp features of the solutions. Thus, the proposed approach is more efficient than common tensor product methods (e.g. finite differences or spectral methods), and simpler than unstructured C0 finite element methods, applicable by reformulating the model as a system of second-order PDE. The proposed method, implemented here under the assumption of axisymmetry, allows us to show numerical evidence of convergence of the phase-field solutions to the sharp interface limit as the regularization parameter approaches zero. In a companion paper, we present a Lagrangian method based on the approximants analyzed here to study the dynamics of vesicles embedded in a viscous fluid.
NASA Astrophysics Data System (ADS)
Varela, Alberto J.; Calvo, Maria L.
1995-04-01
We present a comparative study between two experimental methods to determine the modulation transfer function (MTF) of a hololens system. The two hololenses were previously recorded and tested for filtering pseudocolor. In the first method we used the classical Foucault test. The second, alternative method is based on the digital image processing of a perfect edge under incoherent illumination. From the digitized intensity line profiles we obtain the MTF and cutoff frequency of the optical system according to the reciprocity between line spread function and MTF. Comments are made on the applicability and accuracy of these two methods.
Approximate method for calculating transonic flow about lifting wing-body configurations
NASA Technical Reports Server (NTRS)
Barnwell, R. W.
1976-01-01
The three-dimensional problem of transonic flow about lifting wing-body configurations is reduced to a two-variable computational problem with the method of matched asymptotic expansions. The computational problem is solved with the method of relaxation. The method accounts for leading-edge separation, the presence of shock waves, and the presence of solid, slotted, or porous tunnel walls. The Mach number range of the method extends from zero to the supersonic value at which the wing leading edge becomes sonic. A modified form of the transonic area rule which accounts for the effect of lift is developed. This effect is explained from simple physical considerations.
Freeze, G.A.; Larson, K.W.; Davies, P.B.
1995-10-01
Eight alternative methods for approximating salt creep and disposal room closure in a multiphase flow model of the Waste Isolation Pilot Plant (WIPP) were implemented and evaluated: three fixed-room geometries, three porosity functions, and two fluid-phase-salt methods. The pressure-time-porosity line interpolation method is the method used in current WIPP Performance Assessment calculations. The room closure approximation methods were calibrated against a series of room closure simulations performed using a creep closure code, SANCHO. The fixed-room geometries did not incorporate a direct coupling between room void volume and room pressure. The two porosity function methods utilized moles of gas as an independent parameter for closure coupling. The capillary backstress method was unable to accurately simulate conditions of re-closure of the room. Two methods were found to be accurate enough to approximate the effects of room closure: the boundary backstress method and pressure-time-porosity line interpolation. The boundary backstress method is a more reliable indicator of system behavior due to a theoretical basis for modeling salt deformation as a viscous process. It is a complex method, and a detailed calibration process is required. The pressure lines method is thought to be less reliable because the results were skewed towards SANCHO results in simulations where the sequence of gas generation was significantly different from the SANCHO gas-generation rate histories used for closure calibration. This limitation in the pressure lines method is most pronounced at higher gas-generation rates and is relatively insignificant at lower gas-generation rates. Due to its relative simplicity, the pressure lines method is easier to implement in multiphase flow codes, and simulations have a shorter execution time.
A 3D finite element ALE method using an approximate Riemann solution
Chiravalle, V. P.; Morgan, N. R.
2016-08-09
Arbitrary Lagrangian–Eulerian finite volume methods that solve a multidimensional Riemann-like problem at the cell center in a staggered grid hydrodynamic (SGH) arrangement have been proposed. This research proposes a new 3D finite element arbitrary Lagrangian–Eulerian SGH method that incorporates a multidimensional Riemann-like problem. Here, two different Riemann jump relations are investigated. A new limiting method that greatly improves the accuracy of the SGH method on isentropic flows is investigated. A remap method that improves upon a well-known mesh relaxation and remapping technique in order to ensure total energy conservation during the remap is also presented. Numerical details and test problem results are presented.
NASA Astrophysics Data System (ADS)
Dutt, Ranabir; Mukherji, Uma
1982-08-01
We propose a new approximation scheme to obtain analytic expressions for the bound-state energies and eigenfunctions for any arbitrary bound nl-state of the Hulthén potential. The predicted energies Enl are in excellent agreement with the perturbative results of Lai and Lin. The scope for an extension of the method to the continuum states is also discussed.
Accurate finite difference methods for time-harmonic wave propagation
NASA Technical Reports Server (NTRS)
Harari, Isaac; Turkel, Eli
1994-01-01
Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Pade approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
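As a baseline for the higher-order schemes discussed in the abstract, a plain second-order central-difference solve of a 1D time-harmonic (Helmholtz-type) problem against a manufactured solution might look like the sketch below; the wavenumber, grid, and forcing are assumptions for illustration, not the paper's fourth-order stencils.

```python
import numpy as np

k = 2.0                                    # wavenumber (assumed)
n = 200
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]
exact = np.sin(np.pi * x)                  # manufactured solution
f = (k**2 - np.pi**2) * np.sin(np.pi * x)  # forcing so u'' + k^2 u = f

# Interior equations: (u_{i-1} - 2 u_i + u_{i+1})/h^2 + k^2 u_i = f_i.
main = np.full(n - 1, -2.0 / h**2 + k**2)
off = np.full(n - 2, 1.0 / h**2)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

u = np.zeros(n + 1)                        # Dirichlet: u(0) = u(1) = 0
u[1:-1] = np.linalg.solve(A, f[1:-1])
err = np.max(np.abs(u - exact))            # second-order accurate in h
```

The Pade-based stencils of the paper would reduce `err` further at the same grid resolution by cutting dispersion error.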
An approximate-reasoning-based method for screening high-level waste tanks for flammable gas
Eisenhawer, S.W.; Bott, T.F.; Smith, R.E.
1998-07-01
The in situ retention of flammable gas produced by radiolysis and thermal decomposition in high-level waste can pose a safety problem if the gases are released episodically into the dome space of a storage tank. Screening efforts at Hanford have been directed at identifying tanks in which this situation could exist. Problems encountered in screening motivated an effort to develop an improved screening methodology. Approximate reasoning (AR) is a formalism designed to emulate the kinds of complex judgments made by subject matter experts. It uses inductive logic structures to build a sequence of forward-chaining inferences about a subject. AR models incorporate natural language expressions known as linguistic variables to represent evidence. The use of fuzzy sets to represent these variables mathematically makes it practical to evaluate quantitative and qualitative information consistently. The authors performed a pilot study to investigate the utility of AR for flammable gas screening. They found that the effort to implement such a model was acceptable and that computational requirements were reasonable. The preliminary results showed that important judgments about the validity of observational data and the predictive power of models could be made. These results give new insights into the problems observed in previous screening efforts.
High order filtering methods for approximating hyberbolic systems of conservation laws
NASA Technical Reports Server (NTRS)
Lafon, F.; Osher, S.
1990-01-01
In the computation of discontinuous solutions of hyperbolic systems of conservation laws, the recently developed essentially non-oscillatory (ENO) schemes appear to be very useful. However, they are computationally costly compared to simple central difference methods. A filtering method which is developed uses simple central differencing of arbitrarily high order accuracy, except when a novel local test indicates the development of spurious oscillations. At these points, the full ENO apparatus is used, maintaining the high order of accuracy, but removing spurious oscillations. Numerical results indicate the success of the method. High order of accuracy was obtained in regions of smooth flow without spurious oscillations for a wide range of problems and a significant speed up of generally a factor of almost three over the full ENO method.
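The filtering idea can be sketched in a much-simplified form (this is not the authors' ENO implementation): apply a high-order central difference everywhere, and fall back to a robust low-order difference only where a crude local oscillation test trips. The detector threshold and the first-order fallback are assumptions for illustration.

```python
import numpy as np

def filtered_derivative(u, dx, threshold=2.0):
    """Fourth-order central differencing with a crude oscillation filter."""
    du = np.empty_like(u)
    # fourth-order central difference in the interior
    du[2:-2] = (-u[4:] + 8*u[3:-1] - 8*u[1:-3] + u[:-4]) / (12*dx)
    # low-order one-sided differences at the boundaries
    du[:2] = (u[1:3] - u[0:2]) / dx
    du[-2:] = (u[-2:] - u[-3:-1]) / dx
    # oscillation detector: a large second difference flags non-smooth points
    d2 = np.zeros_like(u)
    d2[1:-1] = np.abs(u[2:] - 2*u[1:-1] + u[:-2])
    flagged = d2 > threshold * dx          # heuristic scaling (assumption)
    # fall back to first-order backward differences at flagged points
    fo = np.empty_like(u)
    fo[1:] = (u[1:] - u[:-1]) / dx
    fo[0] = fo[1]
    du[flagged] = fo[flagged]
    return du

x = np.linspace(0.0, 2*np.pi, 201)
du = filtered_derivative(np.sin(x), x[1] - x[0])
```

In the full method the flagged points would invoke the ENO stencil selection rather than a first-order difference, preserving high order at the discontinuity.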
NASA Astrophysics Data System (ADS)
Shigeta, Yasuteru; Nagao, Hidemi; Nishikawa, Kiyoshi; Yamaguchi, Kizashi
1999-10-01
We have proposed a new numerical scheme for non-Born-Oppenheimer density functional calculations based upon Green function techniques within the GW approximation for evaluating molecular properties in a full quantum mechanical treatment. We numerically calculate the physical properties of the individual motion in a hydrogen molecule and a muon molecule by means of this method and discuss the isotope effect on the properties in relation to correlation effects. It is concluded that the GW approximation works well not only for the calculation of the electronic state but also for that of the nuclear state.
An approximate-reasoning-based method for screening flammable gas tanks
Eisenhawer, S.W.; Bott, T.F.; Smith, R.E.
1998-03-01
High-level waste (HLW) produces flammable gases as a result of radiolysis and thermal decomposition of organics. Under certain conditions, these gases can accumulate within the waste for extended periods and then be released quickly into the dome space of the storage tank. As part of the effort to reduce the safety concerns associated with flammable gas in HLW tanks at Hanford, a flammable gas watch list (FGWL) has been established. Inclusion on the FGWL is based on criteria intended to measure the risk associated with the presence of flammable gas. It is important that all high-risk tanks be identified with high confidence so that they may be controlled. Conversely, to minimize operational complexity, the number of tanks on the watchlist should be reduced as near to the true number of flammable risk tanks as the current state of knowledge will support. This report presents an alternative to existing approaches for FGWL screening based on the theory of approximate reasoning (AR) (Zadeh 1976). The AR-based model emulates the inference process used by an expert when asked to make an evaluation. The FGWL model described here was exercised by performing two evaluations. (1) A complete tank evaluation where the entire algorithm is used. This was done for two tanks, U-106 and AW-104. U-106 is a single shell tank with large sludge and saltcake layers. AW-104 is a double shell tank with over one million gallons of supernate. Both of these tanks had failed the screening performed by Hodgson et al. (2) Partial evaluations using a submodule for the predictor likelihood for all of the tanks on the FGWL that had been flagged previously by Whitney (1995).
High order filtering methods for approximating hyperbolic systems of conservation laws
NASA Technical Reports Server (NTRS)
Lafon, F.; Osher, S.
1991-01-01
The essentially nonoscillatory (ENO) schemes, while potentially useful in the computation of discontinuous solutions of hyperbolic conservation-law systems, are computationally costly relative to simple central-difference methods. A filtering technique is presented which employs central differencing of arbitrarily high-order accuracy except where a local test detects the presence of spurious oscillations and calls upon the full ENO apparatus to remove them. A factor-of-three speedup is thus obtained over the full-ENO method for a wide range of problems, with high-order accuracy in regions of smooth flow.
Tang, Yiping
2005-11-22
The recently proposed first-order mean-spherical approximation (FMSA) [Y. Tang, J. Chem. Phys. 121, 10605 (2004)] for inhomogeneous fluids is extended to the study of interfacial phenomena. Computation is performed for the Lennard-Jones fluid, in which all phase equilibria properties and direct correlation function for density-functional theory are developed consistently and systematically from FMSA. Three functional methods, including fundamental measure theory for the repulsive force, local-density approximation, and square-gradient approximation, are applied in this interfacial investigation. Comparisons with the latest computer simulation data indicate that FMSA is satisfactory in predicting surface tension, density profile, as well as relevant phase equilibria. Furthermore, this work strongly suggests that FMSA is very capable of unifying homogeneous and inhomogeneous fluids, as well as those behaviors outside and inside the critical region within one framework.
Orzada, Stephan; Ladd, Mark E; Bitz, Andreas K
2017-08-01
To calculate local specific absorption rate (SAR) correctly, both the amplitude and phase of the signal in each transmit channel have to be known. In this work, we propose a method to derive a conservative upper bound for the local SAR, with a reasonable safety margin without knowledge of the transmit phases of the channels. The proposed method uses virtual observation points (VOPs). Correction factors are calculated for each set of VOPs that prevent underestimation of local SAR when the VOPs are applied with the correct amplitudes but fixed phases. The proposed method proved to be superior to the worst-case calculation based on the maximum eigenvalue of the VOPs. The mean overestimation for six coil setups could be reduced, whereas no underestimation of the maximum local SAR occurred. In the best investigated case, the overestimation could be reduced from a factor of 3.3 to a factor of 1.7. The upper bound for the local SAR calculated with the proposed method allows a fast estimation of the local SAR based on power measurements in the transmit channels and facilitates SAR monitoring in systems that do not have the capability to monitor transmit phases. Magn Reson Med 78:805-811, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
NASA Astrophysics Data System (ADS)
Szalay, Viktor
1999-11-01
The reconstruction of a function from knowing only its values on a finite set of grid points, that is, the construction of an analytical approximation reproducing the function with good accuracy everywhere within the sampled volume, is an important problem in all branches of science. One such problem in chemical physics is the determination of an analytical representation of Born-Oppenheimer potential energy surfaces by ab initio calculations, which give the value of the potential at a finite set of grid points in configuration space. This article describes the rudiments of iterative and direct methods of potential surface reconstruction. The major new results are the derivation, numerical demonstration, and interpretation of a reconstruction formula. The reconstruction formula derived approximates the unknown function, say V, by a linear combination of functions obtained by discretizing the continuous distributed approximating functional (DAF) approximation of V over the grid of sampling. The simplest of contracted and ordinary Hermite-DAFs are shown to be sufficient for reconstruction. The linear combination coefficients can be obtained either iteratively or directly by finding the minimal norm least-squares solution of a linear system of equations. Several numerical examples of reconstructing functions of one and two variables, and of very different shapes, are given. The examples demonstrate the robustness, high accuracy, as well as the caveats of the proposed method. As to the mathematical foundation of the method, it is shown that the reconstruction formula can be interpreted as, and in fact is, a frame expansion. By recognizing the relevance of frames in determining analytical approximations to potential energy surfaces, an extremely rich and beautiful toolbox of mathematics has been placed at our disposal. Thus, the simple reconstruction method derived in this paper can be refined, extended, and improved in numerous ways.
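The direct (minimal norm least-squares) route described in the abstract can be sketched with plain Gaussian basis functions standing in for the Hermite-DAFs; the target "potential", grid, and basis width below are assumptions for illustration.

```python
import numpy as np

# Sampled values of an assumed smooth target function V(x) on a grid.
x_grid = np.linspace(-1.0, 1.0, 25)                # sampling grid
v_grid = np.exp(-x_grid**2) * np.cos(3*x_grid)     # "ab initio" values V(x_i)

centers = x_grid                                    # one basis per grid point
width = 0.25                                        # assumed basis width
A = np.exp(-(x_grid[:, None] - centers[None, :])**2 / (2*width**2))

# np.linalg.lstsq returns the minimal-norm least-squares solution,
# which is what the direct method calls for.
coeffs, *_ = np.linalg.lstsq(A, v_grid, rcond=None)

def v_approx(x):
    """Evaluate the reconstructed function anywhere in the sampled volume."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    basis = np.exp(-(x[:, None] - centers[None, :])**2 / (2*width**2))
    return basis @ coeffs

x_test = np.linspace(-0.9, 0.9, 101)                # off-grid evaluation
err = np.max(np.abs(v_approx(x_test) - np.exp(-x_test**2)*np.cos(3*x_test)))
```

The frame-expansion interpretation means the same coefficients could also be obtained iteratively, which matters when the grid is large.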
NASA Astrophysics Data System (ADS)
Duan, Beiping; Zheng, Zhoushun; Cao, Wen
2016-08-01
In this paper, we revisit two spectral approximations, truncated approximation and interpolation, for the Caputo fractional derivative. The two approaches have been studied for approximating the Riemann-Liouville (R-L) fractional derivative by Chen et al. and Zayernouri et al., respectively, in their most recent work. For the truncated approximation, the reconsideration partly arises from the difference between the fractional derivative in the R-L sense and in the Caputo sense: the Caputo fractional derivative requires higher regularity of the unknown than the R-L version. Another reason for the reconsideration is that we distinguish the differential order of the unknown from the index of the Jacobi polynomials, which was not done in the previous work. We also provide a way to choose the index when facing multi-order problems. By using the generalized Hardy inequality, the gap between the weighted Sobolev space involving the Caputo fractional derivative and the classical weighted space is bridged; the optimal projection error is then derived in the non-uniformly Jacobi-weighted Sobolev space, and the maximum absolute error is presented as well. For the interpolation, an analysis of the interpolation error was not given in the earlier work. In this paper we establish the interpolation error in the non-uniformly Jacobi-weighted Sobolev space by constructing a fractional inverse inequality. Combined with the collocation method, the approximation technique is applied to solve fractional initial-value problems (FIVPs). Numerical examples are also provided to illustrate the effectiveness of this algorithm.
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki; Bey, Kim S.; Hou, Gene J. W.
2004-01-01
A recent paper is generalized to a case where the spatial region is taken in R(sup 3). The region is assumed to be a thin body, such as a panel on the wing or fuselage of an aerospace vehicle. The traditional h- as well as hp-finite element methods are applied to the surface defined in the x - y variables, while, through the thickness, the technique of the p-element is employed. A time and spatial discretization scheme, based upon an assumption of a certain weak singularity of ||u(sub t)||(sub 2), is used to derive an optimal a priori error estimate for the current method.
Window-based method for approximating the Hausdorff in three-dimensional range imagery
Koch, Mark W.
2009-06-02
One approach to pattern recognition is to use a template from a database of objects and match it to a probe image containing the unknown. Accordingly, the Hausdorff distance can be used to measure the similarity of two sets of points. In particular, the Hausdorff can measure the goodness of a match in the presence of occlusion, clutter, and noise. However, existing 3D algorithms for calculating the Hausdorff are computationally intensive, making them impractical for pattern recognition that requires scanning of large databases. The present invention is directed to a new method that can efficiently, in time and memory, compute the Hausdorff for 3D range imagery. The method uses a window-based approach.
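For reference, the plain (non-windowed) symmetric Hausdorff distance between two 3D point clouds can be computed with SciPy; the patented window-based method accelerates this kind of computation for range imagery. The synthetic template/probe pair below is an illustration.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Synthetic template point cloud and a noisy copy playing the probe.
rng = np.random.default_rng(0)
template = rng.uniform(-1.0, 1.0, size=(500, 3))
probe = template + rng.normal(0.0, 0.01, size=template.shape)

# The symmetric Hausdorff distance is the max of the two directed
# distances; small here because probe is a slightly perturbed template.
h = max(directed_hausdorff(template, probe)[0],
        directed_hausdorff(probe, template)[0])
```

A brute-force nearest-neighbour scan like this is O(nm) per pair, which is exactly the cost the window-based approach is designed to avoid when matching against large databases.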
Testing a Novel Method to Approximate Wood Specific Gravity of Trees
Michael C. Wiemann; G. Bruce. Williamson
2012-01-01
Wood specific gravity (SG) has long been used by foresters as an index for wood properties. More recently, SG has been widely used by ecologists as a plant functional trait and as a key variable in estimates of biomass. However, sampling wood to determine SG can be problematic; at present, the most common method is sampling with an increment borer to extract a bark-to-...
NASA Astrophysics Data System (ADS)
Cao, Zhanli; Wang, Fan; Yang, Mingli
2016-10-01
Various approximate approaches to calculate cluster amplitudes in equation-of-motion coupled-cluster (EOM-CC) approaches for ionization potentials (IP) and electron affinities (EA) with spin-orbit coupling (SOC) included in post self-consistent field (SCF) calculations are proposed to reduce computational effort. Our results indicate that EOM-CC based on cluster amplitudes from the approximate method CCSD-1, where the singles equation is the same as that in CCSD and the doubles amplitudes are approximated with MP2, is able to provide reasonable IPs and EAs when SOC is not present compared with CCSD results. It is an economical approach for calculating IPs and EAs and is not as sensitive to strong correlation as CC2. When SOC is included, the approximate method CCSD-3, where the same singles equation as that in SOC-CCSD is used and the doubles equation of scalar-relativistic CCSD is employed, gives rise to IPs and EAs that are in closest agreement with those of CCSD. However, SO splitting with EOM-CC from CC2 generally agrees best with that with CCSD, while that of CCSD-1 and CCSD-3 is less accurate. This indicates that a balanced treatment of SOC effects on both single and double excitation amplitudes is required to achieve reliable SO splitting.
Interpolation and Approximation Theory.
ERIC Educational Resources Information Center
Kaijser, Sten
1991-01-01
Introduced are the basic ideas of interpolation and approximation theory through a combination of theory and exercises written for extramural education at the university level. Topics treated are spline methods, Lagrange interpolation, trigonometric approximation, Fourier series, and polynomial approximation. (MDH)
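A one-line illustration of the Lagrange interpolation topic treated above: four samples determine a cubic exactly, and SciPy's `lagrange` returns the interpolating polynomial.

```python
import numpy as np
from scipy.interpolate import lagrange

x = np.array([0.0, 1.0, 2.0, 3.0])
y = x**3 - 2*x + 1                # sample a cubic at four nodes
p = lagrange(x, y)                # degree-3 interpolating polynomial
error = abs(p(1.5) - (1.5**3 - 2*1.5 + 1))  # exact up to roundoff
```

The same exercise with more nodes on an equispaced grid motivates the spline methods also covered in the material, since high-degree Lagrange interpolation on equispaced nodes suffers from the Runge phenomenon.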
Approximate Dirichlet Boundary Conditions in the Generalized Finite Element Method (PREPRINT)
2006-02-01
[Garbled extraction: fragments cite the works of Babuška, Bramble and Nitsche, and Bramble and Schatz, among others, for examples of how this approach works in practice, together with pieces of a Taylor-polynomial construction and reference-list entries.]
Approximate Methods for Obtaining the Complex Natural Electromagnetic Oscillations of an Object.
1984-02-01
[Garbled extraction: fragments discuss studying Prony's method for other scatterers and solutions to the problems inherent in the Prony process (suggested by E. M. Kennaugh); note that the search procedure is time consuming in machine computing and cannot be used to process measured scattering data; and define the percent error of the real and imaginary parts of the extracted poles relative to the true poles.]
Approximate natural vibration analysis of rectangular plates with openings using assumed mode method
NASA Astrophysics Data System (ADS)
Cho, Dae Seung; Vladimir, Nikola; Choi, Tae MuK
2013-09-01
Natural vibration analysis of plates with openings of different shape represents an important issue in naval architecture and ocean engineering applications. In this paper, a procedure for vibration analysis of plates with openings and arbitrary edge constraints is presented. It is based on the assumed mode method, where natural frequencies and modes are determined by solving an eigenvalue problem of a multi-degree-of-freedom system matrix equation derived by using Lagrange's equations of motion. The presented solution represents an extension of a procedure for natural vibration analysis of rectangular plates without openings, which has been recently presented in the literature. The effect of an opening is taken into account in an intuitive way, i.e. by subtracting its energy from the total plate energy without opening. Illustrative numerical examples include dynamic analysis of rectangular plates with rectangular, elliptic, circular as well as oval openings with various plate thicknesses and different combinations of boundary conditions. The results are compared with those obtained by the finite element method (FEM) as well as those available in the relevant literature, and very good agreement is achieved.
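The multi-degree-of-freedom matrix equation mentioned in the abstract leads to a generalized eigenvalue problem of the form K q = ω² M q for the stiffness and mass matrices of the reduced system. A toy 2-DOF sketch (the matrices are illustrative, not plate data):

```python
import numpy as np
from scipy.linalg import eigh

K = np.array([[4.0, -1.0], [-1.0, 2.0]])   # symmetric stiffness (toy)
M = np.array([[2.0, 0.0], [0.0, 1.0]])     # symmetric positive definite mass

# scipy.linalg.eigh solves the generalized problem K q = lambda M q
# and returns eigenvalues sorted ascending.
omega_sq, modes = eigh(K, M)
frequencies = np.sqrt(omega_sq)             # natural frequencies omega_i

# Verify the eigenpairs satisfy K q_i = omega_i^2 M q_i.
residual = np.max(np.abs(K @ modes - (M @ modes) * omega_sq))
```

In the assumed mode method, K and M would be assembled from Lagrange's equations using the admissible plate modes, with the opening's energy subtracted from the entries.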
Stewart, James J P
2007-12-01
Several modifications that have been made to the NDDO core-core interaction term and to the method of parameter optimization are described. These changes have resulted in a more complete parameter optimization, called PM6, which has, in turn, allowed 70 elements to be parameterized. The average unsigned error (AUE) between calculated and reference heats of formation for 4,492 species was 8.0 kcal mol(-1). For the subset of 1,373 compounds involving only the elements H, C, N, O, F, P, S, Cl, and Br, the PM6 AUE was 4.4 kcal mol(-1). The equivalent AUE for other methods were: RM1: 5.0, B3LYP 6-31G*: 5.2, PM5: 5.7, PM3: 6.3, HF 6-31G*: 7.4, and AM1: 10.0 kcal mol(-1). Several long-standing faults in AM1 and PM3 have been corrected and significant improvements have been made in the prediction of geometries.
NASA Astrophysics Data System (ADS)
Yang, Xiaofeng; Zhao, Jia; Wang, Qi
2017-03-01
The molecular beam epitaxial (MBE) model is derived from the variation of a free energy that consists of either a fourth-order Ginzburg-Landau double-well potential or a nonlinear logarithmic potential in terms of the gradient of a height function. One challenge in solving the MBE model numerically is how to develop proper temporal discretizations for the nonlinear terms in order to preserve energy stability at the time-discrete level. In this paper, we resolve this issue by developing first- and second-order time-stepping schemes based on the "Invariant Energy Quadratization" (IEQ) method. The novelty is that all nonlinear terms are treated semi-explicitly, and the resulting semi-discrete equations form a linear system at each time step. Moreover, the linear operator is symmetric positive definite and thus can be solved efficiently. We then prove that all proposed schemes are unconditionally energy stable. The semi-discrete schemes are further discretized in space using finite difference methods and implemented on GPUs for high-performance computing. Various 2D and 3D numerical examples are presented to demonstrate the stability and accuracy of the proposed schemes.
NASA Astrophysics Data System (ADS)
Cakmakci, Ozan
today are functions mapping two dimensional vectors to real numbers. The majority of optical designs to-date have relied on conic sections and polynomials as the functions of choice. The choice of conic sections is justified since conic sections are stigmatic surfaces under certain imaging geometries. The choice of polynomials from the point of view of surface description can be challenged. A polynomial surface description may link a designer's understanding of the wavefront aberrations and the surface description. The limitations of using multivariate polynomials are described by a theorem due to Mairhuber and Curtis from approximation theory. This thesis proposes and applies radial basis functions to represent free-form optical surfaces as an alternative to multivariate polynomials. We compare the polynomial descriptions to radial basis functions using the MTF criteria. The benefits of using radial basis functions for surface description are summarized in the context of specific head-worn displays. The benefits include, for example, the performance increase measured by the MTF, or the ability to increase the field of view or pupil size. Even though Zernike polynomials are a complete and orthogonal set of basis over the unit circle and they can be orthogonalized for rectangular or hexagonal pupils using Gram-Schmidt, taking practical considerations into account, such as optimization time and the maximum number of variables available in current raytrace codes, for the specific case of the single off-axis magnifier with a 3 mm pupil, 15 mm eye relief, 24 degree diagonal full field of view, we found the Gaussian radial basis functions to yield a 20% gain in the average MTF at 17 field points compared to a Zernike (using 66 terms) and an x-y polynomial up to and including 10th order. The linear combination of radial basis function representation is not limited to circular apertures. 
Visualization tools such as field map plots provided by nodal aberration theory have been
Maliassov, S.Y.
1996-12-31
An approach to the construction of an iterative method for solving systems of linear algebraic equations arising from nonconforming finite element discretizations with nonmatching grids for second order elliptic boundary value problems with anisotropic coefficients is considered. The technique suggested is based on decomposition of the original domain into nonoverlapping subdomains. The elliptic problem is presented in the macro-hybrid form with Lagrange multipliers at the interfaces between subdomains. A block diagonal preconditioner is proposed which is spectrally equivalent to the original saddle point matrix and has the optimal order of arithmetical complexity. The preconditioner includes blocks for preconditioning subdomain and interface problems. It is shown that constants of spectral equivalence are independent of values of coefficients and mesh step size.
2006-07-31
Bramble and Nitsche [12], and Bramble and Schatz [13, 14], among others, for examples of how this approach works in practice. Another approach (used also in...with integral 1. Then, by the Bramble–Hilbert Lemma, we have (43) |v − P_j|_{H^s(g̃^{-1}(ω_j))} ≤ C h_k^{m+1−s} |v|_{H^{m+1}(g̃^{-1}(ω_j))}, for all 0 ≤ s ≤ m+1. Consider...New York, 1972. [12] J.H. Bramble, J.A. Nitsche, A Generalized Ritz–Least–Squares Method for Dirichlet Problems, SIAM J. Numer. Anal., vol. 10, no. 1
Approximation of mechanical properties of sintered materials with discrete element method
NASA Astrophysics Data System (ADS)
Dosta, Maksym; Besler, Robert; Ziehdorn, Christian; Janßen, Rolf; Heinrich, Stefan
2017-06-01
The sintering process is a key step in ceramic processing and strongly influences the quality of the final product. The final shape, microstructure and mechanical properties, e.g. density, heat conductivity, strength and hardness, depend on the sintering process. In order to characterize the mechanical properties of sintered materials, in this contribution we present a microscale modelling approach. This approach consists of three stages: simulation of the sintering process, transition to the final structure, and modelling of the mechanical behaviour of the sintered material with the discrete element method (DEM). To validate the proposed simulation approach and to investigate products with varied internal structures, alumina powder was experimentally sintered at different temperatures. The comparison has shown that the simulation results are in very good agreement with experimental data and that the novel strategy can be used effectively for modelling the sintering process.
NASA Astrophysics Data System (ADS)
Syahroni, Edy; Suparmi, A.; Cari, C.
2017-01-01
The energy spectrum equation for the Killingbeck potential in a model of DNA and protein interactions was obtained using the WKB approximation method. The Killingbeck potential was substituted into the general equation of the WKB approximation to determine the energy. The general equation requires the values of the classical turning points to be completed. For the general form of the Killingbeck potential, the equation for the turning points becomes a cubic equation, and in this case we take only its real roots. Mathematically, this condition is satisfied when the discriminant D is less than or equal to 0: if D = 0, the cubic gives two values of the turning point, and if D < 0, it gives three. In this research we present both cases to complete the general equation for the energy.
Nakano, Masayoshi Minami, Takuya Fukui, Hitoshi Yoneda, Kyohei Shigeta, Yasuteru Kishi, Ryohei; Champagne, Benoît; Botek, Edith
2015-01-22
We develop a novel method for the calculation and the analysis of the one-electron reduced densities in open-shell molecular systems using the natural orbitals and approximate spin projected occupation numbers obtained from broken symmetry (BS), i.e., spin-unrestricted (U), density functional theory (DFT) calculations. The performance of this approximate spin projection (ASP) scheme is examined for the diradical character dependence of the second hyperpolarizability (γ) using several exchange-correlation functionals, i.e., hybrid and long-range corrected UDFT schemes. It is found that the ASP-LC-UBLYP method with a range separating parameter μ = 0.47 reproduces semi-quantitatively the strongly-correlated [UCCSD(T)] result for p-quinodimethane, i.e., the γ variation as a function of the diradical character.
NASA Astrophysics Data System (ADS)
Renac, Florent
2011-06-01
An algorithm for stabilizing linear iterative schemes is developed in this study. The recursive projection method is applied in order to stabilize divergent numerical algorithms. A criterion for selecting the divergent subspace of the iteration matrix with an approximate eigenvalue problem is introduced. The performance of the present algorithm is investigated in terms of storage requirements and CPU costs and is compared to the original Krylov criterion. Theoretical results on the divergent subspace selection accuracy are established. The method is then applied to the resolution of the linear advection-diffusion equation and to a sensitivity analysis for a turbulent transonic flow in the context of aerodynamic shape optimization. Numerical experiments demonstrate better robustness and faster convergence properties of the stabilization algorithm with the new criterion based on the approximate eigenvalue problem. This criterion requires only slight additional operations and memory which vanish in the limit of large linear systems.
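The stabilization idea can be sketched on a linear fixed-point iteration. In the sketch below, the iteration x <- Mx + b diverges because one eigenvalue of M exceeds 1 in modulus; the divergent subspace is approximated (here by a few power iterations, a stand-in for the approximate eigenvalue problem described above), a direct Newton-type solve is applied on that subspace, and the plain iteration is kept on its complement. The matrix, subspace dimension and tolerances are illustrative.

```python
import numpy as np

# Recursive-projection-method (RPM) sketch for a linear iteration x <- Mx + b.
rng = np.random.default_rng(0)
n = 50
M = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)
M[0, 0] = 1.8                              # plant one divergent mode
b = rng.standard_normal(n)
x_exact = np.linalg.solve(np.eye(n) - M, b)

# Approximate the divergent subspace with a few power iterations.
V, _ = np.linalg.qr(np.linalg.matrix_power(M, 25) @ rng.standard_normal((n, 2)))
H = V.T @ M @ V                            # small projected iteration matrix

x = np.zeros(n)
for _ in range(300):
    f = M @ x + b                          # one plain fixed-point sweep
    # Direct solve on the divergent subspace (Newton step for the p-part) ...
    p = np.linalg.solve(np.eye(2) - H, V.T @ (f - x))
    # ... plain iteration on the orthogonal complement.
    x = f - V @ (V.T @ f) + V @ (V.T @ x + p)

assert np.linalg.norm(x - x_exact) < 1e-8
```

Without the projection step the iteration blows up; with it, the error contracts at the rate of the remaining (convergent) spectrum.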
Khatonabadi, Maryam; Zhang, Di; Mathieu, Kelsey; Kim, Hyun J.; Lu, Peiyun; Cody, Dianna; DeMarco, John J.; Cagnon, Chris H.; McNitt-Gray, Michael F.
2012-01-01
Purpose: Most methods to estimate patient dose from computed tomography (CT) exams have been developed based on fixed tube current scans. However, in current clinical practice, many CT exams are performed using tube current modulation (TCM). Detailed information about the TCM function is difficult to obtain and therefore not easily integrated into patient dose estimate methods. The purpose of this study was to investigate the accuracy of organ dose estimates obtained using methods that approximate the TCM function using more readily available data compared to estimates obtained using the detailed description of the TCM function. Methods: Twenty adult female models generated from actual patient thoracic CT exams and 20 pediatric female models generated from whole body PET/CT exams were obtained with IRB (Institutional Review Board) approval. Detailed TCM function for each patient was obtained from projection data. Monte Carlo based models of each scanner and patient model were developed that incorporated the detailed TCM function for each patient model. Lungs and glandular breast tissue were identified in each patient model so that organ doses could be estimated from simulations. Three sets of simulations were performed: one using the original detailed TCM function (x, y, and z modulations), one using an approximation to the TCM function (only the z-axis or longitudinal modulation extracted from the image data), and the third was a fixed tube current simulation using a single tube current value which was equal to the average tube current over the entire exam. Differences from the reference (detailed TCM) method were calculated based on organ dose estimates. Pearson's correlation coefficients were calculated between methods after testing for normality. Equivalence test was performed to compare the equivalence limit between each method (longitudinal approximated TCM and fixed tube current method) and the detailed TCM method. Minimum equivalence limit was reported for
Zhang, Huaguang; Cui, Lili; Zhang, Xin; Luo, Yanhong
2011-12-01
In this paper, a novel data-driven robust approximate optimal tracking control scheme is proposed for unknown general nonlinear systems by using the adaptive dynamic programming (ADP) method. In the design of the controller, only available input-output data is required instead of known system dynamics. A data-driven model is established by a recurrent neural network (NN) to reconstruct the unknown system dynamics using available input-output data. By adding a novel adjustable term related to the modeling error, the resultant modeling error is first guaranteed to converge to zero. Then, based on the obtained data-driven model, the ADP method is utilized to design the approximate optimal tracking controller, which consists of the steady-state controller and the optimal feedback controller. Further, a robustifying term is developed to compensate for the NN approximation errors introduced by implementing the ADP method. Based on Lyapunov approach, stability analysis of the closed-loop system is performed to show that the proposed controller guarantees the system state asymptotically tracking the desired trajectory. Additionally, the obtained control input is proven to be close to the optimal control input within a small bound. Finally, two numerical examples are used to demonstrate the effectiveness of the proposed control scheme.
NASA Astrophysics Data System (ADS)
Efremenko, D.; Doicu, A.; Loyola, D.; Trautmann, T.
2012-04-01
Numerical problems appear when solving the radiative transfer equation for systems with strongly anisotropic scattering. To avoid oscillations in the solution a large number of discrete ordinates is required. As a consequence, the computing time increases considerably, as O(N^3), where N is the number of discrete ordinates. The performance can be improved partially by the delta-M method of Wiscombe [1], but this approach distorts the initial boundary problem and can lead to errors at small viewing angles. The efficiency of the discrete ordinate method with small-angle approximation for analyzing systems containing clouds and the coarse fraction of aerosols has been demonstrated by Budak and Korkin [2]. In this work we extend the plane-parallel version of the discrete ordinate method with small-angle approximation, as described in [2], to a pseudo-spherical atmosphere. The conventional pseudo-spherical technique relies on the separation of the total radiance into the direct solar beam and the diffuse radiance [3]; the direct solar radiance is treated in a spherical geometry, while the diffuse radiance is computed in a plane-parallel geometry. Taking into account that in the discrete ordinate method with small-angle approximation the radiance is separated into an 'anisotropic' and a smooth part, and that the direct solar beam is already included in the anisotropic part, we introduce a pseudo-spherical correction by subtracting the direct solar beam in a plane-parallel geometry and adding it in a pseudo-spherical geometry. In our simulations we considered a scenario typical for UV/VIS instruments like GOME-2: a spectral interval between 315 nm and 335 nm, and an inhomogeneous atmosphere containing a cloud layer with an asymmetry parameter of 0.9. The numerical results show that the differences between the pseudo-spherical and the plane-parallel models are about 10% for an incident angle of 80 degrees, 1% for 65 degrees and less than 0.3% for 50 degrees.
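The geometric difference between the plane-parallel and pseudo-spherical treatments of the direct beam can be sketched with a simple shell model: in plane-parallel geometry the slant optical depth is the vertical optical depth divided by cos(theta0), while in spherical-shell geometry each layer's path length follows from ray/sphere intersection. The layer optical depths and geometry below are illustrative, not the GOME-2 scenario.

```python
import numpy as np

# Direct-beam slant optical depth: plane-parallel vs spherical-shell geometry.
R_earth = 6371.0                                        # km
z = np.array([0.0, 1.0, 3.0, 6.0, 10.0, 20.0, 40.0])    # layer boundaries, km
dtau = np.array([0.3, 0.25, 0.2, 0.12, 0.08, 0.05])     # per-layer vertical tau
theta0 = np.deg2rad(80.0)                               # solar zenith angle

# Plane-parallel slant optical depth: simple secant scaling.
tau_pp = dtau.sum() / np.cos(theta0)

# Spherical shells: geometric path of the ray inside each shell.
p = R_earth * np.sin(theta0)        # impact parameter of the solar ray
r = R_earth + z                     # shell radii
s = np.sqrt(r**2 - p**2)            # distance along the ray to each boundary
ds = np.diff(s)                     # path length inside each shell
tau_sp = np.sum(dtau * ds / np.diff(z))   # scale vertical tau by path/thickness

# At large zenith angles the spherical slant path is shorter than the
# plane-parallel secant path, so the direct beam is less attenuated.
assert 0.0 < tau_sp < tau_pp
```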
NASA Astrophysics Data System (ADS)
Asenchik, O. D.
2017-02-01
A method for the approximate calculation of the interaction inverse matrix in the discrete dipole method is proposed. Knowledge of this matrix makes it possible to determine the optical response of a system to the action of an electromagnetic wave of arbitrary shape, which can be represented as a combination of vector spherical wave functions. The number of operations required to calculate the matrix with the proposed method is considerably smaller than in the case of its direct calculation. For the case of a change in the refractive index of the scattering particles, two methods of approximate calculation of the interaction inverse matrix are also proposed. This makes it possible to calculate the optical response of systems with new characteristics without directly solving a large system of equations. The accuracy of the methods is determined numerically for particles of spherical and cubic shape. It is shown that the methods are computationally efficient and can be used to calculate the polarization vectors inside particles and the extinction and absorption cross sections of systems.
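One generic way to update a known inverse after a small parameter change, without re-solving the full system, is a truncated Neumann series: if A' = A + D with D small, then A'^{-1} ≈ A^{-1} − A^{-1}DA^{-1} + (A^{-1}D)^2 A^{-1}. The sketch below illustrates this kind of inverse update with stand-in matrices; it is not the specific construction of the paper.

```python
import numpy as np

# Truncated Neumann-series update of an inverse after a small diagonal change
# (e.g. a refractive-index change altering dipole polarizabilities).
rng = np.random.default_rng(2)
n = 40
A = np.eye(n) + 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)
Ainv = np.linalg.inv(A)                       # assumed already known
D = np.diag(0.005 * rng.standard_normal(n))   # small parameter change

# Second-order Neumann update: no new large solve required.
X = Ainv @ D
Ainv_new = Ainv - X @ Ainv + X @ X @ Ainv

err = np.linalg.norm(Ainv_new - np.linalg.inv(A + D)) / np.linalg.norm(Ainv)
assert err < 1e-3
```

The truncation error scales like ||A^{-1}D||^3, so the update is accurate precisely when the change in material parameters is small.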
NASA Astrophysics Data System (ADS)
Viquerat, Jonathan; Lanteri, Stéphane
2016-01-01
During the last ten years, the discontinuous Galerkin time-domain (DGTD) method has progressively emerged as a viable alternative to the well-established finite-difference time-domain (FDTD) and finite-element time-domain (FETD) methods for the numerical simulation of electromagnetic wave propagation problems in the time domain. The method is now actively studied in various application contexts, including those requiring the modelling of light/matter interactions on the nanoscale. Several recent works have demonstrated the viability of the DGTD method for nanophotonics. In this paper we further demonstrate the capabilities of the method for the simulation of near-field plasmonic interactions by considering more particularly the possibility of combining a locally refined conforming tetrahedral mesh with a local adaptation of the approximation order.
NASA Astrophysics Data System (ADS)
Zhou, Kang; Hou, Jian; Fu, Hongfei; Wei, Bei; Liu, Yongge
2017-01-01
Relative permeability controls the flow of multiphase fluids in porous media. The estimation of relative permeability is generally solved by the Levenberg-Marquardt method with a finite-difference Jacobian approximation (LM-FD). However, the method can hardly be used in large-scale reservoirs because of its unbearably huge computational cost. To eliminate this problem, this paper introduces the idea of simultaneous perturbation to simplify the generation of the Jacobian matrix needed in the Levenberg-Marquardt procedure, and denotes the improved method LM-SP. It is verified by numerical experiments and then applied to laboratory experiments and a real commercial oilfield. The numerical experiment indicates that LM-SP uses only 16.1% of the computational cost to obtain a similar estimation of relative permeability and prediction of production performance compared with LM-FD. The laboratory experiment also shows that LM-SP achieves a 60.4% decrease in simulation cost and a 68.5% increase in estimation accuracy compared with earlier published results. This is mainly because LM-FD needs 2n simulations (n is the number of controlling knots) to approximate the Jacobian in each iteration, while only 2 simulations are enough in basic LM-SP. The convergence rate and estimation accuracy of LM-SP can be improved by averaging several simultaneous-perturbation Jacobian approximations, although this increases the computational cost of each iteration. Considering estimation accuracy and computational cost, averaging two Jacobian approximations is recommended in this paper. As the number of unknown controlling knots increases from 7 to 15, the number of simulation runs saved by LM-SP over LM-FD increases from 114 to 1164. This indicates that LM-SP is more suitable than LM-FD for multivariate problems. A field application further proves the applicability of LM-SP to large real fields as well as small laboratory problems.
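The LM-SP idea can be sketched on a toy least-squares problem: inside Levenberg-Marquardt, the finite-difference Jacobian (2n model runs per iteration) is replaced by simultaneous-perturbation estimates (2 runs each), here averaged over two perturbations as the study recommends. The toy model y = a·exp(−b·t) stands in for the reservoir simulator; all names, step sizes and damping parameters are illustrative.

```python
import numpy as np

# Levenberg-Marquardt with a simultaneous-perturbation (SP) Jacobian.
t = np.linspace(0.0, 4.0, 20)
y_obs = 2.0 * np.exp(-0.5 * t)                 # synthetic, noise-free data

def residual(x):
    return x[0] * np.exp(-x[1] * t) - y_obs

def sp_jacobian(x, deltas, c=1e-4):
    """Average of SP Jacobian estimates: each needs only 2 model runs."""
    J = np.zeros((t.size, x.size))
    for d in deltas:
        g = (residual(x + c * d) - residual(x - c * d)) / (2.0 * c)  # = J @ d
        J += np.outer(g, 1.0 / d)              # J_hat[i, j] = g_i / d_j
    return J / len(deltas)

x, lam = np.array([1.0, 1.0]), 1e-2
deltas = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]  # two perturbations
for _ in range(100):
    r = residual(x)
    J = sp_jacobian(x, deltas)
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
    if np.sum(residual(x + step) ** 2) < np.sum(r ** 2):
        x, lam = x + step, lam / 3.0           # accept, relax damping
    else:
        lam *= 3.0                             # reject, increase damping

assert np.allclose(x, [2.0, 0.5], atol=1e-4)
```

With only two unknowns, the two antithetic perturbations chosen here reproduce the Jacobian essentially exactly; with many unknowns each SP estimate is noisy, which is why averaging improves accuracy at the cost of extra runs.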
NASA Astrophysics Data System (ADS)
Fernández-Seivane, L.; Oliveira, M. A.; Sanvito, S.; Ferrer, J.
2006-08-01
We propose a computational method that drastically simplifies the inclusion of the spin-orbit interaction in density functional theory when implemented over localized basis sets. Our method is based on a well-known procedure for obtaining pseudopotentials from atomic relativistic ab initio calculations and on an on-site approximation for the spin-orbit matrix elements. We have implemented the technique in the SIESTA (Soler J M et al 2002 J. Phys.: Condens. Matter 14 2745-79) code, and show that it provides accurate results for the overall band-structure and splittings of group IV and III-V semiconductors as well as for 5d metals.
Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.
1997-01-01
Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525
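A classic example of the kind of algorithm the article surveys: taking both endpoints of every edge in a greedily built maximal matching yields a vertex cover at most twice the optimum, in linear time. The graph below is illustrative.

```python
# 2-approximation for minimum vertex cover via a maximal matching:
# any cover must contain at least one endpoint of each matched edge,
# so the returned cover is at most twice the optimum size.

def vertex_cover_2approx(edges):
    cover, matched = set(), set()
    for u, v in edges:                 # greedily build a maximal matching
        if u not in matched and v not in matched:
            matched |= {u, v}
            cover |= {u, v}
    return cover

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
cover = vertex_cover_2approx(edges)
assert all(u in cover or v in cover for u, v in edges)   # it is a cover
assert len(cover) <= 2 * 2   # the optimum for this graph is 2, e.g. {1, 3}
```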
NASA Astrophysics Data System (ADS)
Bruno, Luigi
2016-12-01
With the present paper, the author proposes a fitting method for approximating experimental data retrieved from any full-field technique. Unlike most fitting procedures, the method works on data distributed on a surface of any shape, and the mathematical model is able to take into account both the 3D shape of the surface and the experimental quantity to be fitted. The paper reports all the mathematical steps necessary for applying the method, which was tested on two sets of experimental data obtained by an out-of-plane speckle interferometer working in two different conditions of noise. The experimental results showed the capability of the method to work in the presence of high levels of noise.
NASA Astrophysics Data System (ADS)
Assous, Franck; Chaskalovic, Joël
2013-03-01
In this Note, we propose a new methodology based on exploratory data mining techniques to evaluate the errors due to the description of a given real system. First, we decompose this description error into four types of sources. Then, we construct databases of the entire information produced by different numerical approximation methods to assess and compare the significant differences between these methods, using techniques like decision trees, Kohonen maps, or neural networks. As an example, we characterize specific states of the real system for which we can locally assess the accuracy of two kinds of finite element methods. In this case, this allowed us to sharpen the classical Bramble-Hilbert theorem, which gives a global error estimate, whereas our approach gives a local error estimate.
NASA Technical Reports Server (NTRS)
Stiehl, A. L.; Haberman, R. C.; Cowles, J. H.
1988-01-01
An approximate method to compute the maximum deformation and permanent set of a beam subjected to shock wave loading in vacuo and in water was investigated. The method equates the maximum kinetic energy of the beam (and water) to the elastic-plastic work done by a static uniform load applied to the beam. Results for the water case indicate that the plastic deformation is controlled by the kinetic energy of the water. The simplified approach can result in significant savings in computer time, or it can expediently be used as a check of results from a more rigorous approach. The accuracy of the method is demonstrated by various examples of beams with simply supported and clamped boundary conditions.
NASA Astrophysics Data System (ADS)
Amalina Nisa Ariffin, Noor; Rosli, Norhayati; Syahidatul Ayuni Mazlan, Mazma; Samsudin, Adam
2017-09-01
Recently, modelling biological systems using stochastic differential equations (SDEs) has attracted growing interest among researchers. SDEs take random fluctuations into account, which makes finding their exact solutions difficult and has led to a growing body of research on the best numerical approaches for solving systems of SDEs. This paper examines the performance of the 4-stage stochastic Runge-Kutta (SRK4) method and a specific stochastic Runge-Kutta (SRKS) method of order 1.5 in approximating the solution of a stochastic model of a biological system. A comparative study of the SRK4 and SRKS methods is presented. A non-linear biological model is used to examine the performance of both methods, and the results of the numerical experiments are discussed.
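As a minimal baseline sketch of numerically solving an SDE for a biological system, the block below applies Euler-Maruyama (order 0.5, simpler than the SRK4/SRKS schemes compared in the paper) to a stochastic logistic model dX = rX(1 − X/K)dt + sX dW. The scheme choice and all parameters are illustrative, not taken from the paper.

```python
import numpy as np

# Euler-Maruyama for the stochastic logistic SDE
#   dX = r X (1 - X/K) dt + s X dW.
rng = np.random.default_rng(1)
r, K, s, dt, T = 1.0, 100.0, 0.05, 0.01, 10.0
n = int(T / dt)

def euler_maruyama(x0):
    x = x0
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))            # Brownian increment
        x += r * x * (1.0 - x / K) * dt + s * x * dW
    return x

x_end = euler_maruyama(5.0)
# The trajectory settles into noisy fluctuations near the carrying capacity.
assert 50.0 < x_end < 150.0
```

Higher-order stochastic Runge-Kutta schemes such as those studied in the paper reach a given accuracy with far fewer steps, at the price of evaluating additional stage values and iterated stochastic integrals.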
Nakatsuka, Yutaka; Nakajima, Takahito; Hirao, Kimihiko
2010-05-07
A cusp correction scheme for the relativistic zeroth-order regular approximation (ZORA) quantum Monte Carlo method is proposed by extending the nonrelativistic cusp correction scheme of Ma et al. [J. Chem. Phys. 122, 224322 (2005)]. In this scheme, molecular orbitals that appear in Slater-Jastrow type wave functions are replaced with exponential-type correction functions within a correction radius. Analysis of the behavior of the ZORA local energy in electron-nucleus collisions reveals that Kato's cusp condition is not applicable to the ZORA QMC method. The divergence of the electron-nucleus Coulomb potential term in the ZORA local energy is remedied by adding a new logarithmic correction term. This method is shown to be useful for improving the numerical stability of ZORA-QMC calculations using both Gaussian and Slater basis functions.
NASA Astrophysics Data System (ADS)
Moreno, Javier; Somolinos, Álvaro; Romero, Gustavo; González, Iván; Cátedra, Felipe
2017-08-01
A method for the rigorous computation of the electromagnetic scattering of large dielectric volumes is presented. One goal is to simplify the analysis of large dielectric targets with translational symmetries by taking advantage of their Toeplitz symmetry. The matrix-fill stage of the Method of Moments is then performed efficiently because the number of coupling terms to compute is reduced. The Multilevel Fast Multipole Method is applied to solve the problem. Structured meshes are obtained efficiently to approximate the dielectric volumes. The regular mesh grid is achieved by using parallelepipeds whose centres have been identified as internal to the target. The ray-casting algorithm is used to classify the parallelepiped centres. This classification may become a bottleneck when too many points are evaluated in volumes defined by parametric surfaces, so a hierarchical algorithm is proposed to minimize the number of evaluations. Measurements and analytical results are included for validation purposes.
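The ray-casting classification mentioned above works by casting a ray from each candidate point and counting boundary crossings: an odd count means the point is inside. A minimal 2D polygon version of the test (the 3D parametric-surface case follows the same parity idea):

```python
# Ray casting: count how many polygon edges a horizontal ray from (x, y)
# crosses; an odd count means the point is inside.

def inside(poly, x, y):
    n, crossings = len(poly), 0
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:                           # crossing to the right
                crossings += 1
    return crossings % 2 == 1

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
assert inside(square, 1.0, 1.0)
assert not inside(square, 3.0, 1.0)
```

For parametric surfaces each crossing test requires a ray/surface intersection solve, which is why the paper's hierarchical algorithm to prune the number of evaluated points pays off.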
Goldstein, Darlene R
2006-10-01
Studies of gene expression using high-density short oligonucleotide arrays have become a standard in a variety of biological contexts. Of the expression measures that have been proposed to quantify expression in these arrays, multi-chip-based measures have been shown to perform well. As gene expression studies increase in size, however, utilizing multi-chip expression measures is more challenging in terms of computing memory requirements and time. A strategic alternative to exact multi-chip quantification on a full large chip set is to approximate expression values based on subsets of chips. This paper introduces an extrapolation method, Extrapolation Averaging (EA), and a resampling method, Partition Resampling (PR), to approximate expression in large studies. An examination of properties indicates that subset-based methods can perform well compared with exact expression quantification. The focus is on short oligonucleotide chips, but the same ideas apply equally well to any array type for which expression is quantified using an entire set of arrays, rather than for only a single array at a time. Software implementing Partition Resampling and Extrapolation Averaging is under development as an R package for the BioConductor project.
Hromadka, T.V.; Guymon, G.L.
1985-01-01
An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to apply to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element method. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.
Regnier, D.; Verriere, M.; Dubray, N.; Schunck, N.
2015-11-30
In this study, we describe the software package FELIX that solves the equations of the time-dependent generator coordinate method (TDGCM) in N dimensions (N ≥ 1) under the Gaussian overlap approximation. The numerical resolution is based on the Galerkin finite element discretization of the collective space and the Crank–Nicolson scheme for time integration. The TDGCM solver is implemented entirely in C++. Several additional tools written in C++, Python or bash scripting language are also included for convenience. In this paper, the solver is tested with a series of benchmark calculations. We also demonstrate the ability of our code to handle a realistic calculation of fission dynamics.
NASA Astrophysics Data System (ADS)
Rahmah, Z.; Subartini, B.; Djauhari, E.; Anggriani, N.; Supriatna, A. K.
2017-03-01
Tuberculosis (TB) is a disease caused by the bacterium Mycobacterium tuberculosis. The World Health Organization (WHO) recommends administering the Bacillus Calmette-Guérin (BCG) vaccine to infants aged two to three months to protect them from infection. This research explores the numerical simulation of a forward-backward difference approximation method on a model of TB transmission that incorporates this vaccination program. The model considers five sub-population compartments: susceptible, vaccinated, exposed, infected, and recovered humans. We treat vaccination as a control variable. The simulation results show that vaccination can indeed reduce the number of infected humans.
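A forward-difference (explicit Euler) step for a five-compartment model with vaccination can be sketched as below. The compartment structure matches the abstract (susceptible, vaccinated, exposed, infected, recovered), but all parameter values, the vaccination rate u, and the specific transmission terms are illustrative stand-ins, not the paper's model.

```python
# Forward-difference step for an SVEIR-type TB model with vaccination u.
def step(S, V, E, I, R, dt, beta=0.5, u=0.3, mu=0.02, sigma=0.1, gamma=0.05):
    N = S + V + E + I + R
    dS = mu * N - beta * S * I / N - u * S - mu * S   # births, infection, vaccination
    dV = u * S - mu * V                               # vaccinated stay protected
    dE = beta * S * I / N - (sigma + mu) * E          # latent infection
    dI = sigma * E - (gamma + mu) * I                 # progression to active TB
    dR = gamma * I - mu * R                           # recovery
    return (S + dt * dS, V + dt * dV, E + dt * dE, I + dt * dI, R + dt * dR)

state = (990.0, 0.0, 0.0, 10.0, 0.0)
for _ in range(5000):
    state = step(*state, dt=0.1)

# With births balancing deaths, the scheme conserves total population.
assert abs(sum(state) - 1000.0) < 1e-6
```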
Portier, Benjamin; Pardo, Fabrice; Bouchon, Patrick; Haïdar, Riad; Pelouard, Jean-Luc
2013-04-01
We present a modal method for the fast analysis of 2D-layered gratings. It combines exact discrete formulations of Maxwell equations in 2D space with polynomial approximations of the constitutive equations, and provides a sparse formulation of the eigenvalue equations. In specific cases, the use of sparse matrices allows us to calculate the electromagnetic response while solving only a small fraction of the eigenmodes. This significantly increases computational speed up to 100×, as shown on numerical examples of both dielectric and metallic subwavelength gratings.
NASA Astrophysics Data System (ADS)
Rossi, Mariana; Liu, Hanchao; Paesani, Francesco; Bowman, Joel; Ceriotti, Michele
2014-11-01
Including quantum mechanical effects on the dynamics of nuclei in the condensed phase is challenging, because the complexity of exact methods grows exponentially with the number of quantum degrees of freedom. Efforts to circumvent these limitations can be traced back to two approaches: methods that treat a small subset of the degrees of freedom with rigorous quantum mechanics, considering the rest of the system as a static or classical environment, and methods that treat the whole system quantum mechanically, but using approximate dynamics. Here, we perform a systematic comparison between these two philosophies for the description of quantum effects in vibrational spectroscopy, taking the Embedded Local Monomer model and a mixed quantum-classical model as representatives of the first family of methods, and centroid molecular dynamics and thermostatted ring polymer molecular dynamics as examples of the latter. We use as benchmarks D2O doped with HOD and pure H2O at three distinct thermodynamic state points (ice Ih at 150 K, and the liquid at 300 K and 600 K), modeled with the simple q-TIP4P/F potential energy and dipole moment surfaces. With few exceptions the different techniques yield IR absorption frequencies that are consistent with one another within a few tens of cm-1. Comparison with classical molecular dynamics demonstrates the importance of nuclear quantum effects up to the highest temperature, and a detailed discussion of the discrepancies between the various methods lets us draw some (circumstantial) conclusions about the impact of the very different approximations that underlie them. Such cross validation between radically different approaches could indicate a way forward to further improve the state of the art in simulations of condensed-phase quantum dynamics.
NASA Astrophysics Data System (ADS)
Meshram, M. C.
2013-07-01
The Lewis-Kraichnan space-time version of Hopf functional formalism is considered for the investigation of turbulence with reacting and mixing chemical elements of type A + B → Product. The equations of motion are written in Fourier space. We first define the characteristic functional (or the moments generating functional) for the joint probability distribution of the velocity vector of the flow field and the reactants' concentration scalar fields and translate the equations of motion in terms of the differential equations for the characteristic functional. These differential equations for the characteristic functional are further written in terms of the second characteristic functional (or the cumulant generating functional). This helps us in obtaining the equations for various order cumulants. We note from these equations for cumulants the characteristic difficulty of the theory of turbulence that the (n+1)th order cumulant C(n+1) occurs in the equation for the dynamics of the nth order cumulant C(n). We use the factorized cumulant expansion approximation method for the present investigation. Under this approximation an arbitrary nth order cumulant C(n) is expressed in terms of the lower-order cumulants C(2), C(3) and C(n-1), and thus we obtain a closed but untruncated system of equations for the cumulants. On using the factorized fourth-cumulant approximation method, a closed set of equations for the reactants' energy spectrum functions and the reactants' energy transfer functions is derived. These equations are solved numerically and the similarity laws of the solutions are derived analytically. The statistical quantities such as the reactants' energy, the reactants' enstrophy, the reactants' scale of segregations and so on are calculated numerically and the statistical laws of these quantities are discussed. Also, the scope of this tool for investigation of turbulent phenomena not covered in the present study is discussed.
NASA Astrophysics Data System (ADS)
MacArt, Jonathan F.; Mueller, Michael E.
2016-12-01
Two formally second-order accurate, semi-implicit, iterative methods for the solution of scalar transport-reaction equations are developed for Direct Numerical Simulation (DNS) of low Mach number turbulent reacting flows. The first is a monolithic scheme based on a linearly implicit midpoint method utilizing an approximately factorized exact Jacobian of the transport and reaction operators. The second is an operator splitting scheme based on the Strang splitting approach. The accuracy properties of these schemes, as well as their stability, cost, and the effect of chemical mechanism size on relative performance, are assessed in two one-dimensional test configurations comprising an unsteady premixed flame and an unsteady nonpremixed ignition, which have substantially different Damköhler numbers and relative stiffness of transport to chemistry. All schemes demonstrate their formal order of accuracy in the fully-coupled convergence tests. Compared to a (non-)factorized scheme with a diagonal approximation to the chemical Jacobian, the monolithic, factorized scheme using the exact chemical Jacobian is shown to be both more stable and more economical. This is due to an improved convergence rate of the iterative procedure, and the difference between the two schemes in convergence rate grows as the time step increases. The stability properties of the Strang splitting scheme are demonstrated to outpace those of Lie splitting and monolithic schemes in simulations at high Damköhler number; however, in this regime, the monolithic scheme using the approximately factorized exact Jacobian is found to be the most economical at practical CFL numbers. The performance of the schemes is further evaluated in a simulation of a three-dimensional, spatially evolving, turbulent nonpremixed planar jet flame.
NASA Astrophysics Data System (ADS)
Vanhimbeeck, Marc
In this thesis a technique is developed to determine the low-energy eigensolutions of an unspecified few-level system which is coupled both linearly and quadratically to a finite collection of harmonic oscillators. The method is based on the second-order symmetrized Trotter-Suzuki approximation for exp(λH), with H standing for the Hamiltonian of the quantum mechanical system. Taking λ = -β (real), we use exp(-βH) as a projection operator which sorts out the low-energy eigenstates from the decomposition of an initially randomly constructed system state. Once the eigenstates are found, a second approximation on the time propagator exp(-itH) is applied in order to determine some relevant time-correlation functions for the systems under study. Next to a general formulation of the theory we also provide a study of some example systems. The coupled two-level system is shown to account phenomenologically for the anomalous isotope shift which was observed in the Raman spectrum of the tunneling Li+ defect in KCl. Furthermore, we examine the low-energy eigenvalues and Ham reduction factors for some of the cubic Jahn-Teller (JT) systems. The triplet systems T ⊗ τ2 and T ⊗ ε are studied with a linear JT interaction, but for the E ⊗ ε doublet system a quadratic warping is included in the description. The results are in good agreement with the literature and confirm the applicability of the method.
NASA Astrophysics Data System (ADS)
Irons, F. E.
2003-08-01
To reduce the general formula for lattice specific heat to Einstein's formula of 1907, one traditionally models the spectrum of lattice modes of vibration as a set of independent oscillators all of one frequency, ν1. Not only is this a poor representation of a real solid, but no formula is provided for the frequency ν1, which has to be determined empirically. We offer a new and more compelling method for reducing the general formula to Einstein's formula. The reduction involves a simple mathematical approximation, proceeds without any reference to independent oscillators all of one frequency, and leads to a formula for the characteristic frequency ν1, equal to the mean modal frequency. The mathematical approximation is valid at all but low temperatures, thereby providing insight into the failure of Einstein's formula at low temperatures. A simple extension of the new method leads to the Nernst-Lindemann formula for specific heat, proposed in 1911 on the basis of trial and error and currently without a sound theoretical basis. Empirical values (from the literature) of the frequencies that characterize the Einstein, the Nernst-Lindemann, and also the Debye formulae are all in support of the present theory.
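The Einstein formula discussed above is simple enough to evaluate directly. The sketch below (our own illustration, in units of the gas constant R, with an assumed Einstein temperature theta_E = h*nu1/k) reproduces the Dulong-Petit limit at high temperature and the exponential decay at low temperature:

```python
import math

def einstein_heat_capacity(T, theta_E, n_modes=3.0):
    """Molar heat capacity in units of R from Einstein's 1907 model:
    C/R = n * x^2 * e^x / (e^x - 1)^2, with x = theta_E / T."""
    x = theta_E / T
    return n_modes * x * x * math.exp(x) / (math.exp(x) - 1.0) ** 2

# High-temperature limit: C -> 3R (Dulong-Petit); low T: exponential decay,
# which is faster than the T^3 behavior observed experimentally.
```

At T much larger than theta_E the value approaches 3; the overly fast low-temperature decay is exactly the failure of Einstein's formula that the abstract refers to.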
Bai, Shuming; Xie, Weiwei; Zhu, Lili; Shi, Qiang
2014-02-28
We investigate the calculation of absorption spectra based on the mixed quantum classical Liouville equation (MQCL) methods. It has been shown previously that, for a single excited state, the averaged classical dynamics approach to calculate the linear and nonlinear spectroscopy can be derived using the MQCL formalism. This work focuses on problems involving multiple coupled excited state surfaces, such as in molecular aggregates and in the cases of coupled electronic states. A new equation of motion to calculate the dipole-dipole correlation functions within the MQCL formalism is first presented. Two approximate methods are then proposed to solve the resulting equations of motion. The first approximation results in a mean field approach, where the nuclear dynamics is governed by averaged forces depending on the instantaneous electronic states. A modification to the mean field approach based on a first order moment expansion is also proposed. Numerical examples, including calculations of the absorption spectra of Frenkel exciton models of molecular aggregates and of the pyrazine molecule, are presented.
Iterative methods for 3D implicit finite-difference migration using the complex Padé approximation
NASA Astrophysics Data System (ADS)
Costa, Carlos A. N.; Campos, Itamara S.; Costa, Jessé C.; Neto, Francisco A.; Schleicher, Jörg; Novais, Amélia
2013-08-01
Conventional implementations of 3D finite-difference (FD) migration use splitting techniques to accelerate performance and save computational cost. However, such techniques are plagued with numerical anisotropy that jeopardises the correct positioning of dipping reflectors in the directions not used for the operator splitting. We implement 3D downward continuation FD migration without splitting using a complex Padé approximation. In this way, the numerical anisotropy is eliminated at the expense of a computationally more intensive solution of a large-band linear system. We compare the performance of the iterative stabilized biconjugate gradient (BICGSTAB) and that of the multifrontal massively parallel direct solver (MUMPS). It turns out that the use of the complex Padé approximation not only stabilizes the solution, but also acts as an effective preconditioner for the BICGSTAB algorithm, reducing the number of iterations as compared to the implementation using the real Padé expansion. As a consequence, the iterative BICGSTAB method is more efficient than the direct MUMPS method when solving a single term in the Padé expansion. The results of both algorithms, here evaluated by computing the migration impulse response in the SEG/EAGE salt model, are of comparable quality.
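The iterative solver at the heart of this comparison is compact enough to sketch. Below is a minimal, unpreconditioned BiCGStab in the spirit of van der Vorst's algorithm, applied to an illustrative complex tridiagonal system (the matrix values are placeholders, not the actual migration operator):

```python
import numpy as np

def bicgstab(A, b, tol=1e-10, maxiter=500):
    """Minimal unpreconditioned BiCGStab for a complex system A x = b."""
    n = b.shape[0]
    x = np.zeros(n, dtype=complex)
    r = b - A @ x
    r_hat = r.copy()                      # shadow residual
    rho = alpha = omega = 1.0 + 0j
    v = p = np.zeros(n, dtype=complex)
    for _ in range(maxiter):
        rho_new = np.vdot(r_hat, r)
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho / np.vdot(r_hat, v)
        s = r - alpha * v
        t = A @ s
        omega = np.vdot(t, s) / np.vdot(t, t)
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
    return x

# Complex tridiagonal system standing in for one term of a complex Padé
# downward-continuation operator (illustrative values only).
n = 100
A = (np.diag((2.0 + 0.5j) * np.ones(n))
     + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1))
b = np.ones(n, dtype=complex)
x = bicgstab(A, b)
```

The complex shift on the diagonal mimics the stabilizing effect of the complex Padé expansion described in the abstract: it moves the spectrum away from zero, which is what makes the iteration converge quickly.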
NASA Technical Reports Server (NTRS)
Sidi, Avram
1992-01-01
Let F(z) be a vector-valued function F: C → C^N which is analytic at z = 0 and meromorphic in a neighborhood of z = 0, and let its Maclaurin series be given. We use vector-valued rational approximation procedures for F(z) that are based on its Maclaurin series, in conjunction with power iterations, to develop bona fide generalizations of the power method for an arbitrary N x N matrix that may be diagonalizable or not. These generalizations can be used to obtain simultaneously several of the largest distinct eigenvalues and the corresponding invariant subspaces, and we present a detailed convergence theory for them. In addition, it is shown that the generalized power methods of this work are equivalent to some Krylov subspace methods, among them the methods of Arnoldi and Lanczos. Thus, the theory provides a set of completely new results and constructions for these Krylov subspace methods. At the same time, this theory suggests a new mode of usage for these Krylov subspace methods, which was observed to possess computational advantages over their common mode of usage.
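For reference, the classical power method that these procedures generalize takes only a few lines. The sketch below estimates the dominant eigenvalue of a small symmetric matrix via the Rayleigh quotient (the matrix is an arbitrary example of ours):

```python
import numpy as np

def power_method(A, num_iter=200, seed=0):
    """Classic power iteration: returns an estimate of the eigenvalue of
    largest modulus via the Rayleigh quotient of the converged vector."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(num_iter):
        w = A @ v
        v = w / np.linalg.norm(w)   # renormalize to avoid overflow
    return v @ (A @ v)              # Rayleigh quotient (||v|| = 1)

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lam = power_method(A)               # dominant eigenvalue is (7 + sqrt(5))/2
```

The abstract's rational-approximation procedures extend exactly this iteration to recover several of the largest distinct eigenvalues at once, rather than only the dominant one.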
On the convergence of local approximations to pseudodifferential operators with applications
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1994-01-01
We consider the approximation of a class of pseudodifferential operators by sequences of operators which can be expressed as compositions of differential operators and their inverses. We show that the error in such approximations can be bounded in terms of the L^1 error in approximating a convolution kernel, and use this fact to develop convergence results. Our main result is a finite time convergence analysis of the Engquist-Majda Pade approximants to the square root of the d'Alembertian. We also show that no spatially local approximation to this operator can be convergent uniformly in time. We propose some temporally local but spatially nonlocal operators with better long time behavior. These are based on Laguerre and exponential series.
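The construction underlying such Padé approximants can be sketched generically: given Taylor coefficients of a function, the [m/n] approximant follows from a small linear system. The sketch below (a generic illustration, not the Engquist-Majda construction itself) recovers the classical [1/1] approximant of sqrt(1+x), namely (1 + 3x/4)/(1 + x/4):

```python
import numpy as np

def pade_from_series(c, m, n):
    """[m/n] Pade approximant P/Q from Taylor coefficients c[0..m+n].
    Denominator Q = 1 + b_1 x + ... + b_n x^n solves the linear system
    forcing f*Q - P = O(x^{m+n+1}); the numerator follows by convolution."""
    c = np.asarray(c, dtype=float)
    C = np.array([[c[m + k - j] if m + k - j >= 0 else 0.0
                   for j in range(1, n + 1)] for k in range(1, n + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(C, -c[m + 1:m + n + 1])))
    a = np.array([sum(b[j] * c[i - j] for j in range(min(i, n) + 1))
                  for i in range(m + 1)])
    return a, b  # numerator and denominator coefficients

# Taylor coefficients of sqrt(1 + x): 1, 1/2, -1/8, ...
a, b = pade_from_series([1.0, 0.5, -0.125], 1, 1)
```

A usage check: at x = 0.2 the [1/1] approximant gives 1.15/1.05, within a few parts in 10^4 of sqrt(1.2), already better than the truncated Taylor series of the same order.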
Higher-order numerical methods derived from three-point polynomial interpolation
NASA Technical Reports Server (NTRS)
Rubin, S. G.; Khosla, P. K.
1976-01-01
Higher-order collocation procedures resulting in tridiagonal matrix systems are derived from polynomial spline interpolation and Hermitian finite-difference discretization. The equations generally apply for both uniform and variable meshes. Hybrid schemes resulting from different polynomial approximations for first and second derivatives lead to the nonuniform mesh extension of the so-called compact or Pade difference techniques. A variety of fourth-order methods are described and this concept is extended to sixth-order. Solutions with these procedures are presented for the similar and non-similar boundary layer equations with and without mass transfer, the Burgers equation, and the incompressible viscous flow in a driven cavity. Finally, the interpolation procedure is used to derive higher-order temporal integration schemes and results are shown for the diffusion equation.
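A minimal instance of the compact (Padé) differencing idea on a uniform mesh is the classical fourth-order scheme for the first derivative, which couples the unknown derivatives through a tridiagonal system. The sketch below (our own illustration) supplies exact derivatives at the two boundary points for simplicity:

```python
import numpy as np

# Fourth-order compact (Padé) first-derivative scheme on a uniform mesh:
#   (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = 3 (f_{i+1} - f_{i-1}) / (4h)
n = 41
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.sin(2 * np.pi * x)
fp_exact = 2 * np.pi * np.cos(2 * np.pi * x)

m = n - 2  # interior unknowns; exact f' imposed at the two endpoints
A = (np.diag(np.ones(m))
     + np.diag(0.25 * np.ones(m - 1), 1)
     + np.diag(0.25 * np.ones(m - 1), -1))
rhs = 3.0 * (f[2:] - f[:-2]) / (4.0 * h)
rhs[0] -= 0.25 * fp_exact[0]     # move known boundary derivatives to the RHS
rhs[-1] -= 0.25 * fp_exact[-1]
fp = np.linalg.solve(A, rhs)
err = np.max(np.abs(fp - fp_exact[1:-1]))
```

With only a three-point stencil the implicit coupling buys fourth-order accuracy; on this mesh the maximum error is orders of magnitude below that of the explicit second-order central difference.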
NASA Astrophysics Data System (ADS)
Kabardiadi, Alexander; Greiner, Andreas; Assmann, Heiko; Baselt, Tobias; Hartmann, Peter
2016-03-01
The measurement of a wavefront is a powerful tool for characterizing optical systems. The most commonly used wavefront measurement technique is the method of local-light aberrometry, whose conventional realization is the Hartmann-Shack wavefront sensor. This method returns a matrix of spatially resolved gradients of the wavefront. However, the last and crucial step of the wavefront analysis is the reconstruction of the wavefront from the measured data, and questions of measurement preparation and design are of equal interest. The work presented here describes the comparison between a Fourier iteration algorithm and the Zernike approximation method for wavefront reconstruction in relation to the measurement design. In the context of this work, the term "design of the measurement" refers to the number and relative positions of the measurement points. The behavior of the wavefront reconstruction methods was analyzed using Monte Carlo simulations. The optimum point distribution was found, and a validation parameter describing the impact of measurement errors on the analysis results was determined. Based on this parameter, a Monte Carlo simulation was realized to design the experiment with the highest accuracy. The technique of white-noise injection was implemented in the reconstruction routine and the propagation of errors was analyzed. The presented comparison technique was applied to determine the optimum measurement positions over the beam's surface.
Wang, S.W.; Georgopoulos, P.G.; Li, G.; Rabitz, H.
1998-07-01
Atmospheric chemistry mechanisms are the most computationally intensive components of photochemical air quality simulation models (PAQSMs). The development of a photochemical mechanism that accurately describes atmospheric chemistry while being computationally efficient for use in PAQSMs is a difficult undertaking that has traditionally been pursued through semiempirical (diagnostic) lumping approaches. The limitations of these diagnostic approaches are often associated with inaccuracies due to the fact that the lumped mechanisms have typically been optimized to fit the concentration profile of a specific species. Formal mathematical methods for model reduction have the potential (demonstrated through past applications in other areas) to provide very effective solutions to the need for computational efficiency combined with accuracy. Such methods, which can be used to condense a chemical mechanism, include kinetic lumping and domain separation. An application of the kinetic lumping method, using the direct constrained approximate lumping (DCAL) approach, to the atmospheric photochemistry of alkanes is presented in this work. It is shown that the lumped mechanism generated through the application of the DCAL method has the potential to overcome the limitations of existing semiempirical approaches, especially in relation to the consistent and accurate calculation of the time-concentration profiles of multiple species.
NASA Technical Reports Server (NTRS)
Mair, R. W.; Sen, P. N.; Hurlimann, M. D.; Patz, S.; Cory, D. G.; Walsworth, R. L.
2002-01-01
We report a systematic study of xenon gas diffusion NMR in simple model porous media, random packs of mono-sized glass beads, and focus on three specific areas peculiar to gas-phase diffusion. These topics are: (i) diffusion of spins on the order of the pore dimensions during the application of the diffusion encoding gradient pulses in a PGSE experiment (breakdown of the narrow pulse approximation and imperfect background gradient cancellation), (ii) the ability to derive long length scale structural information, and (iii) effects of finite sample size. We find that the time-dependent diffusion coefficient, D(t), of the imbibed xenon gas at short diffusion times in small beads is significantly affected by the gas pressure. In particular, as expected, we find smaller deviations between measured D(t) and theoretical predictions as the gas pressure is increased, resulting from reduced diffusion during the application of the gradient pulse. The deviations are then completely removed when water D(t) is observed in the same samples. The use of gas also allows us to probe D(t) over a wide range of length scales and observe the long time asymptotic limit which is proportional to the inverse tortuosity of the sample, as well as the diffusion distance where this limit takes effect (approximately 1-1.5 bead diameters). The Pade approximation can be used as a reference for expected xenon D(t) data between the short and the long time limits, allowing us to explore deviations from the expected behavior at intermediate times as a result of finite sample size effects. Finally, the application of the Pade interpolation between the long and the short time asymptotic limits yields a fitted length scale (the Pade length), which is found to be approximately 0.13b for all bead packs, where b is the bead diameter.
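The Padé interpolation referred to here is usually written in the form of Latour et al. (1993), joining the Mitra short-time law to the inverse-tortuosity long-time limit. The sketch below assumes that form; the parameter names are ours (SV is the surface-to-volume ratio, alpha the tortuosity, theta the fitted crossover time related to the Padé length l_p by theta = l_p**2 / D0):

```python
import math

def pade_diffusion(t, D0, alpha, SV, theta):
    """Padé interpolation of the time-dependent diffusion coefficient D(t)
    between the Mitra short-time law, D(t)/D0 = 1 - (4/(9*sqrt(pi))) * SV
    * sqrt(D0*t), and the long-time tortuosity limit D(t)/D0 -> 1/alpha."""
    c = (4.0 / (9.0 * math.sqrt(math.pi))) * SV * math.sqrt(D0)
    x = c * math.sqrt(t) + (1.0 - 1.0 / alpha) * t / theta
    return D0 * (1.0 - (1.0 - 1.0 / alpha) * x / ((1.0 - 1.0 / alpha) + x))

# Short times recover the sqrt(t) surface-scattering law; long times
# plateau at D0/alpha, the inverse tortuosity limit named in the abstract.
```

Fitting theta to measured D(t) is what yields the Padé length discussed above (approximately 0.13b for the bead packs).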
NASA Astrophysics Data System (ADS)
Deta, U. A.; Suparmi, Cari
2013-09-01
The approximate analytical solution of the Schrodinger equation in D dimensions for the trigonometric Scarf potential was investigated using the Nikiforov-Uvarov method. The bound-state energies are given in closed form, and the corresponding wave functions for arbitrary l-states in D dimensions are formulated in terms of generalized Jacobi polynomials. Examples of the bound-state energies and wave functions in 3, 4, and 5 dimensions are presented for the ground state through the second excited state. Increasing the number of dimensions increases the bound-state energy and the amplitude of the wave function, and the presence of the trigonometric Scarf potential raises the energy spectrum.
NASA Astrophysics Data System (ADS)
Yi, Longtao; Sun, Tianxi; Wang, Kai; Qin, Min; Yang, Kui; Wang, Jinbang; Liu, Zhiguo
2016-08-01
Confocal three-dimensional micro X-ray fluorescence (3D MXRF) is an excellent surface analysis technology. For a confocal structure, only the X-rays from the confocal volume can be detected. Confocal 3D MXRF has been widely used for analysing elements, the distribution of elements and 3D image of some special samples. However, it has rarely been applied to analysing surface topography by surface scanning. In this paper, a confocal 3D MXRF technology based on polycapillary X-ray optics was proposed for determining surface topography. A corresponding surface adaptive algorithm based on a progressive approximation method was designed to obtain surface topography. The surface topography of the letter "R" on a coin of the People's Republic of China and a small pit on painted pottery were obtained. The surface topography of the "R" and the pit are clearly shown in the two figures. Compared with the method in our previous study, it exhibits a higher scanning efficiency. This approach could be used for two-dimensional (2D) elemental mapping or 3D elemental voxel mapping measurements as an auxiliary method. It also could be used for analysing elemental mapping while obtaining the surface topography of a sample in 2D elemental mapping measurement.
Ball, R D
2001-11-01
We describe an approximate method for the analysis of quantitative trait loci (QTL) based on model selection from multiple regression models with trait values regressed on marker genotypes, using a modification of the easily calculated Bayesian information criterion to estimate the posterior probability of models with various subsets of markers as variables. The BIC-delta criterion, with the parameter delta increasing the penalty for additional variables in a model, is further modified to incorporate prior information, and missing values are handled by multiple imputation. Marginal probabilities for model sizes are calculated, and the posterior probability of nonzero model size is interpreted as the posterior probability of existence of a QTL linked to one or more markers. The method is demonstrated on analysis of associations between wood density and markers on two linkage groups in Pinus radiata. Selection bias, which is the bias that results from using the same data to both select the variables in a model and estimate the coefficients, is shown to be a problem for commonly used non-Bayesian methods for QTL mapping, which do not average over alternative possible models that are consistent with the data.
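A toy version of the model-selection machinery can be sketched as follows. The BIC-delta penalty is implemented here simply as an extra delta*k*log(n) term per additional variable, which is our reading of the criterion rather than the paper's exact definition, and the data are simulated (one marker with a true effect):

```python
import numpy as np
from itertools import combinations

def bic_linear(y, X):
    """BIC for ordinary least squares with Gaussian errors:
    n*log(RSS/n) + k*log(n), with k the number of fitted coefficients."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

def model_posteriors(y, X, delta=0.0):
    """Approximate posterior probabilities over all marker subsets via
    exp(-BIC_delta/2); delta > 0 stiffens the per-variable penalty."""
    n, p = X.shape
    ones = np.ones((n, 1))
    scores = {}
    for size in range(p + 1):
        for subset in combinations(range(p), size):
            Xs = np.hstack([ones, X[:, list(subset)]])
            scores[subset] = bic_linear(y, Xs) + delta * size * np.log(n)
    best = min(scores.values())
    w = {s: np.exp(-0.5 * (v - best)) for s, v in scores.items()}
    z = sum(w.values())
    return {s: v / z for s, v in w.items()}

# Simulated data: only marker 0 affects the trait.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
y = 2.0 * X[:, 0] + rng.standard_normal(200)
post = model_posteriors(y, X)
p_qtl = 1.0 - post[()]   # posterior probability of a nonzero model size
```

Averaging over all subsets in this way is exactly what protects against the selection bias the abstract describes: no single model is both selected and then treated as if it were certain.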
NASA Astrophysics Data System (ADS)
Büsing, Henrik
2013-04-01
Two-phase flow in porous media occurs in various settings, such as the sequestration of CO2 in the subsurface, radioactive waste management, the flow of oil or gas in hydrocarbon reservoirs, or groundwater remediation. To model the sequestration of CO2, we consider a fully coupled formulation of the system of nonlinear, partial differential equations. For the solution of this system, we employ the Box method after Huber & Helmig (2000) for the space discretization and the fully implicit Euler method for the time discretization. After linearization with Newton's method, it remains to solve a linear system in every Newton step. We compare different iterative methods (BiCGStab, GMRES, AGMG, c.f., [Notay (2012)]) combined with different preconditioners (ILU0, ASM, Jacobi, and AMG as preconditioner) for the solution of these systems. The required Jacobians can be obtained elegantly with automatic differentiation (AD) [Griewank & Walther (2008)], a source code transformation providing exact derivatives. We compare the performance of the different iterative methods with their respective preconditioners for these linear systems. Furthermore, we analyze linear systems obtained by approximating the Jacobian with finite differences in terms of Newton steps per time step, steps of the iterative solvers and the overall solution time. Finally, we study the influence of heterogeneities in permeability and porosity on the performance of the iterative solvers and their robustness in this respect. References [Griewank & Walther(2008)] Griewank, A. & Walther, A., 2008. Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation, SIAM, Philadelphia, PA, 2nd edn. [Huber & Helmig(2000)] Huber, R. & Helmig, R., 2000. Node-centered finite volume discretizations for the numerical simulation of multiphase flow in heterogeneous porous media, Computational Geosciences, 4, 141-164. [Notay(2012)] Notay, Y., 2012. Aggregation-based algebraic multigrid for convection
NASA Astrophysics Data System (ADS)
Neese, Frank; Wennmohs, Frank; Hansen, Andreas
2009-03-01
Coupled-electron pair approximations (CEPAs) and coupled-pair functionals (CPFs) have been popular in the 1970s and 1980s and have yielded excellent results for small molecules. Recently, interest in CEPA and CPF methods has been renewed. It has been shown that these methods lead to competitive thermochemical, kinetic, and structural predictions. They greatly surpass second order Møller-Plesset and popular density functional theory based approaches in accuracy and are intermediate in quality between CCSD and CCSD(T) in extended benchmark studies. In this work an efficient production level implementation of the closed shell CEPA and CPF methods is reported that can be applied to medium sized molecules in the range of 50-100 atoms and up to about 2000 basis functions. The internal space is spanned by localized internal orbitals. The external space is greatly compressed through the method of pair natural orbitals (PNOs) that was also introduced by the pioneers of the CEPA approaches. Our implementation also makes extended use of density fitting (or resolution of the identity) techniques in order to speed up the laborious integral transformations. The method is called local pair natural orbital CEPA (LPNO-CEPA) (LPNO-CPF). The implementation is centered around the concepts of electron pairs and matrix operations. Altogether three cutoff parameters are introduced that control the size of the significant pair list, the average number of PNOs per electron pair, and the number of contributing basis functions per PNO. With the conservatively chosen default values of these thresholds, the method recovers about 99.8% of the canonical correlation energy. This translates to absolute deviations from the canonical result of only a few kcal mol-1. Extended numerical test calculations demonstrate that LPNO-CEPA (LPNO-CPF) has essentially the same accuracy as parent CEPA (CPF) methods for thermochemistry, kinetics, weak interactions, and potential energy surfaces but is up to 500
Frozen Gaussian approximation-based two-level methods for multi-frequency Schrödinger equation
NASA Astrophysics Data System (ADS)
Lorin, E.; Yang, X.
2016-10-01
In this paper, we develop two-level numerical methods for the time-dependent Schrödinger equation (TDSE) in the multi-frequency regime. This work is motivated by attosecond science (Corkum and Krausz, 2007), which refers to the interaction of short and intense laser pulses with quantum particles generating wide frequency spectrum light, and allowing for the coherent emission of attosecond pulses (1 attosecond = 10^-18 s). The principle of the proposed methods consists in decomposing a wavefunction into a low/moderate frequency (quantum) contribution, and a high frequency contribution exhibiting a semi-classical behavior. Low/moderate frequencies are computed through the direct solution of the quantum TDSE on a coarse mesh, and the high frequency contribution is computed by the frozen Gaussian approximation (Herman and Kluk, 1984). This paper is devoted to the derivation of consistent, accurate and efficient algorithms performing such a decomposition and the time evolution of the wavefunction in the multi-frequency regime. Numerical simulations are provided to illustrate the accuracy and efficiency of the derived algorithms.
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1993-01-01
The convergence characteristics of various approximate factorizations for the 3D Euler and Navier-Stokes equations are examined using the von Neumann stability analysis method. Three upwind-difference based factorizations and several central-difference based factorizations are considered for the Euler equations. In the upwind factorizations, the flux-vector splitting methods of both Steger-Warming and van Leer are considered. Analysis of the Navier-Stokes equations is performed only on the Beam and Warming central-difference scheme. The range of CFL numbers over which each factorization is stable is presented for one-, two-, and three-dimensional flow. Also presented for each factorization is the CFL number at which the maximum eigenvalue is minimized, for all Fourier components, as well as for the high frequency range only. The latter is useful for predicting the effectiveness of multigrid procedures with these schemes as smoothers. Further, local mode analysis is performed to test the suitability of using a uniform flow field in the stability analysis. Some inconsistencies in the results from previous analyses are resolved.
NASA Astrophysics Data System (ADS)
Rodrigue, Stephen Michael
Transport rates for the Kelvin-Stuart Cat Eyes driven flow are calculated using the lobe transport theory of Rom-Kedar and Wiggins through application of the Topological Approximation Method (TAM) developed by Rom-Kedar. Numerical studies by Ottino (1989) and Tsega, Michaelides, and Eschenazi (2001) of the driven or perturbed flow indicated frequency dependence of the transport. One goal of the present research is to derive an analytical expression for the transport and to study its dependence upon the perturbation frequency ω. The Kelvin-Stuart Cat Eyes dynamical system consists of an infinite string of equivalent vortices exhibiting a 2π spatial periodicity in x with an unperturbed streamfunction of H(x, y) = ln(cosh y + A cos x) - ln(1 + A). The driven flow has perturbation terms of a sin(ωt) in both the x and y directions. Lobe dynamics transport theory states that transport occurs through the transfer of turnstile lobes, and that transport rates are equal to the area of the lobes transferred. Lobes may intersect, necessitating the calculation and removal of lobe intersection areas. The TAM requires the use of a Melnikov integral function, the zeroes of which locate the lobes, and a Whisker map (Chirikov 1979), which locates lobe intersection points. An analytical expression for the Melnikov integral function is derived for the Kelvin-Stuart Cat Eyes driven flow. Using the derived analytical Melnikov integral function, derived expressions for the periods of internal and external orbits as functions of H, and the Whisker map, the Topological Approximation Method is applied to the Kelvin-Stuart driven flow to calculate transport rates for a range of frequencies from ω = 1.21971 to ω = 3.27532 as the structure index L is varied from L = 2 to L = 10. Transport rates per iteration, and cumulative transport per iteration, are calculated for 100 iterations for both internal and external lobes. The transport rates exhibit strong frequency dependence in the frequency
USDA-ARS?s Scientific Manuscript database
The ACCF90 computer program, which approximates reliability for animal models, was modified to estimate reliabilities for sire-maternal grandsire (MGS) models. Accuracy of the approximation was tested on a calving-ease data set for 2,968 bulls for which the inverse of the coefficient matrix could be...
Filobello-Nino, Uriel; Vazquez-Leal, Hector; Cervantes-Perez, Juan; Benhammouda, Brahim; Perez-Sesma, Agustin; Hernandez-Martinez, Luis; Jimenez-Fernandez, Victor Manuel; Herrera-May, Agustin Leobardo; Pereyra-Diaz, Domitilo; Marin-Hernandez, Antonio; Huerta Chua, Jesus
2014-01-01
This article proposes the Laplace Transform Homotopy Perturbation Method (LT-HPM) to find an approximate solution for the problem of an axisymmetric Newtonian fluid squeezed between two large parallel plates. A comparison of figures for the approximate and exact solutions shows that the proposed solutions, besides being handy, are highly accurate, and therefore that LT-HPM is extremely efficient.
Approximation of periodic functions in the classes H_q^Ω by linear methods
Pustovoitov, Nikolai N
2012-01-31
The following result is proved: if approximations in the norm of L_∞ (of H_1) of functions in the classes H_∞^Ω (in H_1^Ω, respectively) by some linear operators have the same order of magnitude as the best approximations, then the set of norms of these operators is unbounded. Also Bernstein's and the Jackson-Nikol'skii inequalities are proved for trigonometric polynomials with spectra in the sets Q(N) (in Γ(N,Ω)). Bibliography: 15 titles.
NASA Astrophysics Data System (ADS)
Yin, George; Wang, Le Yi; Zhang, Hongwei
2014-12-01
Stochastic approximation methods have found extensive and diversified applications. Recent emergence of networked systems and cyber-physical systems has generated renewed interest in advancing stochastic approximation into a general framework to support algorithm development for information processing and decisions in such systems. This paper presents a survey on some recent developments in stochastic approximation methods and their applications. Using connected vehicles in platoon formation and coordination as a platform, we highlight some traditional and new methodologies of stochastic approximation algorithms and explain how they can be used to capture essential features in networked systems. Distinct features of networked systems with randomly switching topologies, dynamically evolving parameters, and unknown delays are presented, and control strategies are provided.
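The classical building block behind the surveyed algorithms is the Robbins-Monro iteration: step toward a root of an unknown mean function using only noisy observations, with a decreasing gain sequence. A minimal self-contained sketch; the target function, gain schedule, and noise model below are invented for illustration and are not taken from the paper.

```python
import random

def robbins_monro(noisy_g, x0, n_steps=5000, a=1.0, seed=0):
    """Robbins-Monro iteration x_{k+1} = x_k - a_k * Y_k, where Y_k is a noisy
    observation of g(x_k) and the gains a_k = a/(k+1) satisfy the classical
    conditions (sum a_k = inf, sum a_k^2 < inf)."""
    rng = random.Random(seed)
    x = x0
    for k in range(n_steps):
        x -= (a / (k + 1)) * noisy_g(x, rng)
    return x

# Hypothetical toy problem: find the root of g(x) = 2(x - 3), observed with
# unit Gaussian noise; the iterate should settle near x = 3.
x_hat = robbins_monro(lambda x, rng: 2.0 * (x - 3.0) + rng.gauss(0.0, 1.0), x0=0.0)
```

The same skeleton underlies the networked-system algorithms in the paper; what changes there is that the observation comes from neighbors over a randomly switching communication topology.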
NASA Astrophysics Data System (ADS)
Sarwar, S.; Rashidi, M. M.
2016-07-01
This paper deals with the investigation of analytical approximate solutions for two-term fractional-order diffusion, wave-diffusion, and telegraph equations. The fractional derivatives are defined in the Caputo sense, with orders belonging to the intervals [0,1], (1,2), and [1,2], respectively. In this paper, we extend the optimal homotopy asymptotic method (OHAM) to two-term fractional-order wave-diffusion equations. A highly accurate approximate solution is obtained in series form using this extended method. The approximate solution obtained by OHAM is compared with the exact solution. It is observed that OHAM is a powerful and convergent method for the solution of nonlinear fractional-order time-dependent partial differential problems. The numerical results show that the applied method is explicit, effective, and easy to use for handling more general fractional-order wave-diffusion, diffusion, and telegraph problems.
Rogers, J.; Porter, K.
2012-03-01
This paper updates previous work that describes time period-based and other approximation methods for estimating the capacity value of wind power and extends it to include solar power. The paper summarizes various methods presented in utility integrated resource plans, regional transmission organization methodologies, regional stakeholder initiatives, regulatory proceedings, and academic and industry studies. Time period-based approximation methods typically measure the contribution of a wind or solar plant at the time of system peak - sometimes over a period of months or the average of multiple years.
NASA Astrophysics Data System (ADS)
Liu, Q.; Liu, F.; Turner, I.; Anh, V.
2007-03-01
In this paper we present a random walk model for approximating a Lévy-Feller advection-dispersion process, governed by the Lévy-Feller advection-dispersion differential equation (LFADE). We show that the random walk model converges to LFADE by use of a properly scaled transition to vanishing space and time steps. We propose an explicit finite difference approximation (EFDA) for LFADE, resulting from the Grünwald-Letnikov discretization of fractional derivatives. As a result of the interpretation of the random walk model, the stability and convergence of EFDA for LFADE in a bounded domain are discussed. Finally, some numerical examples are presented to show the application of the present technique.
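The Grünwald-Letnikov discretization mentioned above replaces a fractional derivative with a weighted sum over the function's history, and the weights obey a simple one-term recurrence. A minimal sketch; the grid, order, and test function are illustrative choices, not the paper's advection-dispersion setup.

```python
import math

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k * binom(alpha, k), generated
    by the recurrence w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1)/k)."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def gl_derivative(f_vals, alpha, h):
    """Order-alpha fractional derivative at the last point of a uniform grid
    (spacing h), from the history-weighted Grunwald-Letnikov sum."""
    n = len(f_vals) - 1
    w = gl_weights(alpha, n)
    return sum(w[k] * f_vals[n - k] for k in range(n + 1)) / h ** alpha

# Illustrative check: the order-1/2 Riemann-Liouville derivative of f(x) = x
# is x^(1/2) / Gamma(3/2); at x = 1 this equals 1 / Gamma(1.5).
h = 0.001
d_half = gl_derivative([k * h for k in range(1001)], 0.5, h)
exact = 1.0 / math.gamma(1.5)
```

In an explicit finite difference scheme such as the paper's EFDA, sums of this form appear at every grid point and time step, which is why the recurrence for the weights matters.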
NASA Technical Reports Server (NTRS)
Barnwell, R. W.; Davis, R. M.
1975-01-01
A user's manual is presented for a computer program which calculates inviscid flow about lifting configurations in the free-stream Mach-number range from zero to low supersonic. Angles of attack of the order of the configuration thickness-length ratio and less can be calculated. An approximate formulation was used which accounts for shock waves, leading-edge separation and wind-tunnel wall effects.
NASA Astrophysics Data System (ADS)
Espinoza-Ojeda, O. M.; Santoyo, E.; Andaverde, J.
2011-06-01
Approximate and rigorous solutions of seven heat transfer models were statistically examined, for the first time, to estimate stabilized formation temperatures (SFT) of geothermal and petroleum boreholes. Constant linear and cylindrical heat source models were used to describe the heat flow (either conductive or conductive/convective) involved during a borehole drilling. A comprehensive statistical assessment of the major error sources associated with the use of these models was carried out. The mathematical methods (based on approximate and rigorous solutions of heat transfer models) were thoroughly examined by using four statistical analyses: (i) the use of linear and quadratic regression models to infer the SFT; (ii) the application of statistical tests of linearity to evaluate the actual relationship between bottom-hole temperatures and time function data for each selected method; (iii) the comparative analysis of SFT estimates between the approximate and rigorous predictions of each analytical method using a β ratio parameter to evaluate the similarity of both solutions, and (iv) the evaluation of accuracy in each method using statistical tests of significance, and deviation percentages between 'true' formation temperatures and SFT estimates (predicted from approximate and rigorous solutions). The present study also enabled us to determine the sensitivity parameters that should be considered for a reliable calculation of SFT, as well as to define the main physical and mathematical constraints where the approximate and rigorous methods could provide consistent SFT estimates.
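The simplest of the approximate methods examined here fits bottom-hole temperature linearly against a time function and reads the stabilized formation temperature off the intercept. The sketch below assumes the classical Horner time function ln((t_c + Δt)/Δt), a standard choice but not necessarily the exact one used in each of the seven models; the data are synthetic.

```python
import math

def horner_sft(shut_in_times, temps, circulation_time):
    """Estimate stabilized formation temperature (SFT) by least-squares
    regression of bottom-hole temperature against the Horner time function
    x = ln((t_c + dt)/dt); the intercept (x -> 0, i.e. dt -> inf) is the SFT."""
    xs = [math.log((circulation_time + dt) / dt) for dt in shut_in_times]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(temps) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, temps))
    slope = sxy / sxx
    return mean_y - slope * mean_x   # intercept = temperature as dt -> infinity

# Synthetic example: true SFT 120 C, slope -15, circulation time 4 h.
tc = 4.0
dts = [2.0, 4.0, 8.0, 16.0, 32.0]
temps = [120.0 - 15.0 * math.log((tc + dt) / dt) for dt in dts]
sft = horner_sft(dts, temps, tc)
```

The paper's statistical tests of linearity are essentially checks on whether such a regression is justified for a given model's time function.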
Hermeline, F.
1993-05-01
This paper deals with the approximation of Vlasov-Poisson and Vlasov-Maxwell equations. We present two coupled particle-finite volume methods which use the properties of Delaunay-Voronoi meshes. These methods are applied to benchmark calculations and engineering problems such as simulation of electron injector devices. 42 refs., 13 figs.
NASA Astrophysics Data System (ADS)
Predescu, Cristian
2004-05-01
In this paper I provide significant mathematical evidence in support of the existence of direct short-time approximations of any polynomial order for the computation of density matrices of physical systems described by arbitrarily smooth and bounded from below potentials. While for Theorem 2, which is “experimental,” I only provide a “physicist’s” proof, I believe the present development is mathematically sound. As a verification, I explicitly construct two short-time approximations to the density matrix having convergence orders 3 and 4, respectively. Furthermore, in Appendix B, I derive the convergence constant for the trapezoidal Trotter path integral technique. The convergence orders and constants are then verified by numerical simulations. While the two short-time approximations constructed are of sure interest to physicists and chemists involved in Monte Carlo path integral simulations, the present paper is also aimed at the mathematical community, who might find the results interesting and worth exploring. I conclude the paper by discussing the implications of the present findings with respect to the solvability of the dynamical sign problem appearing in real-time Feynman path integral simulations.
Alcock, J. (Dept. of Environmental Science); Wagner, M.E. (Geology); Srogi, L.A. (Dept. of Geology and Astronomy)
1993-03-01
Post-Taconian transcurrent faulting in the Appalachian Piedmont presents a significant problem to workers attempting to reconstruct the Early Paleozoic tectonic history. One solution to the problem is to identify blocks that lie between zones of transcurrent faulting and that retain the Early Paleozoic arrangement of litho-tectonic units. The authors propose that a comparison of metamorphic histories of different units can be used to recognize blocks of this type. The Wilmington Complex (WC) arc terrane, the pre-Taconian Laurentian margin rocks (LM) exposed in basement-cored massifs, and the Wissahickon Group metapelites (WS) that lie between them are three litho-tectonic units in the PA-DE Piedmont that comprise a block assembled in the Early Paleozoic. Evidence supporting this interpretation includes: (1) Metamorphic and lithologic differences across the WC-WS contact and detailed geologic mapping of the contact that suggest thrusting of the WC onto the WS; (2) A metamorphic gradient in the WS with highest grade, including spinel-cordierite migmatites, adjacent to the WC indicating that peak metamorphism of the WS resulted from heating by the WC; (3) A metamorphic discontinuity at the WS-LM contact, evidence for emplacement of the WS onto the LM after WS peak metamorphism; (4) A correlation of mineral assemblage in the Cockeysville Marble of the LM with distance from the WS indicating that peak metamorphism of the LM occurred after emplacement of the WS; and (5) Early Paleozoic lower intercept zircon ages for the LM that are interpreted to date Taconian regional metamorphism. Analysis of metamorphism and its timing relative to thrusting suggest that the WS was associated with the WC before the WS was emplaced onto the LM during the Taconian. It follows that these units form a block that has not been significantly disrupted by later transcurrent shear.
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1985-01-01
Rayleigh-Ritz methods for the approximation of the natural modes for a class of vibration problems involving flexible beams with tip bodies using subspaces of piecewise polynomial spline functions are developed. An abstract operator-theoretic formulation of the eigenvalue problem is derived and its spectral properties investigated. The existing theory for spline-based Rayleigh-Ritz methods applied to elliptic differential operators and the approximation properties of interpolatory splines are used to argue convergence and establish rates of convergence. An example and numerical results are discussed.
NASA Astrophysics Data System (ADS)
Gerber, Paul R.; Mark, Alan E.; van Gunsteren, Wilfred F.
1993-06-01
Derivatives of free energy differences have been calculated by molecular dynamics techniques. The systems under study were ternary complexes of Trimethoprim (TMP) with dihydrofolate reductases of E. coli and chicken liver, containing the cofactor NADPH. Derivatives are taken with respect to modification of TMP, with emphasis on altering the 3-, 4- and 5-substituents of the phenyl ring. A linear approximation allows the encompassing of a whole set of modifications in a single simulation, as opposed to a full perturbation calculation, which requires a separate simulation for each modification. In the case considered here, the proposed technique requires a factor of 1000 less computing effort than a full free energy perturbation calculation. For the linear approximation to yield a significant result, one has to find ways of choosing the perturbation evolution, such that the initial trend mirrors the full calculation. The generation of new atoms requires a careful treatment of the singular terms in the non-bonded interaction. The result can be represented by maps of the changed molecule, which indicate whether complex formation is favoured under movement of partial charges and change in atom polarizabilities. Comparison with experimental measurements of inhibition constants reveals fair agreement in the range of values covered. However, detailed comparison fails to show a significant correlation. Possible reasons for the most pronounced deviations are given.
NASA Astrophysics Data System (ADS)
Meier, Patrick; Rauhut, Guntram
2015-12-01
Three different approaches for calculating Franck-Condon factors beyond the harmonic approximation are compared and discussed in detail. Duschinsky effects are accounted for either by a rotation of the initial or final wavefunctions - which are obtained from state-specific configuration-selective vibrational configuration interaction calculations - or by a rotation of the underlying multi-dimensional potential energy surfaces being determined from explicitly correlated coupled-cluster approaches. An analysis of the Duschinsky effects in dependence on the rotational angles and the anisotropy of the wavefunction is provided. Benchmark calculations for the photoelectron spectra of ClO₂, HS₂⁻ and ZnOH⁻ are presented. An application of the favoured approach for calculating Franck-Condon factors to the oxidation of Zn(H₂O)⁺ and Zn₂(H₂O)⁺ demonstrates its applicability to systems with more than three atoms.
A B-Spline-Based Collocation Method to Approximate the Solutions to the Equations of Fluid Dynamics
M. D. Landon; R. W. Johnson
1999-07-01
The potential of a B-spline collocation method for numerically solving the equations of fluid dynamics is discussed. It is known that B-splines can resolve complex curves with drastically fewer data than can their standard shape function counterparts. This feature promises to allow much faster numerical simulations of fluid flow than standard finite volume/finite element methods without sacrificing accuracy. An example channel flow problem is solved using the method.
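The data economy of B-splines rests on the Cox-de Boor recurrence for the basis functions. A pure-Python sketch of that recurrence; the knot vector and evaluation point are illustrative, and a production collocation solver would assemble a linear system on top of such bases rather than stop here.

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: the i-th B-spline basis function of degree k
    on the given knot vector, evaluated at t."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    value = 0.0
    if knots[i + k] != knots[i]:
        value += (t - knots[i]) / (knots[i + k] - knots[i]) \
                 * bspline_basis(i, k - 1, t, knots)
    if knots[i + k + 1] != knots[i + 1]:
        value += (knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1]) \
                 * bspline_basis(i + 1, k - 1, t, knots)
    return value

# Clamped cubic knot vector on [0, 4]: seven basis functions; at any interior
# point the (at most four) nonzero cubic bases sum to one (partition of unity).
knots = [0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4]
unity = sum(bspline_basis(i, 3, 1.5, knots) for i in range(7))
```

Because each basis function is supported on only a few knot spans, collocation matrices built from them stay banded, which is part of the speed advantage claimed above.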
A B-Spline-Based Collocation Method to Approximate the Solutions to the Equations of Fluid Dynamics
Johnson, Richard Wayne; Landon, Mark Dee
1999-07-01
The potential of a B-spline collocation method for numerically solving the equations of fluid dynamics is discussed. It is known that B-splines can resolve curves with drastically fewer data than can their standard shape function counterparts. This feature promises to allow much faster numerical simulations of fluid flow than standard finite volume/finite element methods without sacrificing accuracy. An example channel flow problem is solved using the method.
NASA Technical Reports Server (NTRS)
Jones, Alun R.
1940-01-01
This report has been prepared in response to a request for information from an aircraft company. A typical example was selected for the presentation of an approximate method of calculating the relative humidity required to prevent frosting on the inside of a plastic window in a pressure-type cabin on a high-speed airplane. The results of the study are reviewed.
NASA Astrophysics Data System (ADS)
Barry, D. A.; Parlange, J.-Y.; Li, L.; Jeng, D.-S.; Crapper, M.
2005-10-01
The solution to the Green and Ampt infiltration equation is expressible in terms of the Lambert W_{-1} function. Approximations for Green and Ampt infiltration are thus derivable from approximations for the W_{-1} function and vice versa. An infinite family of asymptotic expansions to W_{-1} is presented. Although these expansions do not converge near the branch point of the W function (which corresponds to Green-Ampt infiltration with immediate ponding), a method is presented for approximating W_{-1} that is exact at the branch point and asymptotically, with interpolation between these limits. Some existing and several new simple and compact yet robust approximations applicable to Green-Ampt infiltration and flux are presented, the most accurate of which has a maximum relative error of 5 × 10^-5 %. This error is orders of magnitude lower than that of any existing analytical approximation.
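In dimensionless form the Green-Ampt relation is the implicit equation t = I - ln(1 + I), and the Lambert-function solution is I = -1 - W_{-1}(-exp(-(1 + t))). The sketch below solves W_{-1} by a deliberately simple bisection; the paper's approximations are far more refined, and this only illustrates the explicit form away from the branch point.

```python
import math

def lambertw_m1(x):
    """Lower branch W_{-1} of the Lambert W function for x in (-1/e, 0),
    found by bisection of w * exp(w) = x over w <= -1, where w * exp(w)
    is monotone decreasing. Deliberately simple, not optimized."""
    lo, hi = -745.0, -1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * math.exp(mid) > x:
            lo = mid          # value above target: move toward -1
        else:
            hi = mid
    return 0.5 * (lo + hi)

def green_ampt_infiltration(t):
    """Dimensionless cumulative Green-Ampt infiltration I(t): the implicit
    relation t = I - ln(1 + I) solved explicitly as
    I = -1 - W_{-1}(-exp(-(1 + t)))."""
    return -1.0 - lambertw_m1(-math.exp(-(1.0 + t)))

# Consistency check against the implicit form at t = 2.
I2 = green_ampt_infiltration(2.0)
```

At t = 0 the argument sits exactly at the branch point -1/e, where W_{-1} = -1 and I = 0, which is the immediate-ponding limit discussed above.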
Gai, Litao; Bilige, Sudao; Jie, Yingmo
2016-01-01
In this paper, we successfully obtained the exact solutions and the approximate analytic solutions of the (2 + 1)-dimensional KP equation based on Lie symmetry, the extended tanh method and the homotopy perturbation method. In the first part, we obtained the symmetries of the (2 + 1)-dimensional KP equation based on the Wu differential characteristic set algorithm and used them to reduce the equation. In the second part, we constructed abundant exact travelling wave solutions by using the extended tanh method. These solutions are expressed in terms of hyperbolic functions, trigonometric functions and rational functions respectively. It should be noted that when the parameters take special values, some solitary wave solutions are derived from the hyperbolic function solutions. Finally, we apply the homotopy perturbation method to obtain approximate analytic solutions based on four kinds of initial conditions.
NASA Astrophysics Data System (ADS)
Kopka, Piotr; Wawrzynczak, Anna; Borysiewicz, Mieczyslaw
2016-11-01
In this paper the Bayesian methodology, known as Approximate Bayesian Computation (ABC), is applied to the problem of the atmospheric contamination source identification. The algorithm input data are on-line arriving concentrations of the released substance registered by the distributed sensors network. This paper presents the Sequential ABC algorithm in detail and tests its efficiency in estimation of probabilistic distributions of atmospheric release parameters of a mobile contamination source. The developed algorithms are tested using the data from Over-Land Atmospheric Diffusion (OLAD) field tracer experiment. The paper demonstrates estimation of seven parameters characterizing the contamination source, i.e.: contamination source starting position (x,y), the direction of the motion of the source (d), its velocity (v), release rate (q), start time of release (ts) and its duration (td). The online-arriving new concentrations dynamically update the probability distributions of search parameters. The atmospheric dispersion Second-order Closure Integrated PUFF (SCIPUFF) Model is used as the forward model to predict the concentrations at the sensors locations.
Daly, Aidan C; Gavaghan, David J; Holmes, Chris; Cooper, Jonathan
2015-12-01
As cardiac cell models become increasingly complex, a correspondingly complex 'genealogy' of inherited parameter values has also emerged. The result has been the loss of a direct link between model parameters and experimental data, limiting both reproducibility and the ability to re-fit to new data. We examine the ability of approximate Bayesian computation (ABC) to infer parameter distributions in the seminal action potential model of Hodgkin and Huxley, for which an immediate and documented connection to experimental results exists. The ability of ABC to produce tight posteriors around the reported values for the gating rates of sodium and potassium ion channels validates the precision of this early work, while the highly variable posteriors around certain voltage dependency parameters suggests that voltage clamp experiments alone are insufficient to constrain the full model. Despite this, Hodgkin and Huxley's estimates are shown to be competitive with those produced by ABC, and the variable behaviour of posterior parametrized models under complex voltage protocols suggests that with additional data the model could be fully constrained. This work will provide the starting point for a full identifiability analysis of commonly used cardiac models, as well as a template for informative, data-driven parametrization of newly proposed models.
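The basic ABC building block behind studies like this one is rejection sampling: draw parameters from the prior, simulate data, and keep draws whose summary statistic lands within a tolerance of the observation. A toy Gaussian-mean sketch; the model, prior, and tolerance are invented for illustration and are far simpler than the Hodgkin-Huxley setting.

```python
import random

def abc_rejection(observed, simulate, prior_sample, distance, eps,
                  n_draws=20000, seed=1):
    """Plain rejection ABC: keep prior draws whose simulated summary lies
    within eps of the observed summary."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if distance(simulate(theta, rng), observed) <= eps:
            accepted.append(theta)
    return accepted

# Hypothetical toy model: infer a Gaussian mean (true value 2.0, sd 1) from
# the sample mean of 20 observations, under a flat prior on (-5, 5).
post = abc_rejection(
    observed=2.0,
    simulate=lambda th, rng: sum(rng.gauss(th, 1.0) for _ in range(20)) / 20.0,
    prior_sample=lambda rng: rng.uniform(-5.0, 5.0),
    distance=lambda a, b: abs(a - b),
    eps=0.1,
)
post_mean = sum(post) / len(post)
```

The width of the accepted sample around each parameter is exactly the diagnostic the authors exploit: tight posteriors indicate well-constrained parameters, diffuse ones indicate non-identifiability under the given protocol.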
Daly, Aidan C.; Holmes, Chris
2015-01-01
As cardiac cell models become increasingly complex, a correspondingly complex ‘genealogy’ of inherited parameter values has also emerged. The result has been the loss of a direct link between model parameters and experimental data, limiting both reproducibility and the ability to re-fit to new data. We examine the ability of approximate Bayesian computation (ABC) to infer parameter distributions in the seminal action potential model of Hodgkin and Huxley, for which an immediate and documented connection to experimental results exists. The ability of ABC to produce tight posteriors around the reported values for the gating rates of sodium and potassium ion channels validates the precision of this early work, while the highly variable posteriors around certain voltage dependency parameters suggests that voltage clamp experiments alone are insufficient to constrain the full model. Despite this, Hodgkin and Huxley's estimates are shown to be competitive with those produced by ABC, and the variable behaviour of posterior parametrized models under complex voltage protocols suggests that with additional data the model could be fully constrained. This work will provide the starting point for a full identifiability analysis of commonly used cardiac models, as well as a template for informative, data-driven parametrization of newly proposed models. PMID:27019736
de Stadler, M; Chand, K
2007-11-12
Gas centrifuges exhibit very complex flows. Within the centrifuge there is a rarefied region, a transition region, and a region with an extreme density gradient. The flow moves at hypersonic speeds and shock waves are present. However, the flow is subsonic in the axisymmetric plane. The analysis may be simplified by treating the flow as a perturbation of wheel flow. Wheel flow implies that the fluid is moving as a solid body. With the very large pressure gradient, the majority of the fluid is located very close to the rotor wall and moves at an azimuthal velocity proportional to its distance from the rotor wall; there is no slipping in the azimuthal plane. The fluid can be modeled as incompressible and subsonic in the axisymmetric plane. By treating the centrifuge as long, end effects can be appropriately modeled without performing a detailed boundary layer analysis. Onsager's pancake approximation is used to construct a simulation to model fluid flow in a gas centrifuge. The governing 6th order partial differential equation is broken down into an equivalent coupled system of three equations and then solved numerically. In addition to a discussion on the baseline solution, known problems and future work possibilities are presented.
Berke, Ethan M; Shi, Xun
2009-04-29
Travel time is an important metric of geographic access to health care. We compared strategies of estimating travel times when only subject ZIP code data were available. Using simulated data from New Hampshire and Arizona, we estimated travel times to nearest cancer centers by using: 1) geometric centroid of ZIP code polygons as origins, 2) population centroids as origin, 3) service area rings around each cancer center, assigning subjects to rings by assuming they are evenly distributed within their ZIP code, 4) service area rings around each center, assuming the subjects follow the population distribution within the ZIP code. We used travel times based on street addresses as true values to validate estimates. Population-based methods have smaller errors than geometry-based methods. Within categories (geometry or population), centroid and service area methods have similar errors. Errors are smaller in urban areas than in rural areas. Population-based methods are superior to the geometry-based methods, with the population centroid method appearing to be the best choice for estimating travel time. Estimates in rural areas are less reliable.
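The population-weighted centroid that performed best is computable directly from (lat, lon, population) tuples, with great-circle distance standing in for travel time in this sketch. The study used street-network travel times; the planar weighted average below is an approximation that is adequate at ZIP-code scale.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points
    given in degrees (haversine formula, mean Earth radius 6371 km)."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2.0 * r * math.asin(math.sqrt(a))

def population_centroid(points):
    """Population-weighted centroid of (lat, lon, population) tuples; a planar
    average, fine for areas the size of a ZIP code."""
    total = sum(p for _, _, p in points)
    lat = sum(la * p for la, _, p in points) / total
    lon = sum(lo * p for _, lo, p in points) / total
    return lat, lon

# One degree of longitude at the equator is about 111.2 km.
deg_km = haversine_km(0.0, 0.0, 0.0, 1.0)
```

Replacing the geometric polygon centroid with this population-weighted origin is the whole difference between the geometry-based and population-based families of estimators compared above.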
NASA Astrophysics Data System (ADS)
Hayashi, Nobuhiko; Nagai, Yuki; Higashi, Yoichi
2010-12-01
We theoretically discuss the magnetic-field-angle dependence of the zero-energy density of states (ZEDOS) in superconductors. Point-node and line-node superconducting gaps on spherical and cylindrical Fermi surfaces are considered. The Doppler-shift (DS) method and the Kramer-Pesch approximation (KPA) are used to calculate the ZEDOS. Numerical results show that consequences of the DS method are corrected by the KPA.
Moilanen, Atte; Wintle, Brendan A
2007-04-01
Aggregation of reserve networks is generally considered desirable for biological and economic reasons: aggregation reduces negative edge effects and facilitates metapopulation dynamics, which plausibly leads to improved persistence of species. Economically, aggregated networks are less expensive to manage than fragmented ones. Therefore, many reserve-design methods use qualitative heuristics, such as distance-based criteria or boundary-length penalties to induce reserve aggregation. We devised a quantitative method that introduces aggregation into reserve networks. We call the method the boundary-quality penalty (BQP) because the biological value of a land unit (grid cell) is penalized when the unit occurs close enough to the edge of a reserve such that a fragmentation or edge effect would reduce population densities in the reserved cell. The BQP can be estimated for any habitat model that includes neighborhood (connectivity) effects, and it can be introduced into reserve selection software in a standardized manner. We used the BQP in a reserve-design case study of the Hunter Valley of southeastern Australia. The BQP resulted in a more highly aggregated reserve network structure. The degree of aggregation required was specified by observed (albeit modeled) biological responses to fragmentation. Estimating the effects of fragmentation on individual species and incorporating estimated effects in the objective function of reserve-selection algorithms is a coherent and defensible way to select aggregated reserves. We implemented the BQP in the context of the Zonation method, but it could as well be implemented into any other spatially explicit reserve-planning framework.
Covariant approximation averaging
NASA Astrophysics Data System (ADS)
Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2015-06-01
We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf = 2+1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
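The unbiasedness argument at the heart of AMA can be seen in a scalar caricature: average a cheap, biased approximation over many source points, then correct it with the mean difference (exact - approximation) measured on a few expensive points. The functions below are invented stand-ins, not lattice observables, and the sketch omits the covariant-symmetry machinery that makes the correction cheap in practice.

```python
import random

def bias_corrected_average(exact, approx, n_total=1000, n_exact=10, seed=2):
    """AMA-style estimator (scalar caricature): average the cheap
    approximation over many points, then add a correction <exact - approx>
    measured on a few points, so that the expectation of the whole estimator
    equals that of the exact observable."""
    rng = random.Random(seed)
    points = [rng.uniform(0.0, 1.0) for _ in range(n_total)]
    cheap = sum(approx(x) for x in points) / n_total
    correction = sum(exact(x) - approx(x) for x in points[:n_exact]) / n_exact
    return cheap + correction

# Invented stand-ins: an "exact" observable x^2 and a cheap version with a
# constant bias; the corrected average should estimate E[x^2] = 1/3.
result = bias_corrected_average(exact=lambda x: x * x,
                                approx=lambda x: x * x + 0.05)
```

The gain comes from the same place as in AMA: the cheap term controls the statistical error, while the rare exact evaluations only need to pin down the (small) bias.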
Approximating Integrals Using Probability
ERIC Educational Resources Information Center
Maruszewski, Richard F., Jr.; Caudle, Kyle A.
2005-01-01
As part of a discussion on Monte Carlo methods, the authors outline how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
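The technique reduces to estimating the integral of f over (a, b) as (b - a) times the expectation of f under a uniform draw. The article's examples use Visual Basic; an equivalent Python sketch (the integrand here is a standard illustration, not one of the article's examples):

```python
import math
import random

def mc_integral(f, a, b, n=100000, seed=3):
    """Estimate the integral of f over (a, b) as (b - a) * E[f(U)], with U
    uniform on (a, b), by averaging f at n random points."""
    rng = random.Random(seed)
    return (b - a) * sum(f(rng.uniform(a, b)) for _ in range(n)) / n

est = mc_integral(math.sin, 0.0, math.pi)   # exact value of the integral is 2
```

The error shrinks like 1/sqrt(n) regardless of dimension, which is the probabilistic payoff the paper is driving at.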
NASA Technical Reports Server (NTRS)
Jordon, D. E.; Patterson, W.; Sandlin, D. R.
1985-01-01
The XV-15 Tilt Rotor Research Aircraft download phenomenon was analyzed. This phenomenon is a direct result of the two rotor wakes impinging on the wing upper surface when the aircraft is in the hover configuration. For this study the analysis proceeded along two lines. First was a method whereby results from actual hover tests of the XV-15 aircraft were combined with drag coefficient results from wind tunnel tests of a wing that was representative of the aircraft wing. Second, an analytical method was used that modeled the airflow caused by the two rotors. Formulas were developed in such a way that a computer program could be used to calculate the axial velocities. These axial velocities were then used in conjunction with the aforementioned wind tunnel drag coefficient results to produce download values. An attempt was made to validate the analytical results by modeling a model rotor system for which direct download values were determined.
Karagiannis, Georgios; Lin, Guang
2014-02-15
Generalized polynomial chaos (gPC) expansions allow us to represent the solution of a stochastic system using a series of polynomial chaos basis functions. The number of gPC terms increases dramatically as the dimension of the random input variables increases. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs when the corresponding deterministic solver is computationally expensive, evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solutions, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points, via (1) the Bayesian model average (BMA) or (2) the median probability model, and their construction as spatial functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; while the latter provides a sparse representation of the stochastic solutions by evaluating the expansion on a subset of dominating gPC bases. Moreover, the proposed methods quantify the importance of the gPC bases in the probabilistic sense through inclusion probabilities. We design a Markov chain Monte Carlo (MCMC) sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed methods are suitable for, but not restricted to, problems whose stochastic solutions are sparse in the stochastic space with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the accuracy and performance of the proposed methods and make comparisons with other approaches on solving elliptic SPDEs with 1-, 14- and 40-random dimensions.
Karagiannis, Georgios; Lin, Guang
2014-02-15
Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs if the evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) the Bayesian model average or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions, while the latter additionally provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method and make comparisons with others on elliptic stochastic partial differential equations with 1-, 14- and 40-dimensional random spaces.
NASA Astrophysics Data System (ADS)
Lorin, E.; Yang, X.; Antoine, X.
2016-06-01
This paper develops efficient domain decomposition methods for the linear Schrödinger equation beyond the semiclassical regime, where the rescaled Planck constant is not small enough for asymptotic methods (e.g. geometric optics) to produce good accuracy, but direct methods (e.g. finite differences) are too computationally expensive. This belongs to the category of computing middle-frequency wave propagation, where neither asymptotic nor direct methods can be used directly with both efficiency and accuracy. Motivated by recent works of the authors on absorbing boundary conditions (Antoine et al. (2014) [13] and Yang and Zhang (2014) [43]), we introduce Semiclassical Schwarz Waveform Relaxation methods (SSWR), which seamlessly integrate semiclassical approximations into Schwarz waveform relaxation. Two versions are proposed, based respectively on Herman-Kluk propagation and on geometric optics; we prove the convergence of these methods and provide numerical evidence of their efficiency and accuracy.
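For readers unfamiliar with Schwarz-type domain decomposition, the classical alternating overlapping Schwarz iteration for a 1-D Poisson problem captures the basic mechanism that waveform-relaxation variants build on. This sketch is ours and involves none of the semiclassical machinery; it splits the interval into two overlapping subdomains and alternately solves on each, exchanging interface values.

```python
import numpy as np

def schwarz_poisson(n=101, overlap=10, iters=50):
    """Alternating overlapping Schwarz for -u'' = 1 on (0,1), u(0)=u(1)=0,
    using second-order finite differences on two overlapping subdomains.
    Returns the max-norm error against the exact solution u = x(1-x)/2."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    u = np.zeros(n)
    mid = n // 2
    a, b = mid - overlap, mid + overlap          # interface node indices

    def solve(lo, hi, ul, ur):
        """Solve the FD system on interior nodes lo+1..hi-1 with Dirichlet
        data ul at node lo and ur at node hi."""
        m = hi - lo - 1
        A = (np.diag(2 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
             - np.diag(np.ones(m - 1), -1)) / h**2
        rhs = np.ones(m)
        rhs[0] += ul / h**2
        rhs[-1] += ur / h**2
        return np.linalg.solve(A, rhs)

    for _ in range(iters):
        u[1:b] = solve(0, b, 0.0, u[b])              # left subdomain (0, x_b)
        u[a + 1:n - 1] = solve(a, n - 1, u[a], 0.0)  # right subdomain (x_a, 1)
    return np.max(np.abs(u - 0.5 * x * (1 - x)))
```

Because the error contracts geometrically with a rate set by the overlap width, a few dozen sweeps drive it to round-off here; the paper's contribution is replacing the expensive subdomain solves with semiclassical propagation.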
NASA Astrophysics Data System (ADS)
Huang, H.; Meng, D. Q.; Lai, X. C.; Liu, T. W.; Long, Y.; Hu, Q. M.
2014-08-01
The combined interatomic pair potentials of TiZrNi, including Morse and Inversion Gaussian, are successfully built by the lattice inversion method. Some experimental controversies on atomic occupancies of sites 6-8 in W-TiZrNi are analyzed and settled with these inverted potentials. According to the characteristics of composition and site preference occupancy of W-TiZrNi, two stable structural models of W-TiZrNi are proposed and the possibilities are partly confirmed by experimental data. The stabilities of W-TiZrNi mostly result from the contribution of Zr atoms to the phonon densities of states in lower frequencies.
NASA Astrophysics Data System (ADS)
Warner, Paul
2017-09-01
For any appreciable radiation source, such as a nuclear reactor core or radiation physics accelerator, there will be the safety requirement to shield operators from the effects of the radiation from the source. Both the size and weight of the shield need to be minimised to reduce costs (and to increase the space available for the maintenance envelope on a plant). This needs to be balanced against legal radiation dose safety limits and the requirement to reduce the dose to operators As Low As Reasonably Practicable (ALARP). This paper describes a method that can be used, early in a shield design, to scope the design and provide a practical estimation of the size of the shield by optimising the shield internals. In particular, a theoretical model representative of a small reactor is used to demonstrate that the primary shielding radius, thickness of the primary shielding inner wall and the thicknesses of two steel inner walls, can be set using the Lagrange multiplier method with a constraint on the total flux on the outside of the shielding. The results from the optimisation are presented and an RZ finite element transport theory calculation is used to demonstrate that, using the optimised geometry, the constraint is achieved.
NASA Astrophysics Data System (ADS)
Vitanov, Nikolay K.
2011-03-01
We discuss the class of equations $\sum_{i,j=0}^{m} A_{ij}(u)\,\frac{\partial^{i}u}{\partial t^{i}}\frac{\partial^{j}u}{\partial t^{j}} + \sum_{k,l=0}^{n} B_{kl}(u)\,\frac{\partial^{k}u}{\partial x^{k}}\frac{\partial^{l}u}{\partial x^{l}} = C(u)$, where $A_{ij}(u)$, $B_{kl}(u)$ and $C(u)$ are functions of $u(x,t)$ as follows: (i) $A_{ij}$, $B_{kl}$ and $C$ are polynomials of $u$; or (ii) $A_{ij}$, $B_{kl}$ and $C$ can be reduced to polynomials of $u$ by means of Taylor series for small values of $u$. For these two cases the above-mentioned class of equations consists of nonlinear PDEs with polynomial nonlinearities. We show that the modified method of simplest equation is a powerful tool for obtaining exact traveling-wave solutions of this class of equations. The balance equations for the sub-class of traveling-wave solutions of the investigated class of equations are obtained. We illustrate the method by obtaining exact traveling-wave solutions (i) of the Swift-Hohenberg equation and (ii) of the generalized Rayleigh equation for the cases when the extended tanh-equation or the equations of Bernoulli and Riccati are used as simplest equations.
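The ansatz underlying the method of simplest equation can be summarized as follows (a standard outline in our notation, not text from the paper): traveling-wave solutions $u(x,t) = u(\xi)$, $\xi = x - vt$, are sought as a finite series in a function $\varphi(\xi)$ that solves a simpler ODE, e.g. the Bernoulli equation:

```latex
u(\xi) = \sum_{i=0}^{N} a_i\,\varphi(\xi)^{i}, \qquad
\varphi' = a\varphi + b\varphi^{2}
\;\Longrightarrow\;
\varphi(\xi) = \frac{a\,e^{a(\xi+\xi_0)}}{1 - b\,e^{a(\xi+\xi_0)}} .
```

Substituting the series into the PDE and balancing the highest-order derivative against the strongest nonlinearity fixes $N$; equating to zero the coefficients of the powers of $\varphi$ then yields the algebraic (balance) system for the $a_i$, the wave speed $v$, and the parameters of the simplest equation.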
NASA Astrophysics Data System (ADS)
Hashemi, M. S.; Baleanu, D.
2016-07-01
We propose a simple and accurate numerical scheme for solving the time fractional telegraph (TFT) equation with a Caputo-type fractional derivative. A fictitious coordinate ϑ is imposed onto the problem in order to transform the dependent variable u(x, t) into a new variable with an extra dimension. In the new space with the added fictitious dimension, a combination of the method of lines and the group preserving scheme (GPS) is proposed to find the approximate solutions. This method preserves the geometric structure of the problem. The power and accuracy of this method have been illustrated through some examples of the TFT equation.
NASA Astrophysics Data System (ADS)
Chen, Peng; Quarteroni, Alfio
2015-10-01
In this work we develop an adaptive and reduced computational algorithm based on dimension-adaptive sparse grid approximation and reduced basis methods for solving high-dimensional uncertainty quantification (UQ) problems. In order to tackle the computational challenge of "curse of dimensionality" commonly faced by these problems, we employ a dimension-adaptive tensor-product algorithm [16] and propose a verified version to enable effective removal of the stagnation phenomenon besides automatically detecting the importance and interaction of different dimensions. To reduce the heavy computational cost of UQ problems modelled by partial differential equations (PDE), we adopt a weighted reduced basis method [7] and develop an adaptive greedy algorithm in combination with the previous verified algorithm for efficient construction of an accurate reduced basis approximation. The efficiency and accuracy of the proposed algorithm are demonstrated by several numerical experiments.
Krause, Katharina; Klopper, Wim
2015-03-14
A generalization of the approximated coupled-cluster singles and doubles method and the algebraic diagrammatic construction scheme up to second order to two-component spinors obtained from a relativistic Hartree–Fock calculation is reported. Computational results for zero-field splittings of atoms and monoatomic cations, triplet lifetimes of two organic molecules, and the spin-forbidden part of the UV/Vis absorption spectrum of tris(ethylenediamine)cobalt(III) are presented.
Novaes, T F; Matos, R; Braga, M M; Imparato, J C P; Raggio, D P; Mendes, F M
2009-01-01
This in vivo study aimed to compare the performance of different methods of approximal caries detection in primary molars. Fifty children (aged 5-12 years) were selected, and 2 examiners evaluated 621 approximal surfaces of primary molars using: (a) visual inspection, (b) the radiographic method and (c) a pen-type laser fluorescence device (LFpen). As the reference standard method, the teeth were separated using orthodontic rubbers during 7 days, and the surfaces were evaluated by 2 examiners for the presence of white spots or cavitations. The area under the receiver-operating characteristics curve (A(z)) as well as sensitivity, specificity and accuracy (percentage of correct diagnosis) were calculated and compared with the McNemar test at both thresholds. The interexaminer reproducibility was calculated using the intraclass correlation coefficient (ICC-absolute values) and the kappa test (dichotomizing for both thresholds). The ICC value of the reference standard procedure was 0.94. At the white-spot threshold, none of the methods tested presented good performance (sensitivity: visual 0.20-0.21; radiographic 0.16-0.23; LFpen 0.16; specificity: visual 0.95; radiographic 0.99-1.00; LFpen 0.94-0.96). At the cavitation threshold, both LFpen and radiographic methods demonstrated higher sensitivity (0.55-0.65 and 0.65-0.70, respectively) and A(z) (0.92 and 0.88-0.89, respectively) than visual inspection sensitivity (0.30) and A(z) (0.69-0.76). All methods presented high specificities (around 0.99) and similar ICCs, but the kappa value for LFpen at the white-spot threshold was lower (0.44). In conclusion, both LFpen and radiographic methods present similar performance in detecting the presence of cavitations on approximal surfaces of primary molars. Copyright 2009 S. Karger AG, Basel.
Roze, Denis; Rousset, François
2003-01-01
Population structure affects the relative influence of selection and drift on the change in allele frequencies. Several models have been proposed recently, using diffusion approximations to calculate fixation probabilities, fixation times, and equilibrium properties of subdivided populations. We propose here a simple method to construct diffusion approximations in structured populations; it relies on general expressions for the expectation and variance in allele frequency change over one generation, in terms of partial derivatives of a "fitness function" and probabilities of genetic identity evaluated in a neutral model. In the limit of a very large number of demes, these probabilities can be expressed as functions of average allele frequencies in the metapopulation, provided that coalescence occurs on two different timescales, which is the case in the island model. We then use the method to derive expressions for the probability of fixation of new mutations, as a function of their dominance coefficient, the rate of partial selfing, and the rate of deme extinction. We obtain more precise approximations than those derived by recent work, in particular (but not only) when deme sizes are small. Comparisons with simulations show that the method gives good results as long as migration is stronger than selection. PMID:14704194
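As a point of reference, the single-population diffusion result that such methods generalize is Kimura's fixation-probability formula. The transcription below is the standard textbook form for genic selection, not the structured-population expressions derived in the paper.

```python
import math

def fixation_prob(p, Ne, s):
    """Kimura's diffusion approximation for the fixation probability of an
    allele at initial frequency p, with effective population size Ne and
    (genic) selection coefficient s."""
    if s == 0:
        return p                      # neutral case: probability = frequency
    a = 4 * Ne * s
    # u(p) = (1 - exp(-4*Ne*s*p)) / (1 - exp(-4*Ne*s)), via expm1 for stability
    return math.expm1(-a * p) / math.expm1(-a)
```

For a new mutant at p = 1/(2N) with Ne = N and weak positive selection, this reproduces the classical approximation u ≈ 2s.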
NASA Astrophysics Data System (ADS)
Chatterjee, Koushik; Pastorczak, Ewa; Jawulski, Konrad; Pernal, Katarzyna
2016-06-01
A perfect-pairing generalized valence bond (GVB) approximation is known to be one of the simplest approximations, which allows one to capture the essence of static correlation in molecular systems. In spite of its attractive feature of being relatively computationally efficient, this approximation misses a large portion of dynamic correlation and does not offer sufficient accuracy to be generally useful for studying electronic structure of molecules. We propose to correct the GVB model and alleviate some of its deficiencies by amending it with the correlation energy correction derived from the recently formulated extended random phase approximation (ERPA). On the examples of systems of diverse electronic structures, we show that the resulting ERPA-GVB method greatly improves upon the GVB model. ERPA-GVB recovers most of the electron correlation and it yields energy barrier heights of excellent accuracy. Thanks to a balanced treatment of static and dynamic correlation, ERPA-GVB stays reliable when one moves from systems dominated by dynamic electron correlation to those for which the static correlation comes into play.
NASA Astrophysics Data System (ADS)
Moraes Rêgo, Patrícia Helena; Viana da Fonseca Neto, João; Ferreira, Ernesto M.
2015-08-01
The main focus of this article is to present a proposal to solve, via UDU^T factorisation, the convergence and numerical stability problems that are related to the covariance matrix ill-conditioning of the recursive least squares (RLS) approach for online approximations of the algebraic Riccati equation (ARE) solution associated with the discrete linear quadratic regulator (DLQR) problem formulated in the actor-critic reinforcement learning and approximate dynamic programming context. The parameterisations of the Bellman equation, utility function and dynamic system, as well as the algebra of the Kronecker product, assemble a framework for the solution of the DLQR problem. The condition number and the positivity parameter of the covariance matrix are associated with statistical metrics for evaluating the approximation performance of the ARE solution via RLS-based estimators. The performance of RLS approximators is also evaluated in terms of consistency and polarisation when associated with reinforcement learning methods. The methodology contemplates realisations of online designs for DLQR controllers that are evaluated in a multivariable dynamic system model.
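For context, the ARE solution that the RLS scheme approximates online can be obtained offline by straightforward fixed-point iteration of the Riccati recursion. The sketch below is ours and contains none of the article's UDU^T-factorised RLS machinery; it simply iterates the Bellman/Riccati map to the stabilizing solution.

```python
import numpy as np

def dlqr_p(A, B, Q, R, iters=200):
    """Solve the discrete algebraic Riccati equation of the DLQR problem by
    fixed-point (value) iteration: P <- Q + A'P(A - BK), K = (R+B'PB)^-1 B'PA."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal gain
        P = Q + A.T @ P @ (A - B @ K)
    return P

# Scalar check: A = B = Q = R = 1 gives P = 1 + P - P^2/(1+P),
# i.e. P^2 = P + 1, whose positive root is the golden ratio.
P = dlqr_p(np.eye(1), np.eye(1), np.eye(1), np.eye(1))
```

The iteration converges whenever the pair (A, B) is stabilizable and (A, Q^(1/2)) is detectable, which holds trivially in this scalar example.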
Chatterjee, Koushik; Pastorczak, Ewa; Jawulski, Konrad; Pernal, Katarzyna
2016-06-28
A perfect-pairing generalized valence bond (GVB) approximation is known to be one of the simplest approximations, which allows one to capture the essence of static correlation in molecular systems. In spite of its attractive feature of being relatively computationally efficient, this approximation misses a large portion of dynamic correlation and does not offer sufficient accuracy to be generally useful for studying electronic structure of molecules. We propose to correct the GVB model and alleviate some of its deficiencies by amending it with the correlation energy correction derived from the recently formulated extended random phase approximation (ERPA). On the examples of systems of diverse electronic structures, we show that the resulting ERPA-GVB method greatly improves upon the GVB model. ERPA-GVB recovers most of the electron correlation and it yields energy barrier heights of excellent accuracy. Thanks to a balanced treatment of static and dynamic correlation, ERPA-GVB stays reliable when one moves from systems dominated by dynamic electron correlation to those for which the static correlation comes into play.
NASA Astrophysics Data System (ADS)
Nakano, Hiroshi; Sato, Hirofumi
2017-04-01
A new theoretical method to study electron transfer reactions in condensed phases is proposed by introducing the mean-field approximation into the constrained density functional theory/molecular mechanical method with a polarizable force field (CDFT/MMpol). The method enables us to efficiently calculate the statistically converged equilibrium and nonequilibrium free energies for diabatic states in an electron transfer reaction by virtue of the mean field approximation that drastically reduces the number of CDFT calculations. We apply the method to the system of a formanilide-anthraquinone dyad in dimethylsulfoxide, in which charge recombination and cis-trans isomerization reactions can take place, previously studied by the CDFT/MMpol method. Quantitative agreement of the driving force and the reorganization energy between our results and those from the CDFT/MMpol calculation and the experimental estimates supports the utility of our method. The calculated nonequilibrium free energy is analyzed by its decomposition into several contributions such as those from the averaged solute-solvent electrostatic interactions and the explicit solvent electronic polarization. The former contribution is qualitatively well described by a model composed of a coarse-grained dyad in a solution in the linear response regime. The latter contribution reduces the reorganization energy by more than 10 kcal/mol.
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix that is too large to be calculated and kept in the memory and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale dataset. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably as KCL, with a large reduction on computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches.
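The subspace-sampling idea is close in spirit to the well-known Nyström low-rank kernel approximation, sketched below. This is our illustration only: the paper's AKCL embeds the sampling inside competitive learning rather than explicitly approximating the full kernel matrix, but the sketch shows why a small landmark sample can stand in for an n × n kernel matrix that is too large to store.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    """Gaussian (RBF) kernel matrix between row-sample sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
idx = rng.choice(300, size=50, replace=False)   # 50 landmark samples
C = rbf(X, X[idx])                              # n x m cross-kernel block
W = rbf(X[idx], X[idx])                         # m x m landmark kernel
K_approx = C @ np.linalg.pinv(W) @ C.T          # Nystrom approximation of K
err = np.linalg.norm(rbf(X, X) - K_approx) / np.linalg.norm(rbf(X, X))
```

Because the RBF kernel's spectrum decays quickly on this data, 50 landmarks already give a small relative Frobenius error while storing only the n × m and m × m blocks.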
Jaiyong, Panichakorn; Bryce, Richard A
2017-06-14
Noncovalent functionalization of graphene by carbohydrates such as β-cyclodextrin (βCD) has the potential to improve graphene dispersibility and its use in biomedical applications. Here we explore the ability of approximate quantum chemical methods to accurately model βCD conformation and its interaction with graphene. We find that the DFTB3, SCC-DFTB and PM3CARB-1 methods provide the best agreement with density functional theory (DFT) in calculation of relative energetics of gas-phase βCD conformers; however, the remaining NDDO-based approaches we considered underestimate the stability of the trans,gauche vicinal diol conformation. This diol orientation, corresponding to a clockwise hydrogen bonding arrangement in the glucosyl residue of βCD, is present in the lowest energy βCD conformer. Consequently, for adsorption on graphene of clockwise or counterclockwise hydrogen bonded forms of βCD, calculated with respect to this unbound conformer, the DFTB3 method provides closer agreement with DFT values than the PM7 and PM6-DH2 approaches. These findings suggest approximate quantum chemical methods as potentially useful tools to guide the design of carbohydrate-graphene interactions, but also highlight the specific challenge to NDDO-based methods in capturing the relative energetics of carbohydrate hydrogen bond networks.
NASA Astrophysics Data System (ADS)
Mohammadpour, Mozhdeh; Jamshidi, Zahra
2016-05-01
The prospect of challenges in reproducing and interpretation of resonance Raman properties of molecules interacting with metal clusters has prompted the present research initiative. Resonance Raman spectra based on the time-dependent gradient approximation are examined in the framework of density functional theory using different methods for representing the exchange-correlation functional. In this work the performance of different XC functionals in the prediction of ground state properties, excitation state energies, and gradients are compared and discussed. Resonance Raman properties based on time-dependent gradient approximation for the strongly low-lying charge transfer states are calculated and compared for different methods. We draw the following conclusions: (1) for calculating the binding energy and ground state geometry, dispersion-corrected functionals give the best performance in comparison to ab initio calculations, (2) GGA and meta GGA functionals give good accuracy in calculating vibrational frequencies, (3) excited state energies determined by hybrid and range-separated hybrid functionals are in good agreement with EOM-CCSD calculations, and (4) in calculating resonance Raman properties GGA functionals give good and reasonable performance in comparison to the experiment; however, calculating the excited state gradient by using the hybrid functional on the hessian of GGA improves the results of the hybrid functional significantly. Finally, we conclude that the agreement of charge-transfer surface enhanced resonance Raman spectra with experiment is improved significantly by using the excited state gradient approximation.
NASA Astrophysics Data System (ADS)
Voitsekhovskaya, O. K.; Egorov, O. V.; Kashirskii, D. E.; Emel'yanov, N. M.
2017-09-01
An advanced method of approximating polynomials for the simultaneous determination of the temperature and concentration of a hot gas from its spectral characteristics is presented. The technique is validated against the most accurate available measurements of the carbon dioxide transmission function at temperatures of 500-1770 K and partial pressures ρ_{CO2} = 0.17-1 atm. An arbitrary number (≥2) of spectral centers is used to solve unambiguously the inverse optical problem for the transmission function in the measured spectral region. The influence of the value of the transmission function on its approximation error by a polynomial of fixed degree is analyzed. Dependences of the errors in determining the temperature and concentration of carbon dioxide on the values of its transmission function and the number of employed spectral centers are obtained. The accuracy of determining the thermodynamic parameters from experimental data, with allowance for the measurement error of the transmission function, is thereby increased.
Advanced Methods of Approximate Reasoning
1990-11-30
possibilistic or "fuzzy" logic. Using a conceptual framework, previously employed to explain the meaning of the Dempster-Shafer calculus of evidence (i.e. ...explanations for possibilistic constructs on the basis of previously existing notions rather than generalizations of modal frameworks by means of fuzzy ...the near future: 1. Control of unstable systems, such as helicopters, land vehicles, or weapon platforms, by means of possibilistic control
NASA Technical Reports Server (NTRS)
Anderson, O. L.; Briley, W. R.; Mcdonald, H.
1978-01-01
An approximate analysis is presented for calculating three-dimensional, low Mach number, laminar viscous flows in curved passages with large secondary flows and corner boundary layers. The analysis is based on the decomposition of the overall velocity field into inviscid and viscous components with the overall velocity being determined from superposition. An incompressible vorticity transport equation is used to estimate inviscid secondary flow velocities to be used as corrections to the potential flow velocity field. A parabolized streamwise momentum equation coupled to an adiabatic energy equation and global continuity equation is used to obtain an approximate viscous correction to the pressure and longitudinal velocity fields. A collateral flow assumption is invoked to estimate the viscous correction to the transverse velocity fields. The approximate analysis is solved numerically using an implicit ADI solution for the viscous pressure and velocity fields. An iterative ADI procedure is used to solve for the inviscid secondary vorticity and velocity fields. This method was applied to computing the flow within a turbine vane passage with inlet flow conditions of M = 0.1 and M = 0.25, Re = 1000 and adiabatic walls, and for a constant radius curved rectangular duct with R/D = 12 and 14 and with inlet flow conditions of M = 0.1, Re = 1000, and adiabatic walls.
Taylor Approximations and Definite Integrals
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2007-01-01
We investigate the possibility of approximating the value of a definite integral by approximating the integrand rather than using numerical methods to approximate the value of the definite integral. Particular cases considered include examples where the integral is improper, such as an elliptic integral. (Contains 4 tables and 2 figures.)
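The idea can be made concrete with the classic example Si(1) = ∫₀¹ sin(x)/x dx, whose integrand looks improper at x = 0 but whose Taylor series integrates term by term without difficulty. This is our example in the spirit of the article, not necessarily one of its cases.

```python
import math

def si1_taylor(n_terms):
    """Approximate Si(1) = integral_0^1 sin(x)/x dx by integrating the
    Taylor series sin(x)/x = sum_k (-1)^k x^(2k)/(2k+1)! term by term:
    each term contributes (-1)^k / ((2k+1) * (2k+1)!)."""
    total = 0.0
    for k in range(n_terms):
        total += (-1) ** k / ((2 * k + 1) * math.factorial(2 * k + 1))
    return total
```

The alternating series converges factorially fast, so a handful of terms already matches the reference value Si(1) ≈ 0.946083070367 to ten decimal places.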
Thorn, Graeme J; King, John R
2016-01-01
The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we looked to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches, namely the approximate Bayesian computation via an existing sequential Monte Carlo (ABC-SMC) method (to compute credible intervals for the parameters), and the profile likelihood estimation (PLE) (to improve the calculation of confidence intervals for the same parameters), the parameters in both cases being derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) have the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well-determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the numbers of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Hunter, Craig A.
1995-01-01
An analytical/numerical method has been developed to predict the static thrust performance of non-axisymmetric, two-dimensional convergent-divergent exhaust nozzles. Thermodynamic nozzle performance effects due to over- and underexpansion are modeled using one-dimensional compressible flow theory. Boundary layer development and skin friction losses are calculated using an approximate integral momentum method based on the classic Kármán-Pohlhausen solution. Angularity effects are included with these two models in a computational Nozzle Performance Analysis Code, NPAC. In four different case studies, results from NPAC are compared to experimental data obtained from subscale nozzle testing to demonstrate the capabilities and limitations of the NPAC method. In several cases, the NPAC prediction matched experimental gross thrust efficiency data to within 0.1 percent at a design NPR, and to within 0.5 percent at off-design conditions.
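The one-dimensional compressible-flow piece of such an analysis reduces to standard isentropic relations. For example, the supersonic exit Mach number of a convergent-divergent nozzle follows from the area-Mach relation; the sketch below is textbook gas dynamics (not NPAC's actual code) and inverts the relation by bisection.

```python
def area_ratio(M, g=1.4):
    """Isentropic area-Mach relation A/A* for Mach number M and ratio of
    specific heats g: (1/M)*[(2/(g+1))*(1+(g-1)/2*M^2)]^((g+1)/(2(g-1)))."""
    return (1.0 / M) * ((2.0 / (g + 1)) * (1 + 0.5 * (g - 1) * M * M)) ** (
        (g + 1) / (2 * (g - 1)))

def exit_mach(ar, g=1.4):
    """Supersonic exit Mach number for a given area ratio A_e/A*, found by
    bisection on the (monotone for M > 1) area-Mach relation."""
    lo, hi = 1.0 + 1e-9, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if area_ratio(mid, g) < ar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For air (g = 1.4), an area ratio of 2 gives an exit Mach number of about 2.20, in agreement with standard compressible-flow tables.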
Ribeiro, Apoena A; Purger, Flávia; Rodrigues, Jonas A; Oliveira, Patrícia R A; Lussi, Adrian; Monteiro, Antonio Henrique; Alves, Haimon D L; Assis, Joaquim T; Vasconcellos, Adalberto B
2015-01-01
This in vivo study aimed to evaluate the influence of contact points on the approximal caries detection in primary molars, by comparing the performance of the DIAGNOdent pen and visual-tactile examination after tooth separation to bitewing radiography (BW). A total of 112 children were examined and 33 children were selected. In three periods (a, b, and c), 209 approximal surfaces were examined: (a) examiner 1 performed visual-tactile examination using the Nyvad criteria (EX1); examiner 2 used DIAGNOdent pen (LF1) and took BW; (b) 1 week later, after tooth separation, examiner 1 performed the second visual-tactile examination (EX2) and examiner 2 used DIAGNOdent again (LF2); (c) after tooth exfoliation, surfaces were directly examined using DIAGNOdent (LF3). Teeth were examined by computed microtomography as a reference standard. Analyses were based on diagnostic thresholds: D1: D0 = health, D1-D4 = disease; D2: D0, D1 = health, D2-D4 = disease; D3: D0-D2 = health, D3, D4 = disease. At D1, the highest sensitivity/specificity were observed for EX1 (1.00)/LF3 (0.68), respectively. At D2, the highest sensitivity/specificity were observed for LF3 (0.69)/BW (1.00), respectively. At D3, the highest sensitivity/specificity were observed for LF3 (0.78)/EX1, EX2 and BW (1.00). EX1 showed higher accuracy values than LF1, and EX2 showed similar values to LF2. We concluded that the visual-tactile examination showed better results in detecting sound surfaces and approximal caries lesions without tooth separation. However, the effectiveness of approximal caries lesion detection of both methods was increased by the absence of contact points. Therefore, regardless of the method of detection, orthodontic separating elastics should be used as a complementary tool for the diagnosis of approximal noncavitated lesions in primary molars.
Combining global and local approximations
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
1991-01-01
A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
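In one dimension the idea can be sketched as follows (our notation and toy functions; the paper works with structural-response approximations from crude and refined FEM models): the crude model is multiplied by a scale factor that varies linearly, matched to the refined model's value and slope at a reference point.

```python
def gla(f_crude, f_refined, df_crude, df_refined, x0):
    """Global-local approximation sketch: scale the crude model by a
    linearly varying factor beta(x) = b0 + b1*(x - x0), where b0 and b1
    match the ratio f_refined/f_crude and its slope at x0."""
    b0 = f_refined(x0) / f_crude(x0)
    # slope of the ratio f_refined/f_crude at x0 (quotient rule)
    b1 = (df_refined(x0) * f_crude(x0)
          - f_refined(x0) * df_crude(x0)) / f_crude(x0) ** 2
    return lambda x: (b0 + b1 * (x - x0)) * f_crude(x)

# Toy check: crude model x^2 vs "refined" model x^2 + x^3. Their ratio,
# 1 + x, is exactly linear, so the GLA reproduces the refined model.
approx = gla(lambda x: x**2, lambda x: x**2 + x**3,
             lambda x: 2 * x, lambda x: 2 * x + 3 * x**2, x0=1.0)
```

A constant scale factor (b1 = 0) would only match the refined model at x0; the linear factor extends the range over which the scaled crude model remains accurate, which is the point of the GLA refinement.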
NASA Technical Reports Server (NTRS)
Buglia, James J.; Young, George R.; Timmons, Jesse D.; Brinkworth, Helen S.
1961-01-01
An analytical method has been developed which approximates the dispersion of a spinning symmetrical body in a vacuum, with time-varying mass and inertia characteristics, under the action of several external disturbances: initial pitching rate, thrust misalignment, and dynamic unbalance. The ratio of the roll inertia to the pitch or yaw inertia is assumed constant. Spin was found to be very effective in reducing the dispersion due to an initial pitch rate or thrust misalignment, but was completely ineffective in reducing the dispersion of a dynamically unbalanced body.
Li, Shaohong L; Marenich, Aleksandr V; Xu, Xuefei; Truhlar, Donald G
2014-01-16
Linear response (LR) Kohn-Sham (KS) time-dependent density functional theory (TDDFT), or KS-LR, has been widely used to study electronically excited states of molecules and is the method of choice for large and complex systems. The Tamm-Dancoff approximation to TDDFT (TDDFT-TDA or KS-TDA) gives results similar to KS-LR and alleviates the instability problem of TDDFT near state intersections. However, KS-LR and KS-TDA share a debilitating feature: conical intersections of the reference state and a response state occur in F - 1 instead of the correct F - 2 dimensions, where F is the number of internal degrees of freedom. Here, we propose a new method, named the configuration interaction-corrected Tamm-Dancoff approximation (CIC-TDA), that eliminates this problem. It calculates the coupling between the reference state and an intersecting response state by interpreting the KS reference-state Slater determinant and linear response as if they were wave functions. Both formal analysis and test results show that CIC-TDA gives similar results to KS-TDA far from a conical intersection, but the intersection occurs with the correct dimensionality. We anticipate that this will allow more realistic application of TDDFT to photochemistry.
NASA Astrophysics Data System (ADS)
Izsák, Róbert; Neese, Frank
2013-07-01
The 'chain of spheres' approximation, developed earlier for the efficient evaluation of the self-consistent field exchange term, is introduced here into the evaluation of the external exchange term of higher order correlation methods. Its performance is studied in the specific case of the spin-component-scaled third-order Møller-Plesset perturbation (SCS-MP3) theory. The results indicate that the approximation performs excellently in terms of both computer time and achievable accuracy. Significant speedups over a conventional method are obtained for larger systems and basis sets. Owing to this development, SCS-MP3 calculations on molecules of the size of penicillin (42 atoms) with a polarised triple-zeta basis set can be performed in ∼3 hours using 16 cores of an Intel Xeon E7-8837 processor with a 2.67 GHz clock speed, which represents a speedup by a factor of 8-9 compared to the previously most efficient algorithm. Thus, the increased accuracy offered by SCS-MP3 can now be explored for at least medium-sized molecules.
Shedge, Sapana V; Carmona-Espíndola, Javier; Pal, Sourav; Köster, Andreas M
2010-02-18
We present a theoretical study of the polarizabilities of free and disubstituted azoarenes employing auxiliary density perturbation theory (ADPT) and the noniterative approximation to the coupled perturbed Kohn-Sham (NIA-CPKS) method. Both methods are noniterative but use different approaches to obtain the perturbed density matrix. NIA-CPKS is different from the conventional CPKS approach in that the perturbed Kohn-Sham matrix is obtained numerically, thereby yielding a single-step solution to CPKS. ADPT is an alternative approach to the analytical CPKS method in the framework of the auxiliary density functional theory. It is shown that the polarizabilities obtained using these two methods are in good agreement with each other. Comparisons are made for disubstituted azoarenes, which give support to the push-pull mechanism. Both methods reproduce the same trend for polarizabilities because of the substitution pattern of the azoarene moiety. Our results are consistent with the standard organic chemistry "activating/deactivating" sequence. We present the polarizabilities of the above molecules calculated with three different exchange-correlation functionals and two different auxiliary function sets. The computational advantages of both methods are also discussed.
NASA Astrophysics Data System (ADS)
Yang, Lei; Yan, Hongyong; Liu, Hong
2017-03-01
The implicit staggered-grid finite-difference (ISFD) scheme is attractive for its accuracy and stability, but its coefficients are conventionally determined by the Taylor-series expansion (TE) method, leading to a loss in numerical precision. In this paper, we modify the TE method using the minimax approximation (MA), and propose a new optimal ISFD scheme based on the modified TE (MTE) with MA method. The new ISFD scheme retains the advantage of the TE method, which guarantees great accuracy at small wavenumbers, while keeping the property of the MA method that numerical errors remain within a limited bound. Thus, it leads to great accuracy in the numerical solution of the wave equations. We derive the optimal ISFD coefficients by applying the new method to the construction of the objective function, and using a Remez algorithm to minimize its maximum error. Numerical analysis in comparison with the conventional TE-based ISFD scheme indicates that the MTE-based ISFD scheme with appropriate parameters can widen the wavenumber range with high accuracy, and achieve greater precision than the conventional ISFD scheme. The numerical modeling results also demonstrate that the MTE-based ISFD scheme performs well in elastic wave simulation, and is more efficient than the conventional ISFD scheme for elastic modeling.
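As background, the conventional TE-based coefficients that the paper starts from can be derived for a small stencil and their dispersion error examined. This sketch (a 2M-point staggered-grid first-derivative stencil with M = 2; the paper's Remez optimization itself is not reproduced) shows why TE coefficients are accurate at small wavenumbers but degrade at large ones:

```python
import math

# Taylor-expansion (TE) coefficients for the M = 2 staggered-grid
# first-derivative stencil. The TE conditions are
#   sum_m c_m (2m-1)   = 1   (matches the first derivative)
#   sum_m c_m (2m-1)^3 = 0   (kills the leading error term)

def te_coeffs_m2():
    # solve the 2x2 system by elimination:
    #   c1*1 + c2*3  = 1
    #   c1*1 + c2*27 = 0
    c2 = (0 - 1) / (27 - 3)   # c2 = -1/24
    c1 = 1 - 3 * c2           # c1 =  9/8
    return c1, c2

def dispersion_error(kh, coeffs):
    # numerical wavenumber: kh_num = 2 * sum_m c_m * sin((m - 1/2) kh)
    kh_num = 2 * sum(c * math.sin((m + 0.5) * kh)
                     for m, c in enumerate(coeffs))
    return abs(kh_num - kh)

c1, c2 = te_coeffs_m2()
print("coefficients:", c1, c2)
for kh in (0.1, 1.0, 2.0):
    print(kh, dispersion_error(kh, (c1, c2)))
```

The error is tiny for small kh but grows rapidly toward the Nyquist limit, which is exactly the regime a minimax (equal-ripple) fit keeps bounded.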
Yoshikawa, Takeshi; Nakai, Hiromi
2015-01-30
Graphical processing units (GPUs) are emerging in computational chemistry to include Hartree-Fock (HF) methods and electron-correlation theories. However, ab initio calculations of large molecules face technical difficulties such as slow memory access between central processing unit and GPU and other shortfalls of GPU memory. The divide-and-conquer (DC) method, which is a linear-scaling scheme that divides a total system into several fragments, could avoid these bottlenecks by separately solving local equations in individual fragments. In addition, the resolution-of-the-identity (RI) approximation enables an effective reduction in computational cost with respect to the GPU memory. The present study implemented the DC-RI-HF code on GPUs using math libraries, which guarantee compatibility with future development of the GPU architecture. Numerical applications confirmed that the present code using GPUs significantly accelerated the HF calculations while maintaining accuracy. © 2014 Wiley Periodicals, Inc.
Multicriteria approximation through decomposition
Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.
1998-06-01
The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of their technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. Their method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) the authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing; (2) they also show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain the first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.
Multicriteria approximation through decomposition
Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.
1997-12-01
The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of the technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. The method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) The authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing. (2) They show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain the first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.
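The decomposition oracles of the paper are not reproduced here, but the closely related randomized-rounding idea can be sketched for set cover. The fractional solution x below is supplied by hand for a toy instance (illustrative values, not the output of an actual LP solver):

```python
import random

# Randomized rounding of a fractional set-cover solution: pick each
# set with probability equal to its fractional value, repeat a few
# rounds, then patch any uncovered elements deterministically.

def round_cover(sets, universe, x, rng, rounds=4):
    picked = set()
    for _ in range(rounds):              # repeat to boost coverage odds
        for i, xi in enumerate(x):
            if rng.random() < xi:
                picked.add(i)
    covered = set().union(*(sets[i] for i in picked)) if picked else set()
    for e in universe - covered:         # deterministic fix-up step
        picked.add(next(i for i, s in enumerate(sets) if e in s))
    return picked

universe = {1, 2, 3, 4}
sets = [{1, 2}, {2, 3}, {3, 4}, {1, 4}]
x = [0.5, 0.5, 0.5, 0.5]                 # hand-made fractional cover
cover = round_cover(sets, universe, x, random.Random(0))
print(sorted(cover), "cost:", len(cover))
```

Solution decomposition replaces the independent coin flips with a single draw from a convex combination of feasible integral solutions, which is what makes the performance guarantees provable rather than probabilistic.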
NASA Astrophysics Data System (ADS)
Skorupski, Krzysztof
2015-05-01
Black carbon (BC) particles are a product of incomplete combustion of carbon-based fuels. One way of studying the optical properties of BC structures is to use the DDA (Discrete Dipole Approximation) method. The main goal of this work was to investigate its accuracy and to identify the most reliable simulation parameters. The ADDA code was used for the light-scattering simulations, and the superposition T-matrix code by Mackowski was selected as the reference. The study was divided into three parts. First, DDA simulations for a single particle (sphere) were performed. The results proved that the meshing algorithm can significantly affect the particle shape, and therefore the extinction diagrams. The volume correction procedure is recommended for sparse or asymmetrical meshes. In the next step, large fractal-like aggregates were investigated. When sparse meshes are used, the impact of the volume correction procedure cannot be easily predicted; in some cases it can even lead to more erroneous results. Finally, the optical properties of fractal-like aggregates composed of spheres in point contact were compared to much more realistic structures made up of connected, non-spherical primary particles.
NASA Astrophysics Data System (ADS)
Bozkaya, Uǧur; Sherrill, C. David
2017-07-01
An efficient implementation of analytic gradients for the coupled-cluster singles and doubles with perturbative triples [CCSD(T)] method with the density-fitting (DF) approximation, denoted as DF-CCSD(T), is reported. For the molecules considered, the DF approach substantially accelerates conventional CCSD(T) analytic gradients due to the reduced input/output time and the acceleration of the so-called "gradient terms": formation of particle density matrices (PDMs), computation of the generalized Fock-matrix (GFM), solution of the Z-vector equation, formation of the effective PDMs and GFM, back-transformation of the PDMs and GFM, from the molecular orbital to the atomic orbital (AO) basis, and computation of gradients in the AO basis. For the largest member of the molecular test set considered (C6H14), the computational times for analytic gradients (with the correlation-consistent polarized valence triple-ζ basis set in serial) are 106.2 [CCSD(T)] and 49.8 [DF-CCSD(T)] h, a speedup of more than 2-fold. In the evaluation of gradient terms, the DF approach completely avoids the use of four-index two-electron integrals. Similar to our previous studies on DF-second-order Møller-Plesset perturbation theory and DF-CCSD gradients, our formalism employs 2- and 3-index two-particle density matrices (TPDMs) instead of 4-index TPDMs. Errors introduced by the DF approximation are negligible for equilibrium geometries and harmonic vibrational frequencies.
Bozkaya, Uğur; Sherrill, C David
2017-07-28
An efficient implementation of analytic gradients for the coupled-cluster singles and doubles with perturbative triples [CCSD(T)] method with the density-fitting (DF) approximation, denoted as DF-CCSD(T), is reported. For the molecules considered, the DF approach substantially accelerates conventional CCSD(T) analytic gradients due to the reduced input/output time and the acceleration of the so-called "gradient terms": formation of particle density matrices (PDMs), computation of the generalized Fock-matrix (GFM), solution of the Z-vector equation, formation of the effective PDMs and GFM, back-transformation of the PDMs and GFM, from the molecular orbital to the atomic orbital (AO) basis, and computation of gradients in the AO basis. For the largest member of the molecular test set considered (C6H14), the computational times for analytic gradients (with the correlation-consistent polarized valence triple-ζ basis set in serial) are 106.2 [CCSD(T)] and 49.8 [DF-CCSD(T)] h, a speedup of more than 2-fold. In the evaluation of gradient terms, the DF approach completely avoids the use of four-index two-electron integrals. Similar to our previous studies on DF-second-order Møller-Plesset perturbation theory and DF-CCSD gradients, our formalism employs 2- and 3-index two-particle density matrices (TPDMs) instead of 4-index TPDMs. Errors introduced by the DF approximation are negligible for equilibrium geometries and harmonic vibrational frequencies.
NASA Astrophysics Data System (ADS)
Mozharovskiy, A. V.; Artemenko, A. A.; Mal'tsev, A. A.; Maslennikov, R. O.; Sevast'yanov, A. G.; Ssorin, V. N.
2015-11-01
We develop a combined method for calculating the characteristics of the integrated lens antennas for millimeter-wave wireless local radio-communication systems on the basis of the geometrical and physical optics approximations. The method is based on the concepts of geometrical optics for calculating the electromagnetic-field distribution on the lens surface (with allowance for multiple internal re-reflections) and physical optics for determining the antenna-radiated fields in the Fraunhofer zone. Using the developed combined method, we study various integrated lens antennas on the basis of data on the shape and material of the lens used and the primary-feed radiation model, which is specified analytically or by computer simulation. Optimal values of the cylindrical-extension length, which ensure the maximum antenna directivity equal to 19.1 and 23.8 dBi for the greater and smaller lenses, respectively, are obtained for the hemispherical quartz-glass lenses having the cylindrical extensions with radii of 7.5 and 12.5 mm. In this case, the scanning-angle range of the considered antennas is greater than ±20° for an admissible 2-dB decrease in the directivity of the deflected beam. The calculation results obtained using the developed method are confirmed by the experimental studies performed for the prototypes of the integrated quartz-glass lens antennas within the framework of this research.
NASA Technical Reports Server (NTRS)
Shirts, R. B.; Reinhardt, W. P.
1982-01-01
Substantial short time regularity, even in the chaotic regions of phase space, is found for what is seen as a large class of systems. This regularity manifests itself through the behavior of approximate constants of motion calculated by Pade summation of the Birkhoff-Gustavson normal form expansion; it is attributed to remnants of destroyed invariant tori in phase space. The remnant torus-like manifold structures are used to justify Einstein-Brillouin-Keller semiclassical quantization procedures for obtaining quantum energy levels, even in the absence of complete tori. They also provide a theoretical basis for the calculation of rate constants for intramolecular mode-mode energy transfer. These results are illustrated by means of a thorough analysis of the Henon-Heiles oscillator problem. Possible generality of the analysis is demonstrated by brief consideration of classical dynamics for the Barbanis Hamiltonian, Zeeman effect in hydrogen and recent results of Wolf and Hase (1980) for the H-C-C fragment.
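Padé summation of a truncated power series, the tool applied above to the Birkhoff-Gustavson normal form, can be illustrated on a simpler series. This sketch builds a generic [L/M] Padé approximant from Taylor coefficients and applies it to exp (a stand-in, not the normal form itself):

```python
from fractions import Fraction as F
from math import exp, factorial

# [L/M] Pade approximant: find A(x)/B(x) with deg A = L, deg B = M,
# B(0) = 1, such that f(x)B(x) - A(x) = O(x^(L+M+1)).

def gauss(A, rhs):
    # exact Gauss-Jordan elimination over Fractions
    n = len(rhs)
    M_ = [row[:] + [r] for row, r in zip(A, rhs)]
    for i in range(n):
        p = next(r for r in range(i, n) if M_[r][i] != 0)
        M_[i], M_[p] = M_[p], M_[i]
        for r in range(n):
            if r != i:
                f = M_[r][i] / M_[i][i]
                M_[r] = [x - f * y for x, y in zip(M_[r], M_[i])]
    return [M_[i][n] / M_[i][i] for i in range(n)]

def pade(c, L, M):
    # denominator b_1..b_M solves: sum_j b_j c_{L+k-j} = -c_{L+k}, k=1..M
    A = [[c[L + k - j] if 0 <= L + k - j < len(c) else F(0)
          for j in range(1, M + 1)] for k in range(1, M + 1)]
    b = [F(1)] + gauss(A, [-c[L + k] for k in range(1, M + 1)])
    # numerator coefficients follow by convolution
    a = [sum(b[j] * c[i - j] for j in range(min(i, M) + 1))
         for i in range(L + 1)]
    return a, b

c = [F(1, factorial(k)) for k in range(5)]   # Taylor coefficients of exp
a, b = pade(c, 2, 2)
val = lambda p, x: sum(float(pi) * x**i for i, pi in enumerate(p))
approx = val(a, 1.0) / val(b, 1.0)           # [2/2] Pade estimate of e
print(approx, exp(1))
```

Even with only five series coefficients the [2/2] approximant (19/7) lands within 0.004 of e, the kind of accelerated convergence that makes Padé summation useful for slowly convergent or divergent normal-form expansions.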
NASA Technical Reports Server (NTRS)
Merrill, W. C.
1978-01-01
The Routh approximation technique for reducing the complexity of system models was applied in the frequency domain to a 16th-order state-variable model of the F100 engine and to a 43rd-order transfer-function model of a launch vehicle boost pump pressure regulator. The results motivate extending the frequency domain formulation of the Routh method to the time domain in order to handle the state variable formulation directly. The time domain formulation was derived and a characterization that specifies all possible Routh similarity transformations was given. The characterization was computed by solving two eigenvalue-eigenvector problems. The application of the time domain Routh technique to the state variable engine model is described, and some results are given. Additional computational problems are discussed, including an optimization procedure that can improve the approximation accuracy by taking advantage of the transformation characterization.
NASA Astrophysics Data System (ADS)
André, Frédéric; Hou, Longfeng; Solovjov, Vladimir P.
2016-01-01
The main restriction of k-distribution approaches for applications in radiative heat transfer in gaseous media arises from the use of a scaling or correlation assumption to treat non-uniform situations. It is shown that those cases can be handled exactly by using a multidimensional k-distribution that addresses the problem of spectral correlations without using any simplifying assumptions. Nevertheless, the approach cannot be suggested for engineering applications due to its computational cost. Accordingly, a more efficient method, based on the so-called Multi-Spectral Framework, is proposed to approximate the previous exact formulation. The model is assessed against reference LBL calculations and shown to outperform usual k-distribution approaches for radiative heat transfer in non-uniform media.
NASA Astrophysics Data System (ADS)
Linnera, J.; Karttunen, A. J.
2017-07-01
The lattice thermal conductivity of Cu2O was studied using ab initio density functional methods. The performance of the generalized gradient approximation (GGA-PBE) and hybrid PBE0 exchange-correlation functionals was compared for various electronic and phonon-related properties. The 3d transition-metal oxides such as Cu2O are known to be a challenging case for pure GGA functionals, and in comparison to GGA-PBE the PBE0 hybrid functional clearly improves the description of both electronic and phonon-related properties. The most striking difference is found in the lattice thermal conductivity, where the GGA underestimates it by as much as 40% in comparison to experiments, while the difference between the experiment and the PBE0 hybrid functional is only a few percent.
Suebka, P.
1984-01-01
In Part I, the excitation spectrum of liquid He II is obtained using a two-body potential consisting of a hard-core potential plus an outside attractive potential. The sum of two Gaussian potentials of Khanna and Das, which is similar to the Lennard-Jones potential, is chosen as the attractive potential. The t-matrix method due to Brueckner and Sawada is adopted, with modifications to replace the interaction potential. The spectrum gives the phonon branch and the roton dip, which resemble the excitation spectrum of liquid He II. The temperature dependence of the excitation spectrum enters the calculation through the zero-momentum-state occupation number. A better approximation of the thermodynamic functions is obtained by extending Landau's theory to the situation where the excitation is a function of temperature as well as of momentum. Our thermodynamic calculations also bear qualitative agreement with measurements on He II, as expected.
NASA Astrophysics Data System (ADS)
Pérez, Alejandro; Tuckerman, Mark E.; Müser, Martin H.
2009-05-01
The problems of ergodicity and internal consistency in the centroid and ring-polymer molecular dynamics methods are addressed in the context of a comparative study of the two methods. Enhanced sampling in ring-polymer molecular dynamics (RPMD) is achieved by first performing an equilibrium path integral calculation and then launching RPMD trajectories from selected, stochastically independent equilibrium configurations. It is shown that this approach converges more rapidly than periodic resampling of velocities from a single long RPMD run. Dynamical quantities obtained from RPMD and centroid molecular dynamics (CMD) are compared to exact results for a variety of model systems. Fully converged results for correlation functions are presented for several one-dimensional systems and para-hydrogen near its triple point using an improved sampling technique. Our results indicate that CMD shows very similar performance to RPMD. The quality of each method is further assessed via a new χ2 descriptor constructed by transforming approximate real-time correlation functions from CMD and RPMD trajectories to imaginary time and comparing these to numerically exact imaginary time correlation functions. For para-hydrogen near its triple point, it is found that adiabatic CMD and RPMD both have similar χ2 error.
Beyond the Kirchhoff approximation
NASA Technical Reports Server (NTRS)
Rodriguez, Ernesto
1989-01-01
The three most successful models for describing scattering from random rough surfaces are the Kirchhoff approximation (KA), the small-perturbation method (SPM), and the two-scale-roughness (or composite roughness) surface-scattering (TSR) models. In this paper it is shown how these three models can be derived rigorously from one perturbation expansion based on the extinction theorem for scalar waves scattering from perfectly rigid surface. It is also shown how corrections to the KA proportional to the surface curvature and higher-order derivatives may be obtained. Using these results, the scattering cross section is derived for various surface models.
Langaas, Mette; Bakke, Øyvind
2014-12-01
In genetic association studies, detecting disease-genotype association is a primary goal. We study seven robust test statistics for such association when the underlying genetic model is unknown, for data on disease status (case or control) and genotype (three genotypes of a biallelic genetic marker). In such studies, p-values have predominantly been calculated by asymptotic approximations or by simulated permutations. We consider an exact method, conditional enumeration. When the number of simulated permutations tends to infinity, the permutation p-value approaches the conditional enumeration p-value, but calculating the latter is much more efficient than performing simulated permutations. We have studied case-control sample sizes with 500-5000 cases and 500-15,000 controls, and significance levels from 5 × 10^-8 to 0.05, thus our results are applicable to genetic association studies with only a few genetic markers under study, intermediate follow-up studies, and genome-wide association studies. Our main findings are: (i) If all monotone genetic models are of interest, the best performance in the situations under study is achieved for the robust test statistics based on the maximum over a range of Cochran-Armitage trend tests with different scores and for the constrained likelihood ratio test. (ii) For significance levels below 0.05, for the test statistics under study, asymptotic approximations may give a test size up to 20 times the nominal level, and should therefore be used with caution. (iii) Calculating p-values based on exact conditional enumeration is a powerful, valid and computationally feasible approach, and we advocate its use in genetic association studies.
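The conditional-enumeration idea can be sketched for a single biallelic marker: with the genotype margins and the number of cases fixed, every possible case/control table is enumerated with its hypergeometric weight. The example below uses a plain Cochran-Armitage trend statistic with fixed additive scores; the paper's maximized and constrained statistics are not reproduced:

```python
from math import comb

# Exact conditional two-sided p-value for a trend in proportions:
# enumerate all tables (r0, r1, r2) of case counts per genotype with
# the observed margins, weighting each by its hypergeometric probability.

def trend_pvalue(cases, controls, scores=(0, 1, 2)):
    n = [c + d for c, d in zip(cases, controls)]   # genotype totals
    N, R = sum(n), sum(cases)                      # sample size, cases
    mean = R * sum(s * ng for s, ng in zip(scores, n)) / N
    t_obs = abs(sum(s * r for s, r in zip(scores, cases)) - mean)
    num = den = 0
    for r0 in range(min(R, n[0]) + 1):
        for r1 in range(min(R - r0, n[1]) + 1):
            r2 = R - r0 - r1
            if r2 > n[2]:
                continue
            w = comb(n[0], r0) * comb(n[1], r1) * comb(n[2], r2)
            den += w
            t = abs(scores[0]*r0 + scores[1]*r1 + scores[2]*r2 - mean)
            if t >= t_obs - 1e-9:                  # as or more extreme
                num += w
    return num / den

p = trend_pvalue(cases=(5, 3, 2), controls=(2, 4, 4))
print(p)
```

For a genome-wide marker set this double loop is repeated per marker, which is why the enumeration is far cheaper than drawing millions of permutations to resolve p-values near 5 × 10^-8.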
Ball, J.R.
1986-04-01
This document is a supplement to a "Handbook for Cost Estimating" (NUREG/CR-3971) and provides specific guidance for developing "quick" approximate estimates of the cost of implementing generic regulatory requirements for nuclear power plants. A method is presented for relating the known construction costs for new nuclear power plants (as contained in the Energy Economic Data Base) to the cost of performing similar work, on a back-fit basis, at existing plants. Cost factors are presented to account for variations in such important cost areas as construction labor productivity, engineering and quality assurance, replacement energy, reworking of existing features, and regional variations in the cost of materials and labor. Other cost categories addressed in this handbook include those for changes in plant operating personnel and plant documents, licensee costs, NRC costs, and costs for other government agencies. Data sheets, worksheets, and appropriate cost algorithms are included to guide the user through preparation of rough estimates. A sample estimate is prepared using the method and the estimating tools provided.
NASA Astrophysics Data System (ADS)
Galván, I. Fdez; Sánchez, M. L.; Martín, M. E.; Olivares del Valle, F. J.; Aguilar, M. A.
2003-11-01
ASEP/MD is a computer program designed to implement the Averaged Solvent Electrostatic Potential/Molecular Dynamics (ASEP/MD) method developed by our group. It can be used for the study of solvent effects and properties of molecules in their liquid state or in solution. It is written in the FORTRAN90 programming language, and should be easy to follow, understand, maintain and modify. Given the nature of the ASEP/MD method, external programs are needed for the quantum calculations and molecular dynamics simulations. The present version of ASEP/MD includes interface routines for the GAUSSIAN package, HONDO, and MOLDY, but adding support for other programs is straightforward. This article describes the program and its usage. Program summary: Title of program: ASEP/MD. Catalogue identifier: ADSF. Program Summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSF. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer for which the program is designed: it has been tested on Intel-based PC and Sun. Operating systems under which the program has been tested: Red Hat Linux 7.2 and SunOS 5.6. Programming language used: FORTRAN90. Memory required to execute with typical data: greatly depends on the system. No. of processors used: 1. Has the code been vectorized or parallelized?: no. No. of bytes in distributed program, including test data, etc.: 44 544. Distribution format: tar gzip file. Keywords: solvent effects, QM/MM methods, mean field approximation, geometry optimization. Nature of physical problem: The study of molecules in solution with quantum methods is a difficult task because of the large number of molecules and configurations that must be taken into account. The quantum mechanics/molecular mechanics methods proposed to date either require massive computational power or oversimplify the solute quantum description. Method of solution: A non-traditional QM/MM method based on the mean field approximation was developed where a classical molecular
NASA Astrophysics Data System (ADS)
Abdel Wahab, N. H.; Salah, Ahmed
2015-05-01
In this paper, the interaction of a three-level -configuration atom and a one-mode quantized electromagnetic cavity field has been studied. The detuning parameters, the Kerr nonlinearity and the arbitrary form of both the field and the intensity-dependent atom-field coupling have been taken into account. The wave function, when the atom and the field are initially prepared in the excited state and a coherent state, respectively, has been obtained by using the Schrödinger equation. The analytical approximate solution of this model has been obtained by using the modified homotopy analysis method (MHAM). The homotopy analysis method (HAM) is summarized briefly. MHAM is obtained from the HAM combined with the Laplace transform, the inverse Laplace transform and Padé approximants. MHAM is used to increase the accuracy and accelerate the convergence rate of the truncated series solution obtained by the HAM. The time-dependent parameters of the anti-bunching of photons, the amplitude-squared squeezing and the coherent properties have been calculated. The influence of the detuning parameters, Kerr nonlinearity and photon number operator on the temporal behavior of these phenomena has been analyzed. We noticed that the considered system is sensitive to variations in these parameters.
Chalasani, P.; Saias, I.; Jha, S.
1996-04-08
As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximation algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
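The binomial pricing model the paper builds on can be sketched directly. For a non-path-dependent option such as an American put, backward induction over the recombining tree prices it in O(n²) time, whereas a path-dependent Asian option would need the whole price path; the parameter values below are illustrative:

```python
from math import exp, sqrt

# Backward induction for an n-period American put in the binomial
# (Cox-Ross-Rubinstein) model: at each node take the larger of the
# discounted continuation value and the immediate exercise value.

def american_put(S0, K, r, sigma, T, n):
    dt = T / n
    u = exp(sigma * sqrt(dt))
    d = 1 / u
    p = (exp(r * dt) - d) / (u - d)      # risk-neutral up-probability
    disc = exp(-r * dt)
    # values at maturity; node j = number of up-moves so far
    v = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    for i in range(n - 1, -1, -1):
        v = [max(disc * (p * v[j + 1] + (1 - p) * v[j]),   # continue
                 K - S0 * u**j * d**(i - j))               # exercise now
             for j in range(i + 1)]
    return v[0]

price = american_put(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=200)
print(price)
```

The recombining tree has only n + 1 terminal nodes, but an Asian option's payoff depends on the path average, so the tree's 2^n distinct paths cannot be collapsed; that blow-up is the source of the hardness result above.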
Asgharzadeh, Hafez; Borazjani, Iman
2017-02-15
diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and matrix-free method with an analytical preconditioner are the fastest methods and the superiority of one to another depends on the flow problem. Furthermore, the implemented methods are fully parallelized with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide building preconditioners for other techniques to improve their performance in the future.
NASA Astrophysics Data System (ADS)
Asgharzadeh, Hafez; Borazjani, Iman
2017-02-01
diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and matrix-free method with an analytical preconditioner are the fastest methods and the superiority of one to another depends on the flow problem. Furthermore, the implemented methods are fully parallelized with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide building preconditioners for other techniques to improve their performance in the future.
Phenomenological applications of rational approximants
NASA Astrophysics Data System (ADS)
Gonzàlez-Solís, Sergi; Masjuan, Pere
2016-08-01
We illustrate the power of Padé approximants (PAs) as a summation method and explore one of their extensions, the so-called quadratic approximants (QAs), to access both the space-like and the (low-energy) time-like (TL) regions. As an introductory and pedagogical exercise, the function (1/z)ln(1 + z) is approximated by both kinds of approximants. Then, PAs are applied to predict pseudoscalar meson Dalitz decays and to extract Vub from the semileptonic B → πℓνℓ decays. Finally, the π vector form factor in the TL region is explored using QAs.
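The pedagogical exercise mentioned above can be reproduced numerically with SciPy's `pade` helper, which builds a rational approximant from Taylor coefficients. The orders ([4/3]) and the evaluation point z = 1 below are our own choices for illustration:

```python
import math
from scipy.interpolate import pade

# Taylor coefficients of f(z) = ln(1+z)/z = sum_{k>=0} (-1)^k z^k / (k+1).
an = [(-1.0) ** k / (k + 1) for k in range(8)]

# [4/3] Padé approximant built from the first 8 Taylor coefficients.
p, q = pade(an, 3)

z = 1.0
pade_val = p(z) / q(z)                              # rational approximant
taylor_val = sum(c * z ** k for k, c in enumerate(an))  # truncated series
exact = math.log(1.0 + z) / z                       # = ln(2) at z = 1
```

At z = 1 the truncated Taylor series is still far from ln(2), while the Padé approximant, built from exactly the same eight coefficients, is accurate to a few decimal places. This resummation property is what makes PAs useful in the phenomenological applications the abstract describes.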
Edison, John R; Monson, Peter A
2014-07-14
Recently we have developed a dynamic mean field theory (DMFT) for lattice gas models of fluids in porous materials [P. A. Monson, J. Chem. Phys. 128(8), 084701 (2008)]. The theory can be used to describe the relaxation processes in the approach to equilibrium or metastable states for fluids in pores and is especially useful for studying systems exhibiting adsorption/desorption hysteresis. In this paper we discuss the extension of the theory to higher order by means of the path probability method (PPM) of Kikuchi and co-workers. We show that this leads to a treatment of the dynamics that is consistent with thermodynamics coming from the Bethe-Peierls or Quasi-Chemical approximation for the equilibrium or metastable equilibrium states of the lattice model. We compare the results from the PPM with those from DMFT and from dynamic Monte Carlo simulations. We find that the predictions from PPM are qualitatively similar to those from DMFT but give somewhat improved quantitative accuracy, in part due to the superior treatment of the underlying thermodynamics. This comes at the cost of greater computational expense associated with the larger number of equations that must be solved.
NASA Astrophysics Data System (ADS)
Bacskay, George B.
1980-05-01
The vertical valence ionization potentials of Ne, H2O and N2 have been calculated by Rayleigh-Schrödinger perturbation and configuration interaction methods. The calculations were carried out in the space of a single determinant reference state and its single and double excitations, using both the N and N - 1 electron Hartree-Fock orbitals as hole/particle bases. The perturbation series for the ion state were generally found to converge fairly slowly in the N electron Hartree-Fock (frozen) orbital basis, but considerably faster in the appropriate N - 1 electron RHF (relaxed) orbital basis. In certain cases, however, due to near-degeneracy effects, partial, and even complete, breakdown of the (non-degenerate) perturbation treatment was observed. The effects of higher excitations on the ionization potentials were estimated by the approximate coupled pair techniques CPA' and CPA″ as well as by a Davidson type correction formula. The final, fully converged CPA″ results are generally in good agreement with those from PNO-CEPA and Green's function calculations as well as experiment.
Edison, John R.; Monson, Peter A.
2014-07-14
Recently we have developed a dynamic mean field theory (DMFT) for lattice gas models of fluids in porous materials [P. A. Monson, J. Chem. Phys. 128(8), 084701 (2008)]. The theory can be used to describe the relaxation processes in the approach to equilibrium or metastable states for fluids in pores and is especially useful for studying systems exhibiting adsorption/desorption hysteresis. In this paper we discuss the extension of the theory to higher order by means of the path probability method (PPM) of Kikuchi and co-workers. We show that this leads to a treatment of the dynamics that is consistent with thermodynamics coming from the Bethe-Peierls or Quasi-Chemical approximation for the equilibrium or metastable equilibrium states of the lattice model. We compare the results from the PPM with those from DMFT and from dynamic Monte Carlo simulations. We find that the predictions from PPM are qualitatively similar to those from DMFT but give somewhat improved quantitative accuracy, in part due to the superior treatment of the underlying thermodynamics. This comes at the cost of greater computational expense associated with the larger number of equations that must be solved.
Structural optimization with approximate sensitivities
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.
1994-01-01
Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, the gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical calculation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. The approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, the method of feasible directions, sequential quadratic programming, and sequential linear programming. Structural optimization with approximate gradients can reduce by one third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation that has traditionally been associated with structural optimization.
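The general idea of feeding cheaply approximated sensitivities to a nonlinear programming method can be sketched as below. The toy "weight" objective and the forward-difference scheme are our own illustration, not the paper's specific constraint-gradient approximation:

```python
import numpy as np
from scipy.optimize import minimize

def weight(x):
    # Toy "structural weight" objective (illustrative, not from the paper).
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] - 2.0) ** 2

def grad_exact(x):
    # Explicit closed-form gradient.
    return np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] - 2.0)])

def grad_approx(x, h=1e-6):
    # Cheap forward-difference approximation to the sensitivities.
    f0 = weight(x)
    g = np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        g[i] = (weight(xp) - f0) / h
    return g

x0 = np.zeros(2)
res_exact = minimize(weight, x0, jac=grad_exact, method="BFGS")
res_approx = minimize(weight, x0, jac=grad_approx, method="BFGS")
```

Both runs reach the same optimum; the point of the paper is that in real structural problems the approximate sensitivities are much cheaper than the closed-form ones obtained from repeated finite element analyses.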
Rasin, A.
1994-04-01
We discuss the idea of approximate flavor symmetries. Relations between approximate flavor symmetries and models of natural flavor conservation and democracy are explored. Implications for neutrino physics are also discussed.
Adaptive approximation models in optimization
Voronin, A.N.
1995-05-01
The paper proposes a method for optimization of functions of several variables that substantially reduces the number of objective function evaluations compared to traditional methods. The method is based on the property of iterative refinement of approximation models of the optimand function in approximation domains that contract to the extremum point. It does not require subjective specification of the starting point, step length, or other parameters of the search procedure. The method is designed for efficient optimization of unimodal functions of several (not more than 10-15) variables and can be applied to find the global extremum of polymodal functions and also for optimization of scalarized forms of vector objective functions.
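The scheme the abstract describes, fitting an approximation model, jumping to its minimizer, and contracting the approximation domain around that point, can be sketched in one variable as follows. The quadratic model, the contraction factor, and the iteration count are illustrative choices, not Voronin's exact algorithm:

```python
import numpy as np

def contract_search(f, lo, hi, iters=12, n_samples=5):
    """Surrogate-based search: fit a quadratic approximation model to a few
    samples, jump to its minimizer, and contract the approximation domain
    around that point. (Illustrative reconstruction, not the paper's exact
    procedure.)"""
    for _ in range(iters):
        xs = np.linspace(lo, hi, n_samples)
        ys = np.array([f(x) for x in xs])
        a, b, _c = np.polyfit(xs, ys, 2)      # quadratic model a*x^2 + b*x + c
        x_star = -b / (2 * a) if a > 0 else xs[np.argmin(ys)]
        x_star = min(max(x_star, lo), hi)     # stay inside the current domain
        half = 0.25 * (hi - lo)               # halve the domain width
        lo, hi = x_star - half, x_star + half
    return x_star

# No starting point or step length needs to be supplied, only a domain.
xmin = contract_search(lambda x: (x - 0.7) ** 2 + 1.0, -5.0, 5.0)
```

Each iteration costs only `n_samples` objective evaluations, which is the source of the evaluation-count savings the abstract claims over traditional search methods.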
NASA Astrophysics Data System (ADS)
Niiniluoto, Ilkka
2014-03-01
Approximation of laws is an important theme in the philosophy of science. If we can make sense of the idea that two scientific laws are "close" to each other, then we can also analyze such methodological notions as approximate explanation of laws, approximate reduction of theories, approximate empirical success of theories, and approximate truth of laws. Proposals for measuring the distance between quantitative scientific laws were given in Niiniluoto (1982, 1987). In this paper, these definitions are reconsidered as a response to the interesting critical remarks by Liu (1999).
Fast approximate stochastic tractography.
Iglesias, Juan Eugenio; Thompson, Paul M; Liu, Cheng-Yi; Tu, Zhuowen
2012-01-01
Many different probabilistic tractography methods have been proposed in the literature to overcome the limitations of classical deterministic tractography: (i) lack of quantitative connectivity information; and (ii) lack of robustness to noise, partial volume effects and selection of the seed region. However, these methods rely on Monte Carlo sampling techniques that are computationally very demanding. This study presents a fast approximate stochastic tractography algorithm (FAST) that can be used interactively, as opposed to having to wait several minutes to obtain the output after marking a seed region. In FAST, tractography is formulated as a Markov chain that relies on a transition tensor. The tensor is designed to mimic the features of a well-known probabilistic tractography method based on a random walk model and Monte Carlo sampling, but can also accommodate other propagation rules. Compared to the baseline algorithm, our method circumvents the sampling process and provides a deterministic solution at the expense of partially sacrificing sub-voxel accuracy. Therefore, the method is strictly speaking not stochastic, but provides a probabilistic output in the spirit of stochastic tractography methods. FAST was compared with the random walk model using real data from 10 patients in two different ways: 1. the probability maps produced by the two methods on five well-known fiber tracts were directly compared using metrics from the image registration literature; and 2. the connectivity measurements between different regions of the brain given by the two methods were compared using the correlation coefficient ρ. The results show that the connectivity measures provided by the two algorithms are well-correlated (ρ = 0.83), and so are the probability maps (normalized cross correlation 0.818 ± 0.081). The maps are also qualitatively (i.e., visually) very similar. The proposed method achieves a 60x speed-up (7 s vs. 7 min) over the Monte Carlo sampling scheme, therefore
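The contrast between a FAST-style deterministic propagation and Monte Carlo random walks can be illustrated on a toy one-dimensional "tract". The 5-voxel chain and its transition probabilities below are invented for the example; the point is only that pushing the whole probability distribution through a transition matrix reproduces the visit probabilities that sampling individual walkers would estimate:

```python
import numpy as np

# Toy 1-D "tract": 5 voxels; a walker moves right with probability 0.7
# and stays put with probability 0.3 (at the last voxel it can only stay).
N = 5
P = np.zeros((N, N))
for i in range(N):
    P[i, min(i + 1, N - 1)] += 0.7
    P[i, i] += 0.3

def deterministic_map(p0, steps):
    # FAST-style propagation: push the whole probability distribution
    # through the transition matrix instead of sampling walkers.
    p = p0.copy()
    for _ in range(steps):
        p = p @ P
    return p

def monte_carlo_map(start, steps, n_walkers, seed=0):
    # Baseline: sample many random walks and histogram their endpoints.
    rng = np.random.default_rng(seed)
    counts = np.zeros(N)
    for _ in range(n_walkers):
        v = start
        for _ in range(steps):
            if rng.random() < 0.7 and v < N - 1:
                v += 1
        counts[v] += 1
    return counts / n_walkers

p0 = np.zeros(N)
p0[0] = 1.0
det = deterministic_map(p0, steps=3)
mc = monte_carlo_map(0, steps=3, n_walkers=20000)
```

The deterministic map is exact in one pass, while the Monte Carlo map only converges to it as the number of walkers grows, which is the source of the speed-up the abstract reports.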
Hayes, S; Taylor, R; Paterson, A
2005-12-01
Forensic facial approximation involves building a likeness of the head and face on the skull of an unidentified individual, with the aim that public broadcast of the likeness will trigger recognition in those who knew the person in life. This paper presents an overview of the collaborative practice between Ronn Taylor (Forensic Sculptor to the Victorian Institute of Forensic Medicine) and Detective Sergeant Adrian Paterson (Victoria Police Criminal Identification Squad). This collaboration involves clay modelling to determine an approximation of the person's head shape and feature location, with surface texture and more speculative elements being rendered digitally onto an image of the model. The advantages of this approach are that through clay modelling anatomical contouring is present, digital enhancement resolves some of the problems of visual perception of a representation, such as edge and shape determination, and the approximation can be easily modified as and when new information is received.
Approximate symmetries of Hamiltonians
NASA Astrophysics Data System (ADS)
Chubb, Christopher T.; Flammia, Steven T.
2017-08-01
We explore the relationship between approximate symmetries of a gapped Hamiltonian and the structure of its ground space. We start by considering approximate symmetry operators, defined as unitary operators whose commutators with the Hamiltonian have norms that are sufficiently small. We show that approximate symmetry operators can be restricted to the ground space while approximately preserving certain mutual commutation relations. We generalize the Stone-von Neumann theorem to matrices that approximately satisfy the canonical (Heisenberg-Weyl-type) commutation relations and use this to show that approximate symmetry operators can certify the degeneracy of the ground space even though they only approximately form a group. Importantly, the notions of "approximate" and "small" are all independent of the dimension of the ambient Hilbert space and depend only on the degeneracy in the ground space. Our analysis additionally holds for any gapped band of sufficiently small width in the excited spectrum of the Hamiltonian, and we discuss applications of these ideas to topological quantum phases of matter and topological quantum error correcting codes. Finally, in our analysis, we also provide an exponential improvement upon bounds concerning the existence of shared approximate eigenvectors of approximately commuting operators under an added normality constraint, which may be of independent interest.
Exponential approximations in optimal design
NASA Technical Reports Server (NTRS)
Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.
1990-01-01
One-point and two-point exponential functions have been developed and proved to be very effective approximations of structural response. The exponential has been compared to the linear, reciprocal and quadratic fit methods. Four test problems in structural analysis have been selected. The use of such approximations is attractive in structural optimization because it reduces the number of exact analyses, which involve computationally expensive finite element analysis.
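A one-variable, value-matching sketch of an exponential approximation is given below: it fits c·x^p through two sampled responses. The actual one- and two-point schemes in the paper also use gradient information, so this is only an illustration of the functional form and of why it suits structural responses, which are often near power laws in the design variables:

```python
import math

def exponential_fit(x0, g0, x1, g1):
    """Fit g(x) ≈ c * x**p through two sampled responses (value matching
    only; the paper's one- and two-point schemes also match gradients)."""
    p = math.log(g1 / g0) / math.log(x1 / x0)
    c = g0 / x0 ** p
    return lambda x: c * x ** p

# A stress-like response that really is a power law is recovered exactly,
# e.g. bending stress scaling as 1/x^2 in a sizing variable x.
g = lambda x: 5.0 / x ** 2
approx = exponential_fit(1.0, g(1.0), 2.0, g(2.0))
```

Between exact analyses, the optimizer works with `approx` instead of `g`, which is how such approximations cut the number of expensive finite element evaluations.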
Approximation techniques for neuromimetic calculus.
Vigneron, V; Barret, C
1999-06-01
Approximation Theory plays a central part in modern statistical methods, in particular in Neural Network modeling. These models are able to approximate a large class of metric data structures over their entire range of definition, or at least piecewise. We survey most of the known results for networks of neurone-like units. The connections to classical statistical ideas such as ordinary least squares (LS) are emphasized.
Gadgets, approximation, and linear programming
Trevisan, L.; Sudan, M.; Sorkin, G.B.; Williamson, D.P.
1996-12-31
We present a linear-programming based method for finding "gadgets", i.e., combinatorial structures reducing constraints of one optimization problem to constraints of another. A key step in this method is a simple observation which limits the search space to a finite one. Using this new method we present a number of new, computer-constructed gadgets for several different reductions. This method also answers a previously posed question of how to prove the optimality of gadgets: we show how LP duality gives such proofs. The new gadgets improve hardness results for MAX CUT and MAX DICUT, showing that approximating these problems to within factors of 60/61 and 44/45 respectively is NP-hard. We also use the gadgets to obtain an improved approximation algorithm for MAX 3SAT which guarantees an approximation ratio of 0.801. This improves upon the previous best bound of 0.7704.
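The role of LP duality as an optimality proof can be illustrated on a tiny linear program: solving the primal and its dual with `scipy.optimize.linprog` yields equal objective values, and the dual solution certifies that no better primal value exists. The example LP below is generic and unrelated to the paper's gadget-search LPs:

```python
import numpy as np
from scipy.optimize import linprog

# Tiny generic LP: maximize 3x + 5y subject to
#   x <= 4,  2y <= 12,  3x + 2y <= 18,  x, y >= 0.
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

# linprog minimizes, so negate the objective for the primal (max) problem.
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# Dual: minimize b^T y subject to A^T y >= c, y >= 0
# (encoded as -A^T y <= -c for linprog).
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3)
```

By weak duality every dual-feasible point upper-bounds the primal optimum, so a dual solution matching the primal value is a proof of optimality; this is the mechanism the paper applies to certify that its computer-constructed gadgets are best possible.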
1951-06-01
Higher-order approximations generally lead to a set of two or more (coupled) algebraic equations. (Remainder of the 1951 abstract is illegible in the scanned source.)
Rebolini, Elisa; Izsák, Róbert; Reine, Simen Sommerfelt; Helgaker, Trygve; Pedersen, Thomas Bondo
2016-08-09
We compare the performance of three approximate methods for speeding up evaluation of the exchange contribution in Hartree-Fock and hybrid Kohn-Sham calculations: the chain-of-spheres algorithm (COSX; Neese, F. Chem. Phys. 2008, 356, 98-109), the pair-atomic resolution-of-identity method (PARI-K; Merlot, P. J. Comput. Chem. 2013, 34, 1486-1496), and the auxiliary density matrix method (ADMM; Guidon, M. J. Chem. Theory Comput. 2010, 6, 2348-2364). Both the efficiency relative to that of a conventional linear-scaling algorithm and the accuracy of total, atomization, and orbital energies are compared for a subset containing 25 of the 200 molecules in the Rx200 set using double-, triple-, and quadruple-ζ basis sets. The accuracy of relative energies is further compared for small alkane conformers (ACONF test set) and Diels-Alder reactions (DARC test set). Overall, we find that the COSX method provides good accuracy for orbital energies as well as total and relative energies, and the method delivers a satisfactory speedup. The PARI-K and in particular ADMM algorithms require further development and optimization to fully exploit their indisputable potential.
NASA Technical Reports Server (NTRS)
Dutta, Soumitra
1988-01-01
A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.