Diagonal Pade approximations for initial value problems
Reusch, M.F.; Ratzan, L.; Pomphrey, N.; Park, W.
1987-06-01
Diagonal Pade approximations to the time evolution operator for initial value problems are applied in a novel way to the numerical solution of these problems by explicitly factoring the polynomials of the approximation. A remarkable gain over conventional methods in efficiency and accuracy of solution is obtained. 20 refs., 3 figs., 1 tab.
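The abstract gives no code; as a hedged sketch of the underlying idea (not the paper's actual implementation), the (1,1) diagonal Pade approximant to exp(hL) yields the familiar trapezoidal update for du/dt = Lu, and higher diagonal approximants factor into products of such first-order stages, which is the factoring the paper exploits:

```python
import math

# The (1,1) diagonal Pade approximant exp(hL) ~ (1 + hL/2)/(1 - hL/2)
# gives the trapezoidal (Crank-Nicolson) update; higher diagonal
# approximants factor into products of such first-order stages.
def pade11_step(u, L, h):
    """One (1,1) diagonal Pade step for du/dt = L*u (scalar L here)."""
    return (1.0 + 0.5 * h * L) / (1.0 - 0.5 * h * L) * u

def integrate(u0, L, T, n):
    h = T / n
    u = u0
    for _ in range(n):
        u = pade11_step(u, L, h)
    return u

# Decay equation u' = -u on [0, 1]; exact solution exp(-1).
approx = integrate(1.0, -1.0, 1.0, 100)
exact = math.exp(-1.0)
err = abs(approx - exact)
```

The second-order accuracy of the (1,1) stage is visible in the error, which scales like h squared for this smooth problem.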
Type II Hermite-Pade approximation to the exponential function
NASA Astrophysics Data System (ADS)
Kuijlaars, A. B. J.; Stahl, H.; van Assche, W.; Wielonsky, F.
2007-10-01
We obtain strong and uniform asymptotics in every domain of the complex plane for the scaled polynomials a(3nz), b(3nz), and c(3nz), where a, b, and c are the type II Hermite-Pade approximants to the exponential function of respective degrees 2n+2, 2n, and 2n, defined by interpolation conditions as z → 0. Our analysis relies on a characterization of these polynomials in terms of a 3x3 matrix Riemann-Hilbert problem which, as a consequence of the famous Mahler relations, corresponds by a simple transformation to a similar Riemann-Hilbert problem for type I Hermite-Pade approximants. Owing to this relation, the study performed in previous work, based on the Deift-Zhou steepest descent method for Riemann-Hilbert problems, can be reused to establish our present results.

Convergence of multipoint Pade approximants of piecewise analytic functions
Buslaev, Viktor I
2013-02-28
The behaviour as n → ∞ of multipoint Pade approximants to a function which is (piecewise) holomorphic on a union of finitely many continua is investigated. The convergence of multipoint Pade approximants is proved for a function which extends holomorphically from these continua to a union of domains whose boundaries have a certain symmetry property. An analogue of Stahl's theorem is established for two-point Pade approximants to a pair of functions, either of which is a multivalued analytic function with finitely many branch points. Bibliography: 11 titles.
Unfolding the Second Riemann sheet with Pade Approximants: hunting resonance poles
Masjuan, Pere
2011-05-23
Based on Pade Theory, a new procedure for extracting the pole mass and width of resonances is proposed. The method is systematic and provides a model-independent treatment for the prediction and the errors of the approximation.
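As a hedged, minimal illustration of the pole-hunting principle (not the paper's full systematic procedure), a [1/1] Pade approximant built from the first Taylor coefficients of a function places its pole at the root of its denominator; for a rational test function the pole is recovered exactly:

```python
# A [1/1] Pade approximant matched to c0 + c1*z + c2*z**2 has
# denominator 1 + q1*z with q1 = -c2/c1, so its pole sits at z = -1/q1.
def pade11_pole(c0, c1, c2):
    q1 = -c2 / c1
    return -1.0 / q1

# Taylor coefficients of f(z) = 1/(2 - z): c_k = 2**-(k+1); exact pole at z = 2.
c = [2.0 ** -(k + 1) for k in range(3)]
pole = pade11_pole(*c)
```

For resonances, the same idea is applied to amplitudes whose poles sit on the second Riemann sheet; higher-order [N/N] approximants and the stability of the recovered pole under increasing N supply the error estimate the abstract mentions.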
Trigonometric Pade approximants for functions with regularly decreasing Fourier coefficients
Labych, Yuliya A; Starovoitov, Alexander P
2009-08-31
Sufficient conditions describing the regular decrease of the coefficients of a Fourier series f(x) = a_0/2 + Σ a_n cos nx are found which ensure that the trigonometric Pade approximants π^t_{n,m}(x; f) converge to the function f in the uniform norm at a rate which coincides asymptotically with the highest possible one. The results obtained are applied to problems dealing with finding sharp constants for rational approximations. Bibliography: 31 titles.
PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST
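The program's internals are not given in the abstract; as a hedged scalar sketch of the scaling-and-squaring idea behind PAWS (which applies it to the Markov model's rate matrix, not to a scalar), one scales the argument down by a power of two, applies a low-order diagonal Pade approximant to the exponential, and then squares the result repeatedly:

```python
import math

# Scaling and squaring with a (1,1) diagonal Pade approximant, shown for
# a scalar argument; PAWS does the analogous computation with matrices.
def expm_pade_scalar(x, s=10):
    y = x / 2.0 ** s                       # scale down by 2**s
    r = (1.0 + 0.5 * y) / (1.0 - 0.5 * y)  # (1,1) Pade of exp(y)
    for _ in range(s):                     # undo the scaling by squaring
        r = r * r
    return r

err = abs(expm_pade_scalar(3.0) - math.exp(3.0))
```

Scaling keeps the Pade stage in its region of high accuracy, which is what makes the approach robust for the stiff systems described above, where rate constants span many orders of magnitude.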
PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST
NASA Technical Reports Server (NTRS)
Vepa, R.
1976-01-01
The general behavior of unsteady airloads in the frequency domain is explained. Based on this, a systematic procedure is described whereby the airloads, produced by completely arbitrary, small, time-dependent motions of a thin lifting surface in an airstream, can be predicted. This scheme employs as raw materials any of the unsteady linearized theories that have been mechanized for simple harmonic oscillations. Each desired aerodynamic transfer function is approximated by means of an appropriate Pade approximant, that is, a rational function of finite degree polynomials in the Laplace transform variable. Although these approximations have many uses, they are proving especially valuable in the design of automatic control systems intended to modify aeroelastic behavior.
Semi-Implicit Operator Splitting Pade Method For Vector HNLS Solitons
Aziez, Siham; Smadi, Moussa; Bahloul, Derradji
2008-09-23
In this paper we use the semi-implicit finite-difference operator splitting Pade (OSPD) method for solving the coupled higher-order nonlinear Schroedinger equation, which describes the propagation of vector solitons in optical fibers. The method, which has fourth-order accuracy in space, shows good stability and efficiency for the coupled HNLS equations describing vector solitons. We have tested the method by analyzing the behavior of optical pulses in birefringent fibers, verifying that third-order dispersion (TOD) has different effects on the two polarizations and that the asymmetric oscillation is significant in only one polarization.
A hybrid Pade-Galerkin technique for differential equations
NASA Technical Reports Server (NTRS)
Geer, James F.; Andersen, Carl M.
1993-01-01
A three-step hybrid analysis technique, which successively uses the regular perturbation expansion method, the Pade expansion method, and then a Galerkin approximation, is presented and applied to some model boundary value problems. In the first step of the method, the regular perturbation method is used to construct an approximation to the solution in the form of a finite power series in a small parameter epsilon associated with the problem. In the second step of the method, the series approximation obtained in step one is used to construct a Pade approximation in the form of a rational function in the parameter epsilon. In the third step, the various powers of epsilon which appear in the Pade approximation are replaced by new (unknown) parameters (delta(sub j)). These new parameters are determined by requiring that the residual formed by substituting the new approximation into the governing differential equation is orthogonal to each of the perturbation coordinate functions used in step one. The technique is applied to model problems involving ordinary or partial differential equations. In general, the technique appears to provide good approximations to the solution even when the perturbation and Pade approximations fail to do so. The method is discussed and topics for future investigations are indicated.
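Step two of the hybrid technique can be sketched as follows; this is a hedged toy example, not the authors' code, using the series for sqrt(1 + eps) in place of a perturbation expansion from an actual boundary value problem:

```python
import math

# Replace the truncated series c0 + c1*eps + c2*eps**2 by the [1/1] Pade
# approximant (a0 + a1*eps)/(1 + b1*eps) matched to the same three terms.
def pade11(c0, c1, c2):
    b1 = -c2 / c1
    a0, a1 = c0, c1 + c0 * b1
    return lambda eps: (a0 + a1 * eps) / (1.0 + b1 * eps)

# Example series: sqrt(1 + eps) ~ 1 + eps/2 - eps**2/8
approx = pade11(1.0, 0.5, -0.125)
pade_err = abs(approx(1.0) - math.sqrt(2.0))
taylor_err = abs(1.0 + 0.5 - 0.125 - math.sqrt(2.0))
```

Even at eps = 1, far outside the radius where the truncated series is trustworthy, the rational form is noticeably closer to the true value, which is the behavior the third (Galerkin) step then improves further.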
Sparse pseudospectral approximation method
NASA Astrophysics Data System (ADS)
Constantine, Paul G.; Eldred, Michael S.; Phipps, Eric T.
2012-07-01
Multivariate global polynomial approximations - such as polynomial chaos or stochastic collocation methods - are now in widespread use for sensitivity analysis and uncertainty quantification. The pseudospectral variety of these methods uses a numerical integration rule to approximate the Fourier-type coefficients of a truncated expansion in orthogonal polynomials. For problems in more than two or three dimensions, a sparse grid numerical integration rule offers accuracy with a smaller node set compared to tensor product approximation. However, when using a sparse rule to approximately integrate these coefficients, one often finds unacceptable errors in the coefficients associated with higher degree polynomials. By reexamining Smolyak's algorithm and exploiting the connections between interpolation and projection in tensor product spaces, we construct a sparse pseudospectral approximation method that accurately reproduces the coefficients of basis functions that naturally correspond to the sparse grid integration rule. The compelling numerical results show that this is the proper way to use sparse grid integration rules for pseudospectral approximation.
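A hedged one-dimensional sketch of the pseudospectral step (the paper's contribution is doing this consistently with sparse Smolyak rules in many dimensions, which is not shown here): approximate a Fourier-Legendre coefficient by a Gauss-Legendre quadrature rule.

```python
import math

# 3-point Gauss-Legendre rule on [-1, 1]: exact for polynomials up to degree 5.
nodes = [-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)]
weights = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]

f = lambda x: x ** 3          # model response
P1 = lambda x: x              # Legendre polynomial of degree 1

# Pseudospectral coefficient c1 = <f, P1> / <P1, P1>, integrals by quadrature.
num = sum(w * f(x) * P1(x) for w, x in zip(weights, nodes))
den = 2.0 / 3.0               # <P1, P1> on [-1, 1]
c1 = num / den                # exact value: 3/5
```

Because the rule integrates f*P1 exactly here, the coefficient is recovered to machine precision; the aliasing errors the abstract warns about arise when a sparse rule is too weak for the product being integrated.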
Sokolovski, D.; Msezane, A.Z.
2004-09-01
A semiclassical complex angular momentum theory, used to analyze atom-diatom reactive angular distributions, is applied to several well-known potential (one-particle) problems. Examples include resonance scattering, rainbow scattering, and the Eckart threshold model. Pade reconstruction of the corresponding matrix elements from the values at physical (integral) angular momenta and properties of the Pade approximants are discussed in detail.
Lin, Ying-Tsong; Collis, Jon M; Duda, Timothy F
2012-11-01
An alternating direction implicit (ADI) three-dimensional fluid parabolic equation solution method with enhanced accuracy is presented. The method uses a square-root Helmholtz operator splitting algorithm that retains cross-multiplied operator terms that have been previously neglected. With these higher-order cross terms, the valid angular range of the parabolic equation solution is improved. The method is tested for accuracy against an image solution in an idealized wedge problem. Computational efficiency improvements resulting from the ADI discretization are also discussed.
PaDe - The particle detection program
NASA Astrophysics Data System (ADS)
Ott, T.; Drolshagen, E.; Koschny, D.; Poppe, B.
2016-01-01
This paper introduces the Particle Detection program PaDe. Its aim is to analyze dust particles in the coma of the Jupiter-family comet 67P/Churyumov-Gerasimenko that were recorded by the two OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) cameras onboard the ESA spacecraft Rosetta, see e.g. Keller et al. (2007). In addition to working with the Rosetta data, the code was modified to work with images of meteors. It was tested with data recorded by the ICCs (Intensified CCD Cameras) of the CILBO-System (Canary Island Long-Baseline Observatory) on the Canary Islands; compare Koschny et al. (2013). This paper presents a new method for the position determination of the observed meteors. The PaDe program was written in Python 3.4. Its original intent is to find the trails of dust particles in space from the OSIRIS images. To do so, it determines the positions where a trail starts and ends. These are found by fitting the so-called error function (Andrews, 1998) to the two edges of the intensity profiles. The positions where the intensities fall to half maximum are taken as the beginning and end of the particle. In the case of meteors, this method can be applied to find the leading edge of the meteor. The proposed method has the potential to increase the accuracy of the position determination of meteors dramatically. Unlike the standard method of finding the photometric center, our method is not influenced by any trails or wakes behind the meteor. This paper presents first results of this ongoing work.
General Pade Effective Potential for Coulomb Problems in Condensed and Soft Matters
NASA Astrophysics Data System (ADS)
Quyen, B. L.; Mai, D. N.; Hoa, N. M.; Van, T. T. T.; Hoai, N. L.; Viet, N. A.
2014-09-01
Effective potentials for finding the ground states and physical configurations have essential meaning in many Coulomb problems of condensed and soft matter. Ordinary n-Pade approximation potentials, defined as ratios P_i(r)/P_{i+1}(r), where P_i(r) is a polynomial of order i in the charge separation r, give quite good fits and agreement between calculated results and experimental data for Coulomb problems in which screening effects are unimportant or the exchanged photons are still massless. In this work we consider a general Pade effective potential that includes a factor of exponential form, which can give more accurate results in the above-mentioned cases as well. These general Pade effective potentials, with their analytical expressions, are useful for performing analytical calculations and estimates and for reducing the amount of computational time in future investigations of condensed and soft matter topics. As an example from soft matter, we study the case of the MS2 virus, for which the general Pade potential gives much more accurate results than the ordinary Pade approximation.
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
Approximation methods in gravitational-radiation theory
NASA Technical Reports Server (NTRS)
Will, C. M.
1986-01-01
The observation of gravitational-radiation damping in the binary pulsar PSR 1913 + 16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. Recent developments are summarized in two areas in which approximations are important: (a) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (b) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.
Second derivatives for approximate spin projection methods
Thompson, Lee M.; Hratchian, Hrant P.
2015-02-07
The use of broken-symmetry electronic structure methods is required in order to obtain correct behavior of electronically strained open-shell systems, such as transition states, biradicals, and transition metals. This approach often has issues with spin contamination, which can lead to significant errors in predicted energies, geometries, and properties. Approximate projection schemes are able to correct for spin contamination and can often yield improved results. To fully make use of these methods and to carry out exploration of the potential energy surface, it is desirable to develop an efficient second energy derivative theory. In this paper, we formulate the analytical second derivatives for the Yamaguchi approximate projection scheme, building on recent work that has yielded an efficient implementation of the analytical first derivatives.
Approximation methods in relativistic eigenvalue perturbation theory
NASA Astrophysics Data System (ADS)
Noble, Jonathan Howard
In this dissertation, three questions concerning approximation methods for the eigenvalues of quantum mechanical systems are investigated: (i) What is a pseudo-Hermitian Hamiltonian, and how can its eigenvalues be approximated via numerical calculations? This is a fairly broad topic, and the scope of the investigation is narrowed by focusing on a subgroup of pseudo-Hermitian operators, namely, PT-symmetric operators. Within a numerical approach, one projects a PT-symmetric Hamiltonian onto an appropriate basis, and uses a straightforward two-step algorithm to diagonalize the resulting matrix, leading to numerically approximated eigenvalues. (ii) Within an analytic ansatz, how can a relativistic Dirac Hamiltonian be decoupled into particle and antiparticle degrees of freedom, in appropriate kinematic limits? One possible answer is the Foldy-Wouthuysen transform; however, there are alternative methods which seem to have some advantages over the time-tested approach. One such method is investigated by applying both the traditional Foldy-Wouthuysen transform and the "chiral" Foldy-Wouthuysen transform to a number of Dirac Hamiltonians, including the central-field Hamiltonian for a gravitationally bound system, namely, the Dirac-(Einstein-)Schwarzschild Hamiltonian, which requires the formalism of general relativity. (iii) Are there pseudo-Hermitian variants of Dirac Hamiltonians that can be approximated using a decoupling transformation? The tachyonic Dirac Hamiltonian, which describes faster-than-light spin-1/2 particles, is gamma5-Hermitian, i.e., pseudo-Hermitian. Superluminal particles remain faster than light under a Lorentz transformation, and hence the Foldy-Wouthuysen program is unsuited for this case. Thus, inspired by the Foldy-Wouthuysen program, a decoupling transform in the ultrarelativistic limit is proposed, which is applicable to both sub- and superluminal particles.
Approximation methods for stochastic petri nets
NASA Technical Reports Server (NTRS)
Jungnitz, Hauke Joerg
1992-01-01
Stochastic Marked Graphs are a concurrent, decision-free formalism provided with a powerful synchronization mechanism generalizing conventional Fork Join Queueing Networks. In some particular cases the analysis of the throughput can be done analytically. Otherwise the analysis suffers from the classical state explosion problem. Embedded in the divide and conquer paradigm, approximation techniques are introduced for the analysis of stochastic marked graphs and Macroplace/Macrotransition-nets (MPMT-nets), a new subclass introduced herein. MPMT-nets are a subclass of Petri nets that allow limited choice, concurrency and sharing of resources. The modeling power of MPMT-nets is much larger than that of marked graphs, e.g., MPMT-nets can model manufacturing flow lines with unreliable machines and dataflow graphs where choice and synchronization occur. The basic idea leads to the notion of a cut to split the original net system into two subnets. The cuts lead to two aggregated net systems where one of the subnets is reduced to a single transition. A further reduction leads to a basic skeleton. The generalization of the idea leads to multiple cuts, where single cuts can be applied recursively, leading to a hierarchical decomposition. Based on the decomposition, a response time approximation technique for the performance analysis is introduced. Also, delay equivalence, which has previously been introduced in the context of marked graphs by Woodside et al., Marie's method, and flow equivalent aggregation are applied to the aggregated net systems. The experimental results show that response time approximation converges quickly and shows reasonable accuracy in most cases. The convergence of Marie's method is slower, but the accuracy is generally better.
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations; hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
Sensitivity analysis and approximation methods for general eigenvalue problems
NASA Technical Reports Server (NTRS)
Murthy, D. V.; Haftka, R. T.
1986-01-01
Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on the trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of an appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest, and the number of design points at which approximation is sought.
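One reanalysis idea surveyed above can be sketched in a hedged toy example (symmetric 2x2 matrices for clarity, though the survey targets the non-hermitian case): reuse the eigenvector of the nominal matrix in a Rayleigh quotient to approximate the eigenvalue of a modified design without a full resolve.

```python
import math

def rayleigh(A, x):
    """Rayleigh quotient x'Ax / x'x for a real matrix A and vector x."""
    Ax = [sum(a * v for a, v in zip(row, x)) for row in A]
    return sum(v * w for v, w in zip(x, Ax)) / sum(v * v for v in x)

A0 = [[2.0, 0.0], [0.0, 1.0]]          # nominal design matrix
x0 = [1.0, 0.0]                        # its eigenvector for the largest eigenvalue
dA = [[0.1, 0.2], [0.2, -0.1]]         # design perturbation
A1 = [[a + d for a, d in zip(ra, rd)] for ra, rd in zip(A0, dA)]

approx = rayleigh(A1, x0)              # cheap reanalysis estimate
# Exact largest eigenvalue of the symmetric 2x2 perturbed matrix:
tr = A1[0][0] + A1[1][1]
det = A1[0][0] * A1[1][1] - A1[0][1] * A1[1][0]
exact = tr / 2 + math.sqrt(tr * tr / 4 - det)
gap = exact - approx
```

For symmetric matrices the quotient is second-order accurate in the eigenvector error and never exceeds the largest eigenvalue, which is why quotient-based reanalysis can be much cheaper than a fresh eigensolve.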
Approximate inverse preconditioning of iterative methods for nonsymmetric linear systems
Benzi, M.; Tuma, M.
1996-12-31
A method for computing an incomplete factorization of the inverse of a nonsymmetric matrix A is presented. The resulting factorized sparse approximate inverse is used as a preconditioner in the iterative solution of Ax = b by Krylov subspace methods.
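A hedged toy illustration of the approximate-inverse idea (deliberately simplified: here M is just the inverse of diag(A) used in a Richardson iteration, whereas Benzi and Tuma construct a sparse factorized M by incomplete inverse factorization and apply it inside Krylov methods such as GMRES):

```python
def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def precond_richardson(A, b, iters=200):
    """Solve Ax = b by x <- x + M(b - Ax) with the crude approximate
    inverse M = diag(A)**-1 (the Jacobi iteration)."""
    M = [1.0 / A[i][i] for i in range(len(A))]
    x = [0.0] * len(b)
    for _ in range(iters):
        r = [bi - ri for bi, ri in zip(b, matvec(A, x))]
        x = [xi + mi * ri for xi, mi, ri in zip(x, M, r)]
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 5.0]]
b = [1.0, 2.0, 3.0]
x = precond_richardson(A, b)
res = max(abs(bi - ri) for bi, ri in zip(b, matvec(A, x)))
```

The point of a good approximate inverse is exactly this: applying M is a cheap matrix-vector product (no triangular solves), so the preconditioner parallelizes well.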
Numerical Stability and Convergence of Approximate Methods for Conservation Laws
NASA Astrophysics Data System (ADS)
Galkin, V. A.
We present a new approach to establishing the convergence of approximate methods, based on the theory of functional solutions for conservation laws. Applications to physical kinetics and to gas and fluid dynamics are considered.
Dual methods and approximation concepts in structural synthesis
NASA Technical Reports Server (NTRS)
Fleury, C.; Schmit, L. A., Jr.
1980-01-01
Approximation concepts and dual method algorithms are combined to create a method for minimum weight design of structural systems. Approximation concepts convert the basic mathematical programming statement of the structural synthesis problem into a sequence of explicit primal problems of separable form. These problems are solved by constructing explicit dual functions, which are maximized subject to nonnegativity constraints on the dual variables. It is shown that the joining together of approximation concepts and dual methods can be viewed as a generalized optimality criteria approach. The dual method is successfully extended to deal with pure discrete and mixed continuous-discrete design variable problems. The power of the method presented is illustrated with numerical results for example problems, including a metallic swept wing and a thin delta wing with fiber composite skins.
Intermediate boundary conditions for LOD, ADI and approximate factorization methods
NASA Technical Reports Server (NTRS)
Leveque, R. J.
1985-01-01
A general approach to determining the correct intermediate boundary conditions for dimensional splitting methods is presented. The intermediate solution U* is viewed as a second order accurate approximation to a modified equation. Deriving the modified equation and using the relationship between this equation and the original equation allows us to determine the correct boundary conditions for U*. This technique is illustrated by applying it to locally one dimensional (LOD) and alternating direction implicit (ADI) methods for the heat equation in two and three space dimensions. The approximate factorization method is considered in slightly more generality.
Efficient solution of parabolic equations by Krylov approximation methods
NASA Technical Reports Server (NTRS)
Gallopoulos, E.; Saad, Y.
1990-01-01
Numerical techniques for solving parabolic equations by the method of lines are addressed. The main motivation for the proposed approach is the possibility of exploiting a high degree of parallelism in a simple manner. The basic idea of the method is to approximate the action of the evolution operator on a given state vector by means of a projection process onto a Krylov subspace. Thus, the resulting approximation consists of applying an evolution operator of a very small dimension to a known vector which is, in turn, computed accurately by exploiting well-known rational approximations to the exponential. Because the rational approximation is only applied to a small matrix, the only operations required with the original large matrix are matrix-by-vector multiplications, and as a result the algorithm can easily be parallelized and vectorized. Some relevant approximation and stability issues are discussed. We present some numerical experiments with the method and compare its performance with a few explicit and implicit algorithms.
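A hedged pure-Python sketch of the projection idea described above: build a small Krylov subspace with Arnoldi, exponentiate the small Hessenberg matrix (here by a plain Taylor series, standing in for the rational approximations the paper uses), and lift the result back to the full space.

```python
import math

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def arnoldi(A, v, m):
    """Arnoldi: orthonormal basis V of the Krylov subspace and the
    projected m x m Hessenberg matrix H = V'AV."""
    beta = math.sqrt(sum(x * x for x in v))
    V = [[x / beta for x in v]]
    H = [[0.0] * m for _ in range(m + 1)]
    for j in range(m):
        w = matvec(A, V[j])
        for i in range(j + 1):
            H[i][j] = sum(a * b for a, b in zip(w, V[i]))
            w = [a - H[i][j] * b for a, b in zip(w, V[i])]
        H[j + 1][j] = math.sqrt(sum(x * x for x in w))
        if j + 1 < m:
            V.append([x / H[j + 1][j] for x in w])
    return V, [row[:m] for row in H[:m]], beta

def expm_times_e1(H, nterms=40):
    """exp(H) @ e1 by a truncated Taylor series (H is small)."""
    n = len(H)
    term = [1.0 if i == 0 else 0.0 for i in range(n)]
    y = term[:]
    for k in range(1, nterms):
        term = [t / k for t in matvec(H, term)]
        y = [a + b for a, b in zip(y, term)]
    return y

def krylov_expv(A, v, m):
    """Approximate exp(A) @ v as beta * V @ exp(H) @ e1."""
    V, H, beta = arnoldi(A, v, m)
    y = expm_times_e1(H)
    return [beta * sum(y[j] * V[j][i] for j in range(m)) for i in range(len(v))]

A = [[-2.0, 1.0, 0.0], [1.0, -2.0, 1.0], [0.0, 1.0, -2.0]]
v = [1.0, 0.0, 0.0]
approx = krylov_expv(A, v, 3)   # m equals n here, so the projection is exact
term, exact = v[:], v[:]        # reference: exp(A) @ v by direct Taylor series
for k in range(1, 40):
    term = [t / k for t in matvec(A, term)]
    exact = [a + b for a, b in zip(exact, term)]
err = max(abs(a - b) for a, b in zip(approx, exact))
```

In practice m is far smaller than the dimension of A; the large matrix enters only through matrix-vector products, which is the source of the parallelism noted in the abstract.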
Multi-level methods and approximating distribution functions
NASA Astrophysics Data System (ADS)
Wilson, D.; Baker, R. E.
2016-07-01
Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie's direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie's direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146-179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
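As a hedged sketch (a minimal example, not the multi-level estimator itself), Gillespie's direct method for the decay reaction A → 0 with rate constant c is the kind of exact sampler whose paths the multi-level approach combines with cheaper tau-leap paths:

```python
import random

def gillespie_decay(n0, c, t_max, rng):
    """One exact sample path of A -> 0; returns the copy number at t_max."""
    t, n = 0.0, n0
    while n > 0:
        a = c * n                   # total propensity
        t += rng.expovariate(a)     # exponential waiting time to next event
        if t > t_max:
            break
        n -= 1                      # fire the reaction
    return n

rng = random.Random(1)
samples = [gillespie_decay(50, 1.0, 1.0, rng) for _ in range(2000)]
mean = sum(samples) / len(samples)  # analytic mean: 50 * exp(-1) ~ 18.4
```

Each exact path is cheap here, but for stiff networks the event count explodes; the multi-level method spends most samples on coarse tau-leap paths and only a few on exact (or fine) paths to correct the bias.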
Calculating Resonance Positions and Widths Using the Siegert Approximation Method
ERIC Educational Resources Information Center
Rapedius, Kevin
2011-01-01
Here, we present complex resonance states (or Siegert states) that describe the tunnelling decay of a trapped quantum particle from an intuitive point of view that naturally leads to the easily applicable Siegert approximation method. This can be used for analytical and numerical calculations of complex resonances of both the linear and nonlinear…
Methods to approximate reliabilities in single-step genomic evaluation
Technology Transfer Automated Retrieval System (TEKTRAN)
Reliability of predictions from single-step genomic BLUP (ssGBLUP) can be calculated by inversion, but that is not feasible for large data sets. Two methods of approximating reliability were developed based on decomposition of a function of reliability into contributions from records, pedigrees, and...
An approximation concepts method for space frame synthesis
NASA Technical Reports Server (NTRS)
Mills-Curran, W. C.; Lust, R. V.; Schmit, L. A.
1982-01-01
A method is presented for the minimum-mass design of three-dimensional space frames constructed of thin-walled rectangular cross-section members. Constraints on nodal displacements and rotations, material stress, local buckling, and cross-sectional dimensions are included. A high-quality separable approximate problem is formed in terms of the reciprocals of the four section properties of the frame element cross section, replacing all implicit functions with simplified explicit relations. The cross-sectional dimensions are efficiently calculated without using multilevel techniques. Several test problems are solved, demonstrating that a series of approximate problem solutions converges rapidly to an optimal design.
An approximate method for calculating aircraft downwash on parachute trajectories
Strickland, J.H.
1989-01-01
An approximate method for calculating velocities induced by aircraft on parachute trajectories is presented herein. A simple system of quadrilateral vortex panels is used to model the aircraft wing and its wake. The purpose of this work is to provide a simple analytical tool which can be used to approximate the effect of aircraft-induced velocities on parachute performance. Performance issues such as turnover and wake recontact may be strongly influenced by velocities induced by the wake of the delivering aircraft, especially if the aircraft is maneuvering at the time of parachute deployment. 7 refs., 9 figs.
Source Localization using Stochastic Approximation and Least Squares Methods
Sahyoun, Samir S.; Djouadi, Seddik M.; Qi, Hairong; Drira, Anis
2009-03-05
This paper presents two approaches to locating the source of a chemical plume: nonlinear least squares and stochastic approximation (SA) algorithms. Concentration levels of the chemical, measured by special sensors, are used to locate the source. The nonlinear least squares technique is applied at different noise levels and compared with localization using SA. For noise-corrupted data collected from a distributed set of chemical sensors, we show that SA methods are more efficient than the least squares method. SA methods are often better at coping with noisy input information than other search methods.
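A Robbins-Monro stochastic approximation iteration for this kind of problem can be sketched as follows. Everything concrete here is hypothetical: the abstract does not specify the dispersion model, sensor layout, or gain sequence, so the sketch assumes an idealized concentration-falls-as-1/distance plume and adds fresh measurement noise at every iteration.

```python
import math
import random

Q = 100.0  # assumed source strength for the hypothetical plume model

def conc(src, pos):
    """Hypothetical plume model: concentration ~ Q / distance to source."""
    d = math.hypot(pos[0] - src[0], pos[1] - src[1])
    return Q / max(d, 1e-6)

def sa_localize(sensors, readings, steps=3000, noise=1.0, seed=3):
    """Robbins-Monro stochastic approximation: each iterate sees only
    noise-corrupted sensor readings; the decaying gain averages the
    noise out, and gradient clipping keeps the noisy steps bounded."""
    rng = random.Random(seed)
    x, y = 5.0, 4.0                            # initial guess
    for k in range(1, steps + 1):
        gx = gy = 0.0
        for (sx, sy), m in zip(sensors, readings):
            z = m + rng.gauss(0.0, noise)      # fresh noisy measurement
            d = max(math.hypot(sx - x, sy - y), 1e-6)
            r = Q / d - z                      # model minus measurement
            # gradient of 0.5*r^2; note d(Q/d)/dx = Q*(sx - x)/d^3
            gx += r * Q * (sx - x) / d ** 3
            gy += r * Q * (sy - y) / d ** 3
        g = math.hypot(gx, gy)
        if g > 1.0:                            # clip large noisy steps
            gx, gy = gx / g, gy / g
        gain = 2.0 / (k + 10)                  # Robbins-Monro gain sequence
        x -= gain * gx
        y -= gain * gy
    return x, y
```

A batch nonlinear least squares solver would instead fit all residuals at once; the SA iteration above trades per-step accuracy for robustness to measurement noise.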
An approximate method for residual stress calculation in functionally graded materials
Becker, T.L.
1999-06-02
Thermal residual stresses in functionally graded materials (FGMs) arise primarily from nonlinear spatial variations in the thermal expansion coefficient, but can be significantly adjusted by variations in modulus. Thermoelastic analysis of FGMs is complicated by such modulus gradients. A class of problems for which thermal stress solutions for materials with constant modulus can be used as a basis for approximations for FGMs is discussed. The size of the error in this approximation due to gradients in elastic modulus is investigated. Analytical and finite element solutions for the thermal stresses in various FGM geometries are compared to results from this approximate method. In a geometry of practical interest, a right cylinder graded along the z-axis, the error for a Ni-Al2O3 FGM was found to be within 15 percent for all gradients considered. The form of the approximation makes it easier to identify desirable types of spatial nonlinearity in expansion coefficient and variations in modulus: this would allow the manipulation of the location of compressive stresses.
Parallel iterative solvers and preconditioners using approximate hierarchical methods
Grama, A.; Kumar, V.; Sameh, A.
1996-12-31
In this paper, we report results on the performance, convergence, and accuracy of a parallel GMRES solver for Boundary Element Methods. The solver uses a hierarchical approximate matrix-vector product based on a hybrid Barnes-Hut / Fast Multipole Method. We study the impact of various accuracy parameters on the convergence and show that, with minimal loss in accuracy, our solver yields significant speedups. We demonstrate the excellent parallel efficiency and scalability of our solver. The combined speedups from approximation and parallelism represent an improvement of several orders of magnitude in solution time. We also develop fast and parallelizable preconditioners for this problem. We report on the performance of an inner-outer scheme and a preconditioner based on a truncated Green's function. Experimental results on a 256-processor Cray T3D are presented.
A multiscale two-point flux-approximation method
Møyner, Olav; Lie, Knut-Andreas
2014-10-15
A large number of multiscale finite-volume methods have been developed over the past decade to compute conservative approximations to multiphase flow problems in heterogeneous porous media. In particular, several iterative and algebraic multiscale frameworks that seek to reduce the fine-scale residual towards machine precision have been presented. Common to all such methods is that they rely on a compatible primal-dual coarse partition, which makes it challenging to extend them to stratigraphic and unstructured grids. Herein, we propose a general idea for how one can formulate multiscale finite-volume methods using only a primal coarse partition. To this end, we use two key ingredients that are computed numerically: (i) elementary functions that correspond to flow solutions used in transmissibility upscaling, and (ii) partition-of-unity functions used to combine elementary functions into basis functions. We exemplify the idea by deriving a multiscale two-point flux-approximation (MsTPFA) method, which is robust with regard to strong heterogeneities in the permeability field and can easily handle general grids with unstructured fine- and coarse-scale connections. The method can be adapted to arbitrary levels of coarsening and used both as a standalone solver and as a preconditioner. Several numerical experiments are presented to demonstrate that the MsTPFA method can solve elliptic pressure problems on a wide variety of geological models in a robust and efficient manner.
A comparative investigation on recurrence formulae in finite difference methods
NASA Astrophysics Data System (ADS)
Haberland, Christoph; Lahrmann, Andreas
1988-06-01
To solve the transient heat conduction equation, the Pade approximation is introduced into the finite-difference method. If the time step is chosen too large relative to the element size, however, the Euler method and the Crank-Nicolson solution lead to significant oscillations. In contrast, the fully implicit scheme does not show this oscillatory behavior but is less accurate. Compared to these time-stepping algorithms, the weighted-time-step method presented here is seen to offer definite advantages.
Sparse matrix approximation method for an active optical control system
NASA Astrophysics Data System (ADS)
Murphy, Timothy P.; Lyon, Richard G.; Dorband, John E.; Hollis, Jan M.
2001-12-01
We develop a sparse matrix approximation method to decompose a wave front into a basis set of actuator influence functions for an active optical system consisting of a deformable mirror and a segmented primary mirror. The wave front used is constructed by Zernike polynomials to simulate the output of a phase-retrieval algorithm. Results of a Monte Carlo simulation of the optical control loop are compared with the standard, nonsparse approach in terms of accuracy and precision, as well as computational speed and memory. The sparse matrix approximation method can yield more than a 50-fold increase in speed, a 20-fold reduction in matrix size, and a commensurate decrease in required memory, with less than 10% degradation in solution accuracy. Our method is also shown to be better than when elements are selected for the sparse matrix on a magnitude basis alone. We show that the method developed is a viable alternative to use of the full control matrix in a phase-retrieval-based active optical control system.
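The accuracy-versus-sparsity trade-off being measured can be sketched on a toy problem. The sketch below uses a hypothetical 1D mirror with Gaussian influence functions and a plain magnitude cutoff to sparsify the influence matrix; note the abstract reports that the authors' selection rule beats such a magnitude-only cutoff, so this is only the baseline being compared against.

```python
import math

def gauss_solve(M, rhs):
    """Solve M x = rhs by Gaussian elimination with partial pivoting."""
    n = len(rhs)
    A = [row[:] + [v] for row, v in zip(M, rhs)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n + 1):
                A[i][j] -= f * A[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][j] * x[j]
                              for j in range(i + 1, n))) / A[i][i]
    return x

def lstsq(A, b):
    """Least squares via the normal equations (A^T A) x = A^T b."""
    m, n = len(A), len(A[0])
    ata = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    return gauss_solve(ata, atb)

# Toy 1D mirror: 9 Gaussian influence functions sampled at 40 pupil points.
ACTS = [i / 8 for i in range(9)]
PTS = [i / 39 for i in range(40)]

def infl(a, x):
    return math.exp(-((x - a) / 0.12) ** 2)

A_DENSE = [[infl(a, x) for a in ACTS] for x in PTS]
# Sparse approximation: zero the weakest entries (magnitude cutoff);
# roughly half of the matrix drops out.
A_SPARSE = [[v if v > 0.01 else 0.0 for v in row] for row in A_DENSE]

def decompose(wavefront, A):
    """Decompose a wave front into actuator commands using matrix A."""
    return lstsq(A, wavefront)
```

The test checks that commands recovered with the sparsified matrix still reproduce the wave front to within 10%, mirroring the kind of degradation figure quoted in the abstract.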
An Adaptive Derivative-based Method for Function Approximation
Tong, C
2008-10-22
To alleviate the high computational cost of large-scale multi-physics simulations used to study the relationships between model parameters and outputs of interest, response surfaces are often used in place of the exact functional relationships. This report explores a method for response surface construction using adaptive sampling guided by derivative information at each selected sample point. This method is especially suitable for applications that can readily provide added information such as gradients and Hessians with respect to the input parameters under study. When higher-order terms (third order and above) in the Taylor series are negligible, the approximation error for this method can be controlled. We present details of the adaptive algorithm and numerical results on a few test problems.
Finite amplitude method for the quasiparticle random-phase approximation
NASA Astrophysics Data System (ADS)
Avogadro, Paolo; Nakatsukasa, Takashi
2011-07-01
We present the finite amplitude method (FAM), originally proposed in Ref. , for superfluid systems. A Hartree-Fock-Bogoliubov code may be transformed into a code of the quasiparticle-random-phase approximation (QRPA) with simple modifications. This technique has advantages over the conventional QRPA calculations, such as coding feasibility and computational cost. We perform the fully self-consistent linear-response calculation for the spherical neutron-rich nucleus 174Sn, modifying the hfbrad code, to demonstrate the accuracy, feasibility, and usefulness of the FAM.
An approximate method of analysis for notched unidirectional composites
NASA Technical Reports Server (NTRS)
Zweben, C.
1974-01-01
An approximate method is proposed for the analysis of unidirectional, filamentary composite materials having slit notches perpendicular to the fibers and subjected to tension parallel to the fibers. The approach is based on an engineering model which incorporates important effects of material heterogeneity by considering average extensional stresses in the fibers and average shear stresses in the matrix. Effects of interfacial failure and matrix plasticity at the root of the notch are considered. Predictions of the analysis are in reasonably good agreement with previous analytical models and experimental data for graphite/epoxy.
Proton Form Factor Measurements Using Polarization Method: Beyond Born Approximation
Pentchev, Lubomir
2008-10-13
Significant theoretical and experimental efforts have been made over the past 7 years aiming to explain the discrepancy between the proton form factor ratio data obtained at JLab using the polarization method and the previous Rosenbluth measurements. Preliminary results from the first high precision polarization experiment dedicated to study effects beyond Born approximation will be presented. The ratio of the transferred polarization components and, separately, the longitudinal polarization in ep elastic scattering have been measured at a fixed Q{sup 2} of 2.5 GeV{sup 2} over a wide kinematic range. The two quantities impose constraints on the real part of the ep elastic amplitudes.
Optical properties of electrohydrodynamic convection patterns: rigorous and approximate methods
NASA Astrophysics Data System (ADS)
Bohley, Christian; Heuer, Jana; Stannarius, Ralf
2005-12-01
We analyze the optical behavior of two-dimensionally periodic structures that occur in electrohydrodynamic convection (EHC) patterns in nematic sandwich cells. These structures are anisotropic, locally uniaxial, and periodic on the scale of micrometers. For the first time, the optics of these structures is investigated with a rigorous method. The method used for the description of the electromagnetic waves interacting with EHC director patterns is a numerical approach that discretizes directly the Maxwell equations. It works as a space-grid-time-domain method and computes electric and magnetic fields in time steps. This so-called finite-difference-time-domain (FDTD) method is able to generate the fields with arbitrary accuracy. We compare this rigorous method with earlier attempts based on ray-tracing and analytical approximations. Results of optical studies of EHC structures made earlier based on ray-tracing methods are confirmed for thin cells, when the spatial periods of the pattern are sufficiently large. For the treatment of small-scale convection structures, the FDTD method is without alternatives.
Parabolic approximation method for the mode conversion-tunneling equation
Phillips, C.K.; Colestock, P.L.; Hwang, D.Q.; Swanson, D.G.
1987-07-01
The derivation of the wave equation which governs ICRF wave propagation, absorption, and mode conversion within the kinetic layer in tokamaks has been extended to include diffraction and focussing effects associated with the finite transverse dimensions of the incident wavefronts. The kinetic layer considered consists of a uniform density, uniform temperature slab model in which the equilibrium magnetic field is oriented in the z-direction and varies linearly in the x-direction. An equivalent dielectric tensor as well as a two-dimensional energy conservation equation are derived from the linearized Vlasov-Maxwell system of equations. The generalized form of the mode conversion-tunneling equation is then extracted from the Maxwell equations, using the parabolic approximation method in which transverse variations of the wave fields are assumed to be weak in comparison to the variations in the primary direction of propagation. Methods of solving the generalized wave equation are discussed. 16 refs.
Approximation method to compute domain related integrals in structural studies
NASA Astrophysics Data System (ADS)
Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.
2015-11-01
Various engineering calculations use integral calculus in theoretical models, i.e. analytical and numerical models. For usual problems, integrals have exact mathematical solutions. If the domain of integration is complicated, several methods may be used to calculate the integral. The first idea is to divide the domain into smaller sub-domains for which there are direct calculus relations, i.e. in strength of materials the bending moment may be computed at some discrete points using graphical integration of the shear force diagram, which usually has a simple shape. Another example is in mathematics, where the area of a subgraph may be approximated by a set of rectangles or trapezoids used to calculate the definite integral. The goal of this work is to present our studies on the calculus of integrals over transverse section domains, computer-aided solutions, and a generalizing method. The aim of our research is to create general computer-based methods to carry out the calculations in structural studies. Thus, we define a Boolean algebra which operates with 'simple' shape domains. This algebraic standpoint uses addition and subtraction, conditioned by the sign of every 'simple' shape (-1 for the shapes to be subtracted). By 'simple' or 'basic' shape we mean either shapes for which there are direct calculus relations, or domains whose frontiers are approximated by known functions, the corresponding calculus being carried out by an algorithm. The 'basic' shapes are linked to the calculus of the most significant stresses in the section, a refined aspect which needs special attention. Starting from this idea, the libraries of 'basic' shapes include rectangles, ellipses, and domains whose frontiers are approximated by spline functions. The domain triangularization methods suggested that another 'basic' shape to be considered is the triangle. The subsequent phase was to deduce the exact relations for the
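The signed-composition idea can be sketched for standard section properties: a composite domain is a signed sum of 'basic' shapes, with sign -1 for holes, and the parallel-axis theorem transfers each shape's own second moment to the composite centroid. The shape library and the example section below are our own minimal choices.

```python
import math

class Shape:
    """A 'basic' shape: area, centroid height, own-axis second moment,
    with sign = -1 for subtracted (hole) domains."""
    def __init__(self, area, yc, i_own, sign=1):
        self.area, self.yc, self.i_own, self.sign = area, yc, i_own, sign

def rect(b, h, yc, sign=1):
    return Shape(b * h, yc, b * h ** 3 / 12, sign)

def circle(r, yc, sign=1):
    return Shape(math.pi * r * r, yc, math.pi * r ** 4 / 4, sign)

def section_properties(shapes):
    """Signed composition: net area, centroid height, and second moment
    about the composite centroid (parallel-axis theorem per shape)."""
    area = sum(s.sign * s.area for s in shapes)
    yc = sum(s.sign * s.area * s.yc for s in shapes) / area
    inertia = sum(s.sign * (s.i_own + s.area * (s.yc - yc) ** 2)
                  for s in shapes)
    return area, yc, inertia
```

For a 4 x 6 rectangle with a concentric circular hole of radius 1, the exact values (24 - pi, centroid at mid-height, 72 - pi/4) drop straight out of the signed sum.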
Atomistic Modeling of Nanostructures via the BFS Quantum Approximate Method
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Garces, Jorge E.; Noebe, Ronald D.; Farias, D.
2003-01-01
Ideally, computational modeling techniques for nanoscopic physics would be able to perform free of limitations on the type and number of elements, while providing comparable accuracy when dealing with bulk or surface problems. Computational efficiency is also desirable, if not mandatory, for properly dealing with the complexity of typical nano-structured systems. A quantum approximate technique, the BFS method for alloys, which attempts to meet these demands, is introduced for the calculation of the energetics of nanostructures. The versatility of the technique is demonstrated through analysis of diverse systems, including multi-phase precipitation in a five-element Ni-Al-Ti-Cr-Cu alloy and the formation of mixed-composition Co-Cu islands on a metallic Cu(111) substrate.
Multivariate approximation methods and applications to geophysics and geodesy
NASA Technical Reports Server (NTRS)
Munteanu, M. J.
1979-01-01
This is the first report in a planned series treating a class of approximation methods for functions of one and several variables and ways of applying them to geophysics and geodesy. The report is divided into three parts and is devoted to the presentation of the mathematical theory and formulas. Various optimal ways of representing functions of one and several variables, and the associated errors when information about the function is available (such as satellite data of different kinds), are discussed. The framework chosen is Hilbert spaces. Experiments were performed on satellite altimeter data and on satellite-to-satellite tracking data.
An Approximate Matching Method for Clinical Drug Names
Peters, Lee; Kapusnik-Uner, Joan E.; Nguyen, Thang; Bodenreider, Olivier
2011-01-01
Objective: To develop an approximate matching method for finding the closest drug names within existing RxNorm content for drug name variants found in local drug formularies. Methods: We used a drug-centric algorithm to determine the closest strings between the RxNorm data set and local variants which failed the exact and normalized string matching searches. Aggressive measures such as token splitting, drug name expansion and spelling correction are used to try to resolve drug names. The algorithm is evaluated against three sets containing a total of 17,164 drug name variants. Results: Mapping of the local variant drug names to the targeted concept descriptions ranged from 83.8% to 92.8% in the three test sets. The algorithm identified the appropriate RxNorm concepts as the top candidate in 76.8%, 67.9% and 84.8% of the cases in the three test sets, and among the top three candidates in 90-96% of the cases. Conclusion: Using a drug-centric token matching approach with aggressive measures to resolve unknown names provides effective mappings to clinical drug names and has the potential to facilitate the work of drug terminology experts in mapping local formularies to reference terminologies. PMID:22195172
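The flavor of token splitting plus approximate string matching can be sketched with the standard library. The scoring mix below (token-set overlap blended with a difflib similarity ratio) is hypothetical and much cruder than the published drug-centric algorithm; it only illustrates the kind of normalization and ranking involved.

```python
import difflib
import re

def normalize(name):
    """Lower-case, split fused tokens like '500mg', strip punctuation."""
    name = re.sub(r"(\d+)([a-z]+)", r"\1 \2", name.lower())
    return re.sub(r"[^a-z0-9 ]", " ", name).split()

def best_matches(variant, formulary, n=3):
    """Rank formulary entries for a local drug-name variant by a mix of
    token-set overlap and string similarity (a hypothetical scoring mix,
    not the published drug-centric algorithm)."""
    v = set(normalize(variant))
    scored = []
    for cand in formulary:
        c = set(normalize(cand))
        overlap = len(v & c) / max(len(v | c), 1)
        ratio = difflib.SequenceMatcher(
            None, " ".join(sorted(v)), " ".join(sorted(c))).ratio()
        scored.append((0.5 * overlap + 0.5 * ratio, cand))
    return [cand for _, cand in sorted(scored, reverse=True)[:n]]
```

Returning the top few candidates rather than a single answer matches how the abstract evaluates the method (top candidate versus top three).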
Uncertainty analysis using corrected first-order approximation method
NASA Astrophysics Data System (ADS)
Tyagi, Aditya; Haan, C. T.
2001-06-01
Application of uncertainty and reliability analysis is an essential part of many problems related to modeling and decision making in environmental and water resources engineering. Computational efficiency, understandability, and ease of application have made the first-order approximation (FOA) method a favored tool for uncertainty analysis. In many instances, situations may arise where the accuracy of FOA estimates becomes questionable. The FOA application is often considered acceptable if the coefficient of variation of the uncertain parameter(s) is <0.2, but this criterion is not correct in all situations. Analytical as well as graphical relations for the relative error are developed and presented for a generic power function; these can be used as a guide for judging the suitability of the FOA for a specified acceptable error of estimation. Further, these analytical and graphical relations enable FOA estimates of the means and variances of model components to be corrected to their true values. Using these corrected means and variances, one can determine the exact values of the mean and variance of an output random variable. This technique is applicable when an output variable is a function of several independent random variables in multiplicative, additive, or combined (multiplicative and additive) forms. Two examples are given to demonstrate the application of the technique.
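The point that the cv < 0.2 rule of thumb can fail is easy to reproduce for a generic power function Y = a*X^b. The sketch below computes the FOA mean and variance from a Taylor expansion, a second-order corrected mean (the kind of correction the method formalizes; the specific form here is the standard 0.5*g''(mu)*sigma^2 term, our choice), and a Monte Carlo reference; the parameter values are illustrative.

```python
import math
import random

def foa_stats(a, b, mu, sigma):
    """First-order approximation for Y = a*X^b: mean and variance from a
    Taylor expansion of Y about mu."""
    mean = a * mu ** b
    var = (a * b * mu ** (b - 1)) ** 2 * sigma ** 2
    return mean, var

def corrected_mean(a, b, mu, sigma):
    """Second-order correction 0.5*g''(mu)*sigma^2 added to the FOA mean."""
    return a * mu ** b + 0.5 * a * b * (b - 1) * mu ** (b - 2) * sigma ** 2

def mc_mean(a, b, mu, sigma, n=200000, seed=11):
    """Monte Carlo reference for E[Y] with X ~ N(mu, sigma)."""
    rng = random.Random(seed)
    tot = 0.0
    for _ in range(n):
        x = rng.gauss(mu, sigma)
        tot += a * abs(x) ** b    # abs() guards the rare negative draw
    return tot / n
```

With a = 1, b = 3, mu = 10, sigma = 2 (cv exactly 0.2), the FOA mean of 1000 misses the exact value 1120 by about 11%, while the corrected mean recovers it; the test checks both facts against the Monte Carlo estimate.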
Communication: Improved pair approximations in local coupled-cluster methods
Schwilk, Max; Werner, Hans-Joachim; Usvyat, Denis
2015-03-28
In local coupled-cluster treatments, the electron pairs can be classified, according to the magnitude of their energy contributions or their distances, into strong, close, weak, and distant pairs. Different approximations are introduced for the latter three classes. In this communication, an improved simplified treatment of close and weak pairs is proposed, which is based on long-range cancellations of individually slowly decaying contributions in the amplitude equations. Benchmark calculations for correlation, reaction, and activation energies demonstrate that these approximations work extremely well, while pair approximations based on local second-order Møller-Plesset theory can lead to errors that are 1-2 orders of magnitude larger.
A comparison of computational methods and algorithms for the complex gamma function
NASA Technical Reports Server (NTRS)
Ng, E. W.
1974-01-01
A survey and comparison of some computational methods and algorithms for gamma and log-gamma functions of complex arguments are presented. Methods and algorithms reported include Chebyshev approximations, the Pade expansion, and Stirling's asymptotic series. The comparison leads to the conclusion that Algorithm 421, published in the Communications of the ACM by H. Kuki, is the best program either for individual application or for inclusion in subroutine libraries.
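For a modern point of comparison, a compact complex gamma routine can be written with the Lanczos approximation (not one of the surveyed algorithms, and not Kuki's Algorithm 421; the g = 7 coefficients below are the widely published ones). The reflection formula handles the left half-plane.

```python
import cmath
import math

# Widely published Lanczos coefficients (g = 7, n = 9)
_G = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    """Complex gamma via the Lanczos approximation, using the reflection
    formula for Re(z) < 0.5."""
    if z.real < 0.5:
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1 - z))
    z -= 1
    x = _G[0]
    for i in range(1, 9):
        x += _G[i] / (z + i)
    t = z + 7.5
    return cmath.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x
```

Accuracy is around machine precision over most of the plane, which is comparable to the library-grade routines the survey evaluates.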
Importance Sampling Approach for the Nonstationary Approximation Error Method
NASA Astrophysics Data System (ADS)
Huttunen, J. M. J.; Lehikoinen, A.; Hämäläinen, J.; Kaipio, J. P.
2010-09-01
The approximation error approach has been proposed earlier to handle modelling, numerical and computational errors in inverse problems. The idea of the approach is to include the errors in the forward model and compute the approximate statistics of the errors using Monte Carlo sampling. This can be a computationally tedious task, but the key property of the approach is that the approximate statistics can be calculated off-line before the measurement process takes place. In nonstationary problems, however, information is accumulated over time, and the initial uncertainties may turn out to have been exaggerated. In this paper, we propose an importance weighting algorithm with which the approximation error statistics can be updated during the accumulation of measurement information. As a computational example, we study an estimation problem related to a convection-diffusion problem in which the velocity field is not accurately specified.
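The core trick (update precomputed error statistics by reweighting the off-line sample instead of re-running the Monte Carlo) can be sketched in one dimension. The Gaussian prior and updated densities below are hypothetical stand-ins for the paper's error models.

```python
import math
import random

def gauss_pdf(x, mu, sig):
    return math.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * math.sqrt(2 * math.pi))

def weighted_stats(samples, weights):
    """Self-normalized importance-weighted mean and variance."""
    tot = sum(weights)
    mean = sum(w * s for w, s in zip(weights, samples)) / tot
    var = sum(w * (s - mean) ** 2 for w, s in zip(weights, samples)) / tot
    return mean, var

# Off-line phase: sample approximation errors under the initial model
rng = random.Random(5)
PRIOR = [rng.gauss(0.0, 2.0) for _ in range(20000)]

def updated_error_stats(mu_new, sig_new):
    """On-line phase: re-weight the stored off-line sample toward the
    updated error model instead of drawing a fresh Monte Carlo sample."""
    w = [gauss_pdf(s, mu_new, sig_new) / gauss_pdf(s, 0.0, 2.0)
         for s in PRIOR]
    return weighted_stats(PRIOR, w)
```

Because the prior sample is broader than the updated density, the weights stay bounded and the reweighted statistics are accurate, which is what the test checks.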
A method of approximating range size of small mammals
Stickel, L.F.
1965-01-01
In summary, trap success trends appear to provide a useful approximation to range size of easily trapped small mammals such as Peromyscus. The scale of measurement can be adjusted as desired. Further explorations of the usefulness of the plan should be made and modifications possibly developed before adoption.
Stochastic Approximation Methods for Latent Regression Item Response Models
ERIC Educational Resources Information Center
von Davier, Matthias; Sinharay, Sandip
2010-01-01
This article presents an application of a stochastic approximation expectation maximization (EM) algorithm using a Metropolis-Hastings (MH) sampler to estimate the parameters of an item response latent regression model. Latent regression item response models are extensions of item response theory (IRT) to a latent variable model with covariates…
Approximate Green's function methods for HZE transport in multilayered materials
NASA Technical Reports Server (NTRS)
Wilson, John W.; Badavi, Francis F.; Shinn, Judy L.; Costen, Robert C.
1993-01-01
A nonperturbative analytic solution of the high charge and energy (HZE) Green's function is used to implement a computer code for laboratory ion beam transport in multilayered materials. The code is established to operate on the Langley nuclear fragmentation model used in engineering applications. Computational procedures are established to generate linear energy transfer (LET) distributions for a specified ion beam and target for comparison with experimental measurements. The code was found to be highly efficient and compared well with the perturbation approximation.
Approximation of the transport equation by a weighted particle method
Mas-Gallic, S.; Poupaud, F.
1988-08-01
We study a particle method for numerically solving a model equation for neutron transport. We present the method and develop the theoretical convergence analysis. We prove the stability and convergence of the method in L^∞. Some computational test results are given.
A novel method to approximate structured stability radii
NASA Astrophysics Data System (ADS)
Guglielmi, Nicola; Manetta, Manuela
2013-10-01
The unstructured stability radius of a Hurwitz matrix A (i.e. a matrix whose eigenvalues have strictly negative real part) is the smallest norm of a complex perturbation E such that A + E is not Hurwitz, which means it has at least one eigenvalue with zero real part. Such a measure is a more robust stability indicator than the spectral abscissa and is much studied in the literature. However, when the matrix A has a structure (for example, the matrix is real or has a prescribed sparsity pattern), it is more meaningful to look for the smallest destabilizing perturbation E with the same structure. This problem turns out to be more difficult and in some cases unresolved. We propose here a new methodology to compute approximations of structured stability radii, focusing our attention on real and pattern-structured stability radii.
SET: a pupil detection method using sinusoidal approximation.
Javadi, Amir-Homayoun; Hakimi, Zahra; Barati, Morteza; Walsh, Vincent; Tcheang, Lili
2015-01-01
Mobile eye-tracking in external environments remains challenging, despite recent advances in eye-tracking software and hardware engineering. Many current methods fail to deal with the vast range of outdoor lighting conditions and the speed at which these can change. This confines experiments to artificial environments where conditions must be tightly controlled. Additionally, the emergence of low-cost eye tracking devices calls for the development of analysis tools that enable non-technical researchers to process the output of their images. We have developed a fast and accurate method (known as "SET") that is suitable even for natural environments with uncontrolled, dynamic and even extreme lighting conditions. We compared the performance of SET with that of two open-source alternatives by processing two collections of eye images: images of natural outdoor scenes with extreme lighting variations ("Natural"); and images of less challenging indoor scenes ("CASIA-Iris-Thousand"). We show that SET excelled in outdoor conditions and was faster, without significant loss of accuracy, indoors. SET offers a low cost eye-tracking solution, delivering high performance even in challenging outdoor environments. It is offered through an open-source MATLAB toolkit as well as a dynamic-link library ("DLL"), which can be imported into many programming languages including C# and Visual Basic in Windows OS (www.eyegoeyetracker.co.uk). PMID:25914641
Lubrication approximation in completed double layer boundary element method
NASA Astrophysics Data System (ADS)
Nasseri, S.; Phan-Thien, N.; Fan, X.-J.
This paper reports the results of numerical simulations of the motion of solid spherical particles in shear Stokes flows. Using the completed double layer boundary element method (CDLBEM) via distributed computing under Parallel Virtual Machine (PVM), the effective viscosity of a suspension has been calculated for a finite number of spheres in a cubic array, or in a random configuration. In the simulation presented here, short-range interactions via lubrication forces are also taken into account, via the range completer in the formulation, whenever the gap between two neighbouring particles is closer than a critical gap. The results for particles in a simple cubic array agree with the results of Nunan and Keller (1984) and the Stokesian Dynamics of Brady et al. (1988). To evaluate the lubrication forces between particles in a random configuration, a critical gap of 0.2 of the particle radius is suggested, and the results are tested against the experimental data of Thomas (1965) and the empirical Krieger-Dougherty equation (Krieger, 1972). Finally, quasi-steady trajectories are obtained for a time-varying configuration of 125 particles.
NASA Astrophysics Data System (ADS)
Pospelov, A. I.
2016-08-01
Adaptive methods for the polyhedral approximation of the convex Edgeworth-Pareto hull in multiobjective monotone integer optimization problems are proposed and studied. For these methods, theoretical convergence rate estimates with respect to the number of vertices are obtained. The estimates coincide in order with those for filling and augmentation H-methods intended for the approximation of nonsmooth convex compact bodies.
Assessment of presentation methods for ReFace computerized facial approximations.
Richard, Adam H; Parks, Connie L; Monson, Keith L
2014-09-01
Facial approximations (whether clay sculptures, sketches, or computer-generated) can be presented to the public in a variety of layouts, but there are currently no clear indicators as to what style of presentation is most effective at eliciting recognition. The primary purpose of this study is to determine which of five presentation methods produces the most favorable recognition results. A secondary goal of the research is to evaluate a new method for assessing the accuracy of facial approximations. Previous studies have evaluated facial approximation effectiveness using standards similar to studies of eyewitness identification in which a single, definitive choice must be made by the research participant. These criteria seem inappropriate given that facial approximation is strictly an investigative tool to help narrow the search for potential matching candidates in the process of identification. Results from the study showed a higher performance for methods utilizing more than one image of the approximation, but which specific method performed best varied among approximation subjects. Also, results for all five presentation methods showed that, when given the opportunity to select more than one approximation, participants were consistently better at identifying the correct approximation as one of a few possible matches to the missing person than they were at singling out the correct approximation. This suggests that facial approximations have perhaps been undervalued as investigative tools in previous research. PMID:25128751
The complex variable boundary element method: Applications in determining approximative boundaries
Hromadka, T.V.
1984-01-01
The complex variable boundary element method (CVBEM) is used to determine approximation functions for boundary value problems of the Laplace equation such as occurs in potential theory. By determining an approximative boundary upon which the CVBEM approximator matches the desired constant (level curves) boundary conditions, the CVBEM is found to provide the exact solution throughout the interior of the transformed problem domain. Thus, the acceptability of the CVBEM approximation is determined by the closeness-of-fit of the approximative boundary to the study problem boundary. © 1984.
An approximate method for sonic fatigue analysis of plates and shells
NASA Astrophysics Data System (ADS)
Blevins, R. D.
1989-02-01
Approximate analytical methods are developed for determining the response of plate and shell structures to coherent sound fields. The methods are based on separating the spatial and temporal aspects of the problem and then developing approximations for both. Direct comparison is made with experimental data.
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Hornby, Gregory; Ishihara, Abe
2013-01-01
This paper describes two methods of trajectory optimization to obtain an optimal minimum-fuel-to-climb trajectory for an aircraft. The first method is based on the adjoint method, and the second is a direct trajectory optimization method using a Chebyshev polynomial approximation and a cubic spline approximation. The approximate optimal trajectory is compared with the adjoint-based optimal trajectory, which is considered the true optimal solution of the trajectory optimization problem. The adjoint-based optimization problem leads to a singular optimal control solution which results in a bang-singular-bang optimal control.
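The direct approach described above rests on fitting a polynomial to sampled trajectory quantities; a minimal numpy sketch of a Chebyshev fit (a made-up climb profile, not the paper's aircraft model):

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Toy state history h(t) standing in for a trajectory quantity
# (illustrative data only, not the paper's aircraft dynamics).
t = np.linspace(0.0, 1.0, 50)              # normalized time
h = 1.0 - np.cos(0.5 * np.pi * t)          # sampled state values

# Least-squares Chebyshev polynomial approximation of the samples.
cheb = Chebyshev.fit(t, h, deg=8)

# Evaluate on a fine grid and check the approximation error.
t_fine = np.linspace(0.0, 1.0, 500)
err = np.max(np.abs(cheb(t_fine) - (1.0 - np.cos(0.5 * np.pi * t_fine))))
```

For a smooth profile like this one the degree-8 Chebyshev fit is accurate to many digits, which is why such approximations are attractive for direct trajectory optimization.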
Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method
NASA Technical Reports Server (NTRS)
Kowal, Michael T.
1997-01-01
The development of new products is dependent on product designs that incorporate high levels of reliability along with a design that meets predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between the variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability-of-failure sensitivities determined from probabilistic reliability methods as well as manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.
NASA Technical Reports Server (NTRS)
Fadel, G. M.
1991-01-01
The point exponential approximation method was introduced by Fadel et al. (Fadel, 1990), and tested on structural optimization problems with stress and displacement constraints. The reports in earlier papers were promising, and the method, which consists of correcting Taylor series approximations using previous design history, is tested in this paper on optimization problems with frequency constraints. The aim of the research is to verify the robustness and speed of convergence of the two point exponential approximation method when highly non-linear constraints are used.
Extension of the weak-line approximation and application to correlated-k methods
Conley, A.J.; Collins, W.D.
2011-03-15
Global climate models require accurate and rapid computation of the radiative transfer through the atmosphere, and correlated-k methods are often used. One of the approximations used in correlated-k models is the weak-line approximation. We introduce an approximation T_g which reduces to the weak-line limit when optical depths are small and captures the departure from that limit as the extinction grows. This approximation is constructed by matching the first two moments of the gamma distribution to the k-distribution of the transmission. We compare the errors of the weak-line approximation with T_g in the context of a water vapor spectrum. The extension T_g is more accurate and converges more rapidly than the weak-line approximation.
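The moment-matching idea can be sketched numerically. The sketch below uses a made-up discrete k-distribution (not a real water-vapor spectrum); the closed form for T_g follows from the Laplace transform of the gamma density, E[e^{-uk}] = (1 + θu)^{-a}:

```python
import numpy as np

# Hypothetical k-distribution: absorption coefficients k with weights w
# (illustrative values only).
k = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
w = np.array([0.4, 0.3, 0.15, 0.1, 0.05])

mean = np.sum(w * k)
var = np.sum(w * (k - mean) ** 2)

# Gamma distribution matched to the first two moments of the k-distribution.
a = mean ** 2 / var        # shape parameter
theta = var / mean         # scale parameter

u = np.linspace(0.0, 10.0, 101)                       # absorber amounts
T_exact = np.array([np.sum(w * np.exp(-k * ui)) for ui in u])
T_g = (1.0 + theta * u) ** (-a)                       # gamma transmission
T_weak = np.exp(-mean * u)                            # weak-line limit
```

At small u both T_g and the weak-line limit agree with the exact transmission, while at larger absorber amounts T_g tracks the exact curve far more closely, which is the behavior the abstract describes.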
Test particle propagation in magnetostatic turbulence. 2: The local approximation method
NASA Technical Reports Server (NTRS)
Klimas, A. J.; Sandri, G.; Scudder, J. D.; Howell, D. R.
1976-01-01
An approximation method for statistical mechanics is presented and applied to a class of problems which contains a test particle propagation problem. All of the available basic equations used in statistical mechanics are cast in the form of a single equation which is integrodifferential in time and which is then used as the starting point for the construction of the local approximation method. Simplification of the integrodifferential equation is achieved through approximation to the Laplace transform of its kernel. The approximation is valid near the origin in the Laplace space and is based on the assumption of small Laplace variable. No other small parameter is necessary for the construction of this approximation method. The n'th level of approximation is constructed formally, and the first five levels of approximation are calculated explicitly. It is shown that each level of approximation is governed by an inhomogeneous partial differential equation in time with time independent operator coefficients. The order in time of these partial differential equations is found to increase as n does. At n = 0 the most local first order partial differential equation which governs the Markovian limit is regained.
Efficiency of the estimate refinement method for polyhedral approximation of multidimensional balls
NASA Astrophysics Data System (ADS)
Kamenev, G. K.
2016-05-01
The estimate refinement method for the polyhedral approximation of convex compact bodies is analyzed. When applied to convex bodies with a smooth boundary, this method is known to generate polytopes with an optimal order of growth of the number of vertices and facets depending on the approximation error. In previous studies, for the approximation of a multidimensional ball, the convergence rates of the method were estimated in terms of the number of faces of all dimensions and the cardinality of the facial structure (the norm of the f-vector) of the constructed polytope was shown to have an optimal rate of growth. In this paper, the asymptotic convergence rate of the method with respect to faces of all dimensions is compared with the convergence rate of best approximation polytopes. Explicit expressions are obtained for the asymptotic efficiency, including the case of low dimensions. Theoretical estimates are compared with numerical results.
Approximate methods for predicting interlaminar shear stiffness of laminated and sandwich beams
NASA Astrophysics Data System (ADS)
Roy, Ajit K.; Verchery, Georges
1993-01-01
Several approximate closed form expressions exist in the literature for predicting the effective interlaminar shear stiffness (G13) of laminated composite beams. The accuracy of these approximate methods depends on the number of layers present in the laminated beam, the relative layer thickness and layer stacking sequence, and the beam length to depth ratio. The objective of this work is to evaluate approximate methods for predicting G13 by comparing its predictions with that of an accurate method, and then find the range where the simple closed form expressions for predicting G13 can be applicable. A comparative study indicates that all the approximate methods included here give good prediction of G13 when the laminate is made of a large number of repeated sublaminates. Further, the parabolic shear stress distribution function yields a reasonably accurate prediction of G13 even for a relatively small number of layers in the laminate. A similar result is also presented for sandwich beams.
Evaluation of the successive approximations method for acoustic streaming numerical simulations.
Catarino, S O; Minas, G; Miranda, J M
2016-05-01
This work evaluates the successive approximations method commonly used to predict acoustic streaming by comparing it with a direct method. The successive approximations method solves both the acoustic wave propagation and acoustic streaming by solving the first and second order Navier-Stokes equations, ignoring the first order convective effects. This method was applied to acoustic streaming in a 2D domain and the results were compared with results from the direct simulation of the Navier-Stokes equations. The velocity results showed qualitative agreement between both methods, which indicates that the successive approximations method can describe the formation of flows with recirculation. However, a large quantitative deviation was observed between the two methods. Further analysis showed that the successive approximations method solution is sensitive to the initial flow field. The direct method showed that the instantaneous flow field changes significantly due to reflections and wave interference. It was also found that convective effects contribute significantly to the wave propagation pattern. These effects must be taken into account when solving acoustic streaming problems, since they affect the global flow. By adequately calculating the initial condition for the first order step, the acoustic streaming prediction by the successive approximations method can be improved significantly. PMID:27250122
NASA Astrophysics Data System (ADS)
Loke, V. L. Y.; Huda, G. M.; Donev, E. U.; Schmidt, V.; Hastings, J. T.; Mengüç, M. Pinar; Wriedt, T.
2014-05-01
We investigate the plasmonic response of gold nanospheres calculated using the discrete dipole approximation, validated against results from other discretization methods, namely the finite-difference time-domain method and the finite-element method. Comparisons are also made with calculations from analytical methods such as the Mie solution and the null-field method with discrete sources. We consider the nanoparticle interacting with the incident field both in free space and sitting on a planar substrate. In the latter case, the discrete dipole approximation with surface interaction is used; this includes the interaction with the 'image dipoles' using Sommerfeld integration.
Căruntu, Bogdan
2014-01-01
The paper presents the optimal homotopy perturbation method, a new method for finding approximate analytical solutions of nonlinear partial differential equations. Based on the well-known homotopy perturbation method, the optimal homotopy perturbation method exhibits accelerated convergence compared to the regular homotopy perturbation method. The applications presented emphasize the high accuracy of the method by means of a comparison with previous results. PMID:25003150
Using the Bollen-Stine Bootstrapping Method for Evaluating Approximate Fit Indices
Kim, Hanjoe; Millsap, Roger
2014-01-01
Accepting that a model will not exactly fit any empirical data, global approximate fit indices quantify the degree of misfit. Recent research (Chen et al., 2008) has shown that using fixed conventional cut-points for approximate fit indices can lead to decision errors. Instead of using fixed cut-points for evaluating approximate fit indices, this study focuses on the meaning of approximate fit and introduces a new method to evaluate approximate fit indices. Millsap (2012) introduced a simulation-based method to evaluate approximate fit indices. A limitation of Millsap’s work was that a rather strong assumption of multivariate normality was implied in generating simulation data. In this study, the Bollen-Stine bootstrapping procedure (Bollen & Stine, 1993) is proposed to supplement the former study. When data are non-normal, the conclusions derived from Millsap’s (2012) simulation method and the Bollen-Stine method can differ. Examples are given to illustrate the use of the Bollen-Stine bootstrapping procedure for evaluating RMSEA. Comparisons are made with the simulation method. The results are discussed, and suggestions are given for the use of the proposed method. PMID:25558095
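The heart of the Bollen-Stine procedure is a data transformation that makes the model-implied covariance hold exactly in the bootstrap population. A minimal numpy/scipy sketch of that transformation (with an arbitrary made-up model-implied matrix Sigma, not an estimated SEM model):

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)

# Centered raw data Y (200 cases, 3 variables; synthetic).
Y = rng.normal(size=(200, 3))
Y -= Y.mean(axis=0)

S = Y.T @ Y / len(Y)                      # sample covariance
Sigma = np.array([[1.0, 0.5, 0.25],       # hypothetical model-implied
                  [0.5, 1.0, 0.5],        # covariance matrix
                  [0.25, 0.5, 1.0]])

# Bollen-Stine transformation Z = Y S^{-1/2} Sigma^{1/2}: the transformed
# data have covariance exactly Sigma, so bootstrap resamples come from a
# population in which the model holds.
Z = Y @ np.real(np.linalg.inv(sqrtm(S))) @ np.real(sqrtm(Sigma))
```

Bootstrapping fit statistics from resamples of Z (rather than Y) gives their distribution under the hypothesis of approximate fit, which is what the evaluation of RMSEA in the abstract relies on.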
NASA Astrophysics Data System (ADS)
Bota, C.; Cǎruntu, B.; Bundǎu, O.
2013-10-01
In this paper we apply the Squared Remainder Minimization Method (SRMM) to find analytic approximate polynomial solutions of Riccati differential equations. Two examples are included to demonstrate the validity and applicability of the method. The results are compared to those obtained by other methods.
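The squared-remainder idea can be sketched for a textbook Riccati equation: choose a polynomial ansatz and minimize the squared ODE residual over sample points. This is our own example and discretization, not the authors' exact SRMM formulation:

```python
import numpy as np
from scipy.optimize import least_squares

# Riccati test problem: y'(t) = 1 - y(t)^2, y(0) = 0, exact solution tanh(t).
# Ansatz y(t) = sum_{k=1}^{5} a_k t^k (a_0 = 0 enforces the initial condition).
ts = np.linspace(0.0, 1.0, 40)

def residual(a):
    p = np.concatenate(([0.0], a))                       # coeffs a_0..a_5
    y = np.polynomial.polynomial.polyval(ts, p)
    dy = np.polynomial.polynomial.polyval(
        ts, np.polynomial.polynomial.polyder(p))
    return dy - (1.0 - y ** 2)                           # ODE residual

sol = least_squares(residual, np.zeros(5))               # minimize sum R^2
coef = np.concatenate(([0.0], sol.x))
err = np.max(np.abs(np.polynomial.polynomial.polyval(ts, coef) - np.tanh(ts)))
```

Minimizing the squared remainder of the equation, rather than fitting the (unknown) solution directly, is what makes this family of methods usable when no exact solution is available.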
NASA Astrophysics Data System (ADS)
Wang, Li; Sun, Xiaogang; Xing, Jian
2012-04-01
Retrieval of spheroidal particle size distributions using an approximate method in the spectral extinction technique is proposed. The combined approximate method, a combination of the Mie method and the generalized eikonal approximation (GEA) method, is used as an alternative to the rigorous solutions to calculate the average extinction efficiency of a spheroid. Based on the average extinction efficiency, the accuracy and limitations of the retrieval are then investigated. Moreover, the validity range and the effect of the refractive index are also examined. Johnson's SB function is used as a versatile function to fit the commonly used particle size distribution functions in the dependent model. Simulations and experimental results show that the combined approximate method can be successfully applied to the retrieval of spheroidal particle size distributions. Under certain constraint conditions, the retrieval results demonstrate the high reliability and stability of the method. By using the combined approximate method, the complexity and computation time of the retrieval are significantly reduced, which makes it more suitable for quick and easy measurement. The method can also be used as a replacement when the rigorous solutions suffer computationally intractable difficulties.
Analytical approximate solution of the cooling problem by Adomian decomposition method
NASA Astrophysics Data System (ADS)
Alizadeh, Ebrahim; Sedighi, Kurosh; Farhadi, Mousa; Ebrahimi-Kebria, H. R.
2009-02-01
The Adomian decomposition method (ADM) can provide analytical approximations or approximate solutions to a rather wide class of nonlinear (and stochastic) equations without linearization, perturbation, closure approximations, or discretization methods. In the present work, ADM is employed to solve the momentum and energy equations for laminar boundary layer flow over a flat plate at zero incidence, neglecting frictional heating. A trial and error strategy has been used to obtain the constant coefficient in the approximate solution. ADM provides an analytical solution in the form of an infinite power series. The effect of the number of Adomian polynomial terms is considered, and the accuracy of the results is shown to increase with the number of terms retained. The velocity and thermal profiles in the boundary layer are calculated, and the effect of the Prandtl number on the thermal boundary layer is obtained. Results show that ADM can solve the nonlinear differential equations with negligible error compared to the exact solution.
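For a linear test problem the decomposition series can be generated in a few lines. The sketch below uses our own illustrative ODE, y' = -y with y(0) = 1 (exact solution e^{-t}); the nonlinear boundary-layer equations of the paper would additionally require Adomian polynomials for the nonlinear terms:

```python
import math
import sympy as sp

t = sp.symbols('t')

# Adomian decomposition for y' = -y, y(0) = 1:
#   u_0 = y(0) = 1,  u_{n+1} = -∫_0^t u_n dτ,  y ≈ Σ u_n.
u = sp.Integer(1)
series = u
for _ in range(10):
    u = -sp.integrate(u, (t, 0, t))
    series += u

y_approx = sp.lambdify(t, series)
err = abs(y_approx(1.0) - math.exp(-1.0))
```

The eleven-term series reproduces e^{-t} on [0, 1] to better than 1e-7, illustrating the rapid convergence that the abstract reports for the boundary-layer problem.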
Development of Approximate Methods for the Analysis of Patch Damping Design Concepts
NASA Astrophysics Data System (ADS)
Kung, S.-W.; Singh, R.
1999-02-01
This paper develops three approximate methods for the analysis of patch damping designs. Undamped natural frequencies and modal loss factors are calculated using the Rayleigh energy method and the modal strain energy technique, respectively, without explicitly solving high order differential equations or complex eigenvalue problems. Approximate Method I is developed for sandwich beams assuming that damped mode shapes are given by the Euler beam eigenfunctions. The superposition principle is then used to accommodate any arbitrary mode shape, which may be obtained from modal experiments or the finite element method. In Method II, the formulation is further simplified with the assumption of a very compliant viscoelastic core. Finally, Method III considers a compact patch problem. The modal loss factor is then expressed as a product of terms related to material properties, layer thickness, patch size and patch performance. Approximate Methods II and III are also extended to rectangular plates. Formulations are verified by conducting analogous modal measurements and by comparing predictions with those obtained using the Rayleigh-Ritz method (without making any of the above mentioned assumptions). Several example cases are presented to demonstrate the validity and utility of the approximate methods for patch damping design concepts.
İbiş, Birol
2014-01-01
This paper aims to obtain the approximate solution of time-fractional advection-dispersion equation (FADE) involving Jumarie's modification of Riemann-Liouville derivative by the fractional variational iteration method (FVIM). FVIM provides an analytical approximate solution in the form of a convergent series. Some examples are given and the results indicate that the FVIM is of high accuracy, more efficient, and more convenient for solving time FADEs. PMID:24578662
Archibald, Richard K; Deiterding, Ralf; Hauck, Cory D; Jakeman, John D; Xiu, Dongbin
2012-01-01
We have developed a fast method that can capture piecewise smooth functions in high dimensions with high order and low computational cost. This method can be used for both approximation and error estimation of stochastic simulations where the computations can either be guided or come from a legacy database.
Numerical solution of 2D-vector tomography problem using the method of approximate inverse
NASA Astrophysics Data System (ADS)
Svetov, Ivan; Maltseva, Svetlana; Polyakova, Anna
2016-08-01
We propose a numerical solution of reconstruction problem of a two-dimensional vector field in a unit disk from the known values of the longitudinal and transverse ray transforms. The algorithm is based on the method of approximate inverse. Numerical simulations confirm that the proposed method yields good results of reconstruction of vector fields.
Sato, Y; Yamada, T; Matsumoto, M; Wakitani, Y; Hasegawa, T; Yoshimura, T; Murayama, H; Oda, K; Sato, T; Unno, Y; Yunoki, A
2012-09-01
A tritium radioactivity source was measured by triple-to-double coincidence ratio (TDCR) equipment of the National Metrology Institute of Japan (NMIJ), and measured data were fitted using polynomial approximation and the Newton-Raphson method, a technique whereby equations are solved numerically by successive approximations. The method used to obtain the activity minimizes the difference between statistically calculated data and experimental data. In the fitting, since calculated statistical efficiency and TDCR values are discrete, the calculated efficiencies are approximated by quadratic functions around experimental values and the Newton-Raphson method is used for convergence at the minimal difference between experimental data and calculated data. In this way, the activity of tritium was successfully obtained.
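The successive-approximation step used in the fit is ordinary Newton-Raphson iteration. A generic sketch (not the NMIJ code; in the paper's setting f would be the derivative of the quadratic misfit between measured and calculated TDCR values, so its zero locates the minimal difference):

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    """Solve f(x) = 0 by successive approximations x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# Toy usage: root of f(x) = x^2 - 2, i.e. sqrt(2).
root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

The quadratic local approximation of the objective plays the same role here as the quadratic fits to the discrete calculated efficiencies described in the abstract.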
Approximation methods for control of structural acoustics models with piezoceramic actuators
NASA Technical Reports Server (NTRS)
Banks, H. T.; Fang, W.; Silcox, R. J.; Smith, R. C.
1993-01-01
The active control of acoustic pressure in a 2-D cavity with a flexible boundary (a beam) is considered. Specifically, this control is implemented via piezoceramic patches on the beam which produce pure bending moments. The incorporation of the feedback control in this manner leads to a system with an unbounded input term. Approximation methods in the context of the linear quadratic regulator (LQR) state space control formulation are discussed and numerical results demonstrating the effectiveness of this approach in computing feedback controls for noise reduction are presented.
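Once the infinite-dimensional system is approximated, the LQR feedback reduces to an algebraic Riccati equation in finite dimensions. A generic scipy sketch on a toy 2-state system (not the paper's beam-cavity approximation):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy state-space system x' = Ax + Bu (a lightly damped oscillator;
# illustrative only). Minimize ∫ xᵀQx + uᵀRu dt.
A = np.array([[0.0, 1.0],
              [-1.0, -0.1]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Solve the continuous algebraic Riccati equation and form the gain
# K = R⁻¹BᵀP; the LQR feedback is u = -Kx.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
closed = A - B @ K          # closed-loop dynamics
```

The closed-loop matrix is guaranteed stable, which is the property the approximation schemes in the paper must preserve as the discretization is refined.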
NASA Technical Reports Server (NTRS)
Connor, J. N. L.; Curtis, P. R.; Farrelly, D.
1984-01-01
Methods that can be used in the numerical implementation of the uniform swallowtail approximation are described. An explicit expression for the approximation is presented to lowest order, showing that there are three problems which must be overcome in practice before the approximation can be applied to any given problem. It is shown that a recently developed quadrature method can be used for the accurate numerical evaluation of the swallowtail canonical integral and its partial derivatives, and isometric plots of these are presented to illustrate some of their properties. The problem of obtaining the arguments of the swallowtail integral from an analytic function is considered, and two methods of solving this problem are described. The asymptotic evaluation of the butterfly canonical integral is also addressed.
Quantum Approximate Methods for the Atomistic Modeling of Multicomponent Alloys. Chapter 7
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Garces, Jorge; Mosca, Hugo; Gargano, Pablo; Noebe, Ronald D.; Abel, Phillip
2007-01-01
This chapter describes the role of quantum approximate methods in the understanding of complex multicomponent alloys at the atomic level. The need to accelerate materials design programs based on economical and efficient modeling techniques provides the framework for the introduction of approximations and simplifications in otherwise rigorous theoretical schemes. As a promising example of the role that such approximate methods might have in the development of complex systems, the BFS method for alloys is presented and applied to Ru-rich Ni-base superalloys and to the NiAl(Ti,Cu) system, highlighting the benefits that can be obtained from introducing simple modeling techniques to the investigation of such complex systems.
Approximate two layer (inviscid/viscous) methods to model aerothermodynamic environments
NASA Technical Reports Server (NTRS)
Dejarnette, Fred R.
1992-01-01
Approximate inviscid and boundary layer techniques for aerodynamic heating calculations are discussed. An inviscid flowfield solution is needed to provide surface pressures and boundary-layer edge properties. Modified Newtonian pressures coupled with an approximate shock shape will suffice for relatively simple shapes like sphere-cones with cone half-angles between 15 and 45 deg. More accurate approximate methods have been developed which make use of modified Maslen techniques. Slender and large angle sphere-cones and more complex shapes generally require an Euler code, like HALIS, to provide that information. The boundary-layer solution is reduced significantly by using the axisymmetric analog and approximate heating relations developed by Zoby, et al. (1981). Analysis is presented for the calculation of inviscid surface streamlines and metrics. Entropy-layer swallowing effects require coupling the inviscid and boundary-layer solutions.
GPGCD, an Iterative Method for Calculating Approximate GCD, for Multiple Univariate Polynomials
NASA Astrophysics Data System (ADS)
Terui, Akira
We present an extension of our GPGCD method, an iterative method for calculating the approximate greatest common divisor (GCD) of univariate polynomials, to multiple polynomial inputs. For a given pair of polynomials and a degree, our algorithm finds a pair of polynomials which has a GCD of the given degree and whose coefficients are perturbed from those in the original inputs, making the perturbations as small as possible, along with the GCD. In our GPGCD method, the problem of approximate GCD is transferred to a constrained minimization problem, then solved with the so-called modified Newton method, which is a generalization of the gradient-projection method, by searching for the solution iteratively. In this paper, we extend our method to accept more than two polynomials with real coefficients as input.
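GPGCD itself solves a constrained minimization, but the basic numerical certificate that two perturbed polynomials lie near a pair with a nontrivial GCD can be sketched via the smallest singular value of their Sylvester matrix (our own illustration, not the GPGCD algorithm):

```python
import numpy as np

def sylvester(p, q):
    """Sylvester matrix of polynomials p, q given as coefficient arrays
    (highest degree first)."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):
        S[i, i:i + m + 1] = p          # n shifted rows of p
    for i in range(m):
        S[n + i, i:i + n + 1] = q      # m shifted rows of q
    return S

# p and q share the near-common root x ≈ 1 (slightly perturbed inputs).
p = np.poly([1.0, 2.0])               # (x - 1)(x - 2)
q = np.poly([1.000001, -3.0])         # (x - 1.000001)(x + 3)
sigma_min = np.linalg.svd(sylvester(p, q), compute_uv=False)[-1]
```

A tiny smallest singular value signals that a small coefficient perturbation produces an exact common divisor, which is the situation the GPGCD minimization then resolves optimally.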
Ren, K
1990-07-01
A new numerical method of determining potentiometric titration end-points is presented. It consists of calculating the coefficients of approximative spline functions describing the experimental data (e.m.f., volume of titrant added). The end-point (the inflection point of the curve) is determined by calculating the zero points of the second derivative of the approximative spline function. This spline function, unlike rational spline functions, is free from oscillations and its course is largely independent of random errors in e.m.f. measurements. The proposed method is useful for direct analysis of titration data and especially as a basis for the construction of microcomputer-controlled automatic titrators. PMID:18964999
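The end-point detection described above can be sketched with a smoothing spline: fit the noisy e.m.f.-volume data, then find the zeros of the second derivative. The sketch uses synthetic data (a sigmoid with its inflection at 10.0 mL), not the paper's spline construction:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic titration curve (illustrative only): e.m.f. vs. titrant volume,
# inflection (end-point) at v = 10.0 mL, with random measurement errors.
rng = np.random.default_rng(0)
v = np.linspace(5.0, 15.0, 80)
emf = 300.0 * np.tanh(2.0 * (v - 10.0)) + rng.normal(0.0, 1.0, v.size)

# Order-5 smoothing spline; the smoothing factor s suppresses the
# oscillations that noise would otherwise introduce. Its second derivative
# is then a cubic spline, whose zeros scipy can return directly.
spl = UnivariateSpline(v, emf, k=5, s=v.size * 4.0)
infl = spl.derivative(2).roots()

# Take the inflection where the fitted curve is steepest as the end-point.
endpoint = infl[np.argmax(np.abs(spl.derivative(1)(infl)))]
```

Selecting the steepest inflection discards spurious sign changes of the second derivative in the flat tails of the curve.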
Approximation methods for control of acoustic/structure models with piezoceramic actuators
NASA Technical Reports Server (NTRS)
Banks, H. T.; Fang, W.; Silcox, R. J.; Smith, R. C.
1991-01-01
The active control of acoustic pressure in a 2-D cavity with a flexible boundary (a beam) is considered. Specifically, this control is implemented via piezoceramic patches on the beam which produce pure bending moments. The incorporation of the feedback control in this manner leads to a system with an unbounded input term. Approximation methods in the context of the linear quadratic regulator (LQR) state space control formulation are discussed and numerical results demonstrating the effectiveness of this approach in computing feedback controls for noise reduction are presented.
Laplace transform homotopy perturbation method for the approximation of variational problems.
Filobello-Nino, U; Vazquez-Leal, H; Rashidi, M M; Sedighi, H M; Perez-Sesma, A; Sandoval-Hernandez, M; Sarmiento-Reyes, A; Contreras-Hernandez, A D; Pereyra-Diaz, D; Hoyos-Reyes, C; Jimenez-Fernandez, V M; Huerta-Chua, J; Castro-Gonzalez, F; Laguna-Camacho, J R
2016-01-01
This article proposes the application of the Laplace Transform-Homotopy Perturbation Method and some of its modifications in order to find analytical approximate solutions for the linear and nonlinear differential equations which arise from some variational problems. As a case study we solve four ordinary differential equations and show that the proposed solutions have good accuracy; in one case we even obtain an exact solution. We also show that the square residual error for the approximate solutions belongs to the interval [0.001918936920, 0.06334882582], which confirms the accuracy of the proposed methods, taking into account the complexity and difficulty of variational problems. PMID:27006884
An Extension of the Krieger-Li-Iafrate Approximation to the Optimized-Effective-Potential Method
Wilson, B.G.
1999-11-11
The Krieger-Li-Iafrate approximation can be expressed as the zeroth order result of an unstable iterative method for solving the integral equation form of the optimized-effective-potential method. By preconditioning the iterate, a first order correction can be obtained which recovers the bulk of the quantal oscillations missing in the zeroth order approximation. A comparison of calculated total energies is given with Krieger-Li-Iafrate, Local Density Functional, and Hyper-Hartree-Fock results for non-relativistic atoms and ions.
Who is this person? A comparison study of current three-dimensional facial approximation methods.
Decker, Summer; Ford, Jonathan; Davy-Jow, Stephanie; Faraut, Philippe; Neville, Wesley; Hilbelink, Don
2013-06-10
Facial approximation is a common tool utilised in forensic human identification. Three-dimensional (3D) imaging technologies allow researchers to go beyond traditional clay models and create virtual computed models of anatomical structures. The goal of this study was to compare the accuracy of available methods of facial approximation, ranging from clay modelling to advanced computer facial approximation techniques. Two computerised reconstructions (FaceIT and FBI's ReFace) and two manual reconstructions (completed by FBI's Neville and Faraut) were completed using a skull from a known individual. A living individual's computed tomography (CT) scan was used to create a virtual 3D model of the skull and soft tissue of the face. The virtual skull models were provided to the computer-based approximation specialists. A rapid prototype of the skull was printed and provided to the practitioners who needed physical specimens. The results from all of the methods (clay and virtual) were compared visually with each other and collectively with the actual features of the living individual. A quantitative study was also conducted to establish the accuracy of each method and the regions of the face that need the most improvement for all of the specialists. This project demonstrates the wide range of variation between commonly used facial identification methods. The benefit of this study was having a living individual with which to test the strengths and weaknesses of each method while also providing future areas of focus for soft tissue depth data studies. PMID:23628365
Global collocation methods for approximation and the solution of partial differential equations
NASA Technical Reports Server (NTRS)
Solomonoff, A.; Turkel, E.
1986-01-01
Polynomial interpolation methods are applied both to the approximation of functions and to the numerical solution of hyperbolic and elliptic partial differential equations. The derivative matrix for a general sequence of collocation points is constructed. The approximate derivative is then found by a matrix-vector multiplication. The effects of several factors on the performance of these methods, including the choice of collocation points, are then explored. The resolution of the schemes for both smooth functions and functions with steep gradients or discontinuities in some derivative is also studied. The accuracy when the gradients occur both near the center of the region and in the vicinity of the boundary is investigated. The importance of the aliasing limit on the resolution of the approximation is investigated in detail. Also examined is the effect of boundary treatment on the stability and accuracy of the scheme.
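The derivative-matrix idea above can be illustrated with a standard construction that is consistent with, though not necessarily identical to, the paper's scheme: the Chebyshev collocation differentiation matrix on Gauss-Lobatto points, where differentiation reduces to a single matrix-vector multiply.

```python
import numpy as np

def cheb_diff_matrix(n):
    """Chebyshev collocation differentiation matrix on the n+1
    Gauss-Lobatto points x_k = cos(pi*k/n) (classical construction)."""
    k = np.arange(n + 1)
    x = np.cos(np.pi * k / n)                 # collocation points on [-1, 1]
    c = np.where((k == 0) | (k == n), 2.0, 1.0) * (-1.0) ** k
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))               # diagonal: rows must sum to zero
    return D, x

# differentiating a smooth function is one matrix-vector multiply
D, x = cheb_diff_matrix(16)
err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))   # spectrally small error
```

For a smooth function such as sin(x), 17 points already give errors far below single precision, which is the spectral accuracy the abstract alludes to.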
Klett, J D; Sutherland, R A
1992-01-20
Several approximate methods for modeling the electromagnetic (em) scattering properties of nonspherical particles are examined and evaluated. Although some of the approaches are applicable to arbitrary shapes, we confine our attention here mainly to spheres and cylinders, for which exact solutions are available for comparisons. Evaluations include comparisons of the computed angular phase function, total extinction efficiency, and backscatter efficiency. Approximate methods investigated include the Rayleigh-Gans (RG) approximation, the Wentzel-Kramers-Brillouin or WKB approximation [and the closely related eikonal approximation (EA)], diffraction theory, and the second-order Shifrin iterative technique. Examples using spheres indicate that for weakly absorbing particles of moderate- to large-size parameters with a real refractive index near unity (i.e., the optically soft case), all models work well in representing the phase function over all scattering angles, with the Shifrin approximation showing the best agreement with the exact solutions. For larger refractive indices, however, the Shifrin approximation breaks down, whereas the WKB method continues to perform relatively well for all scattering angles over a wide range of particle sizes, including those appropriate in both the RG (small particle) and the diffraction (large particle) limits. The relationship between the WKB, eikonal, and anomalous diffraction descriptions of particle extinction is discussed briefly. Backscatter is also discussed in the context of the WKB model, and two modifications to improve the description are included: one to add an internally reflected wave and the other to add a multiplicative scaling factor to preserve the correct backscatter result for strong absorption in the geometric optics limit. A major conclusion of the paper is that the WKB method offers a viable alternative to the more widely used RG and diffraction approximations and is a method that offers significant
An approximate method for solution to variable moment of inertia problems
NASA Technical Reports Server (NTRS)
Beans, E. W.
1981-01-01
An approximation method is presented for reducing a nonlinear differential equation (for the 'weather vaning' motion of a wind turbine) to an equivalent constant-moment-of-inertia problem. The integrated average of the moment of inertia is determined. The cycle time was found to equal that of the equivalent constant-inertia system when the rotating speed is four times greater than the system's minimum natural frequency.
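The core step, forming the integrated average of a position-dependent moment of inertia, can be sketched as follows; the inertia law I(theta) and the averaging interval are hypothetical stand-ins, since the paper's wind-turbine model is not reproduced here.

```python
import math

def average_inertia(inertia, a, b, n=1000):
    """Integrated average of a position-dependent moment of inertia I(theta)
    over [a, b], used as the equivalent constant inertia. A sketch of the
    averaging idea only; the paper's exact weighting is not reproduced."""
    h = (b - a) / n
    total = 0.5 * (inertia(a) + inertia(b))           # trapezoidal rule
    total += sum(inertia(a + i * h) for i in range(1, n))
    return total * h / (b - a)

# hypothetical inertia law I(theta) = 2 + sin(theta)^2 over one revolution;
# the mean of sin^2 over a period is 1/2, so the equivalent inertia is 2.5
I_eq = average_inertia(lambda t: 2.0 + math.sin(t) ** 2, 0.0, 2.0 * math.pi)
```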
Using the Delta Method for Approximate Interval Estimation of Parameter Functions in SEM
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2004-01-01
In applications of structural equation modeling, it is often desirable to obtain measures of uncertainty for special functions of model parameters. This article provides a didactic discussion of how a method widely used in applied statistics can be employed for approximate standard error and confidence interval evaluation of such functions. The…
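A minimal sketch of the delta method the article discusses, applied to a hypothetical parameter function g(a, b) = a/b with an assumed parameter covariance matrix (in the SEM setting these would come from the fitted model):

```python
import numpy as np

def delta_method_se(grad, cov):
    """Delta-method standard error of a smooth function g of estimated
    parameters: Var[g(theta_hat)] is approximated by grad(g)' * Cov * grad(g)."""
    grad = np.asarray(grad, dtype=float)
    cov = np.asarray(cov, dtype=float)
    return float(np.sqrt(grad @ cov @ grad))

# hypothetical example: g(a, b) = a / b with point estimates a = 2, b = 4
a, b = 2.0, 4.0
grad = [1.0 / b, -a / b ** 2]        # [dg/da, dg/db] at the estimates
cov = [[0.04, 0.01],                 # assumed covariance of (a, b)
       [0.01, 0.09]]
se = delta_method_se(grad, cov)      # approximate standard error of a/b
```

An approximate confidence interval then follows as g(a, b) plus or minus z * se in the usual way.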
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Adams, William M., Jr.
1988-01-01
The approximation of unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft is discussed. Two methods of formulating these approximations are extended to include the same flexibility in constraining the approximations and the same methodology in optimizing nonlinear parameters as another currently used extended least-squares method. Optimal selection of nonlinear parameters is made in each of the three methods by use of the same nonlinear, nongradient optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free linear parameters are determined using the least-squares matrix techniques of a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented that show comparative evaluations from application of each of the extended methods to a numerical example.
NASA Astrophysics Data System (ADS)
Ling, J. F.; Docobo, J. A.; Abad, A. J.
1995-08-01
This article discusses the stellar three-body problem using an approximation in which the outer orbit is assumed to be Keplerian. The equations of motion are integrated by the stroboscopic method, i.e., basically at successive periods of a rapidly changing variable (the eccentric anomaly of the inner orbit). The theory is applied to the triple-star system ξ Ursae Majoris.
Approximate Solution Methods for Spectral Radiative Transfer in High Refractive Index Layers
NASA Technical Reports Server (NTRS)
Siegel, R.; Spuckler, C. M.
1994-01-01
Some ceramic materials for high temperature applications are partially transparent for radiative transfer. The refractive indices of these materials can be substantially greater than one which influences internal radiative emission and reflections. Heat transfer behavior of single and laminated layers has been obtained in the literature by numerical solutions of the radiative transfer equations coupled with heat conduction and heating at the boundaries by convection and radiation. Two-flux and diffusion methods are investigated here to obtain approximate solutions using a simpler formulation than required for exact numerical solutions. Isotropic scattering is included. The two-flux method for a single layer yields excellent results for gray and two band spectral calculations. The diffusion method yields a good approximation for spectral behavior in laminated multiple layers if the overall optical thickness is larger than about ten. A hybrid spectral model is developed using the two-flux method in the optically thin bands, and radiative diffusion in bands that are optically thick.
A numerical method for approximating antenna surfaces defined by discrete surface points
NASA Technical Reports Server (NTRS)
Lee, R. Q.; Acosta, R.
1985-01-01
A simple numerical method for the quadratic approximation of a discretely defined reflector surface is described. The numerical method was applied to interpolate the surface normal of a parabolic reflector surface from a grid of the nine surface points closest to the point of incidence. After computing the surface normals, the geometrical optics and the aperture integration method using the discrete Fast Fourier Transform (FFT) were applied to compute the radiation patterns for symmetric and offset antenna configurations. The computed patterns are compared with those of the analytic case and with the patterns generated from another numerical technique using spline-function approximation. In the paper, examples of computations are given. The accuracy of the numerical method is discussed.
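The quadratic-patch idea, fitting a quadric to the nine closest surface points and reading off the surface normal, can be sketched as below; the paraboloid test surface and grid spacing are illustrative choices, not the paper's configuration.

```python
import numpy as np

def quadric_normal(pts, x0, y0):
    """Fit z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f to surface points by
    least squares and return the unit normal at (x0, y0). A sketch of the
    quadratic-patch idea; not the paper's exact interpolation scheme."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    A = np.column_stack([x ** 2, y ** 2, x * y, x, y, np.ones_like(x)])
    a, b, c, d, e, f = np.linalg.lstsq(A, z, rcond=None)[0]
    zx = 2 * a * x0 + c * y0 + d          # dz/dx at the point of incidence
    zy = 2 * b * y0 + c * x0 + e          # dz/dy
    n = np.array([-zx, -zy, 1.0])
    return n / np.linalg.norm(n)

# nine closest grid points on the test paraboloid z = (x^2 + y^2)/4
g = np.array([(i, j) for i in (-0.1, 0.0, 0.1) for j in (-0.1, 0.0, 0.1)])
pts = np.column_stack([g, (g[:, 0] ** 2 + g[:, 1] ** 2) / 4.0])
n = quadric_normal(pts, 0.1, 0.0)   # exact normal is (-0.05, 0, 1) normalized
```

Because a paraboloid is itself a quadric, the fit here is exact and the recovered normal matches the analytic one; on a general discretely defined surface the fit is approximate.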
Rational approximations from power series of vector-valued meromorphic functions
NASA Technical Reports Server (NTRS)
Sidi, Avram
1992-01-01
Let F(z) be a vector-valued function, F: C → C^N, which is analytic at z = 0 and meromorphic in a neighborhood of z = 0, and let its Maclaurin series be given. In this work we developed vector-valued rational approximation procedures for F(z) by applying vector extrapolation methods to the sequence of partial sums of its Maclaurin series. We analyzed some of the algebraic and analytic properties of the rational approximations thus obtained, and showed that they were akin to Pade approximations. In particular, we proved a Koenig type theorem concerning their poles and a de Montessus type theorem concerning their uniform convergence. We showed how optimal approximations to multiple poles and to Laurent expansions about these poles can be constructed. Extensions of the procedures above and the accompanying theoretical results to functions defined in arbitrary linear spaces were also considered. One of the most interesting and immediate applications of the results of this work is to the matrix eigenvalue problem. In a forthcoming paper we exploit the developments of the present work to devise bona fide generalizations of the classical power method that are especially suitable for very large and sparse matrices. These generalizations can be used to approximate simultaneously several of the largest distinct eigenvalues and corresponding eigenvectors and invariant subspaces of arbitrary matrices which may or may not be diagonalizable, and are very closely related with known Krylov subspace methods.
NASA Astrophysics Data System (ADS)
Schuster, Thomas; Schöpfer, Frank
2010-08-01
The method of approximate inverse is a mollification method for stably solving inverse problems. In its original form it has been developed to solve operator equations in L2-spaces and general Hilbert spaces. We show that the method of approximate inverse can be extended to solve linear, ill-posed problems in Banach spaces. This paper is restricted to function spaces. The method itself consists of evaluations of dual pairings of the given data with reconstruction kernels that are associated with mollifiers and the dual of the operator. We first define what we mean by a mollifier in general Banach spaces and then investigate two settings more exactly: the case of Lp-spaces and the case of the Banach space of continuous functions on a compact set. For both settings we present the criteria turning the method of approximate inverse into a regularization method and prove convergence with rates. As an application we refer to x-ray diffractometry which is a technique of non-destructive testing that is concerned with computing the stress tensor of a specimen. Since one knows that the stress tensor is smooth, x-ray diffractometry can appropriately be modelled by a Banach space setting using continuous functions.
NASA Astrophysics Data System (ADS)
Hosen, Md. Alal; Chowdhury, M. S. H.; Ali, Mohammad Yeakub; Ismail, Ahmad Faris
In the present paper, a novel analytical approximation technique based on the energy balance method (EBM) is proposed to obtain approximate periodic solutions for generalized highly nonlinear oscillators. The expressions for the natural frequency-amplitude relationship are obtained in a novel analytical way. The accuracy of the proposed method is investigated on three benchmark oscillatory problems, namely, the simple relativistic oscillator, the stretched elastic wire oscillator (with a mass attached to its midpoint) and the Duffing-relativistic oscillator. For an initial oscillation amplitude A0 = 100, the maximal relative errors of the natural frequency found for the three oscillators are 2.1637%, 0.0001% and 1.201%, respectively, which are much lower than the errors found using existing methods. It is highly remarkable that excellent accuracy of the approximate natural frequency has been found, valid over the whole range of large oscillation amplitudes, as compared with the exact values. The very simple solution procedure and the high accuracy found in the three benchmark problems reveal the novelty, reliability and wider applicability of the proposed analytical approximation technique.
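As an illustration of the kind of frequency-amplitude comparison described above, here is a first-order energy-balance (equivalently, harmonic-balance) estimate for the classical Duffing oscillator x'' + x + eps*x^3 = 0, checked against the frequency obtained from the exact energy integral. The Duffing oscillator is a simpler stand-in, not one of the paper's three benchmark problems.

```python
import math

def omega_ebm(A, eps):
    """First-order energy-balance (harmonic-balance) frequency for the
    Duffing oscillator x'' + x + eps*x**3 = 0 at amplitude A."""
    return math.sqrt(1.0 + 0.75 * eps * A * A)

def omega_exact(A, eps, n=2000):
    """Frequency from the exact energy integral; the substitution
    x = A*sin(theta) removes the turning-point singularity."""
    h = 0.5 * math.pi / n
    T = 0.0
    for i in range(n):                 # midpoint rule on a smooth integrand
        s = math.sin((i + 0.5) * h)
        T += h * A / math.sqrt(A * A + 0.5 * eps * A ** 4 * (1.0 + s * s))
    return 2.0 * math.pi / (4.0 * T)

# relative error of the first-order estimate at amplitude A = 1, eps = 1
A, eps = 1.0, 1.0
rel_err = abs(omega_ebm(A, eps) - omega_exact(A, eps)) / omega_exact(A, eps)
```

Even the first-order estimate stays within about half a percent here; higher-order EBM corrections of the kind the paper develops shrink this error further.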
Dunne, Gerald V.; Hur, Jin; Lee, Choonkyu; Min, Hyunsoo
2008-02-15
Our previously developed calculational method (the partial-wave cutoff method) is employed to evaluate explicitly scalar one-loop effective actions in a class of radially symmetric background gauge fields. Our method proves to be particularly effective when it is used in conjunction with a systematic WKB series for the large partial-wave contribution to the effective action. By comparing these numerically exact calculations against the predictions based on the large-mass expansion and derivative expansion, we discuss the validity ranges of the latter approximation methods.
Tejero, E. M.; Gatling, G.
2009-03-15
A method for approximating arbitrary axial magnetic field profiles for a given solenoidal electromagnet coil array is described. The method casts the individual contributions from each coil as a truncated orthonormal basis for the space within the array. This truncated basis allows for the linear decomposition of an arbitrary profile function, which returns the appropriate currents for each coil to best reproduce the desired profile. We present the mathematical details of the method along with a detailed example of its use. The results from the method are used in a simulation and compared with magnetic field measurements.
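The decomposition step, solving for the coil currents that best reproduce a desired on-axis profile in the least-squares sense, can be sketched as below; the Gaussian per-coil profiles are toy stand-ins for the Biot-Savart profiles a real solenoid array would supply.

```python
import numpy as np

def coil_currents(basis, target):
    """Least-squares currents reproducing a target on-axis field, where
    column j of `basis` is coil j's field profile per unit current."""
    currents, *_ = np.linalg.lstsq(basis, target, rcond=None)
    return currents

# toy basis: three "coils" with Gaussian on-axis profiles (stand-ins for
# the Biot-Savart profiles of a real solenoid array)
z = np.linspace(-1.0, 1.0, 201)
basis = np.column_stack([np.exp(-((z - c) / 0.3) ** 2) for c in (-0.5, 0.0, 0.5)])
target = 2.0 * basis[:, 0] + 1.0 * basis[:, 2]   # field built from known currents
I = coil_currents(basis, target)                  # should recover about [2, 0, 1]
```

When the target lies outside the span of the coil basis, the same call returns the best achievable approximation rather than an exact match.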
Burgholzer, Peter; Matt, Gebhard J; Haltmeier, Markus; Paltauf, Günther
2007-04-01
Two universal reconstruction methods for photoacoustic (also called optoacoustic or thermoacoustic) computed tomography are derived, applicable to an arbitrarily shaped detection surface. In photoacoustic tomography acoustic pressure waves are induced by illuminating a semitransparent sample with pulsed electromagnetic radiation and are measured on a detection surface outside the sample. The imaging problem consists in reconstructing the initial pressure sources from those measurements. The first solution to this problem is based on the time reversal of the acoustic pressure field with a second order embedded boundary method. The pressure on the arbitrarily shaped detection surface is set to coincide with the measured data in reversed temporal order. In the second approach the reconstruction problem is solved by calculating the far-field approximation, a concept well known in physics, where the generated acoustic wave is approximated by an outgoing spherical wave with the reconstruction point as center. Numerical simulations are used to compare the proposed universal reconstruction methods with existing algorithms.
Simple and fast cosine approximation method for computer-generated hologram calculation.
Nishitsuji, Takashi; Shimobaba, Tomoyoshi; Kakue, Takashi; Arai, Daisuke; Ito, Tomoyoshi
2015-12-14
The cosine function is a computationally heavy operation in computer-generated hologram (CGH) calculation; therefore, it is commonly implemented by substitution methods such as a look-up table. However, the computational load and required memory space of such methods are still large. In this study, we propose a simple and fast cosine-function approximation method for CGH calculation. As a result, we succeeded in creating CGHs of sufficient quality while making the calculation up to 1.6 times faster than a look-up-table implementation of the cosine function on a CPU.
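The paper's specific approximation formula is not given in the abstract; the sketch below shows the general shape of such methods, range reduction plus a short polynomial on [0, pi/2], using a 4th-order truncation whose worst-case error is about 2e-2.

```python
import math

def fast_cos(x):
    """Cheap polynomial cosine approximation with range reduction. An
    illustrative stand-in; the paper's approximation is not reproduced here."""
    x = abs(x) % (2.0 * math.pi)            # reduce to [0, 2*pi); cos is even
    if x > math.pi:
        x = 2.0 * math.pi - x               # reduce to [0, pi]
    sign = 1.0
    if x > 0.5 * math.pi:
        x = math.pi - x                     # cos(pi - x) = -cos(x)
        sign = -1.0
    x2 = x * x                              # 4th-order polynomial on [0, pi/2]
    return sign * (1.0 - x2 / 2.0 + x2 * x2 / 24.0)

# worst-case error on a grid spanning several periods
max_err = max(abs(fast_cos(0.01 * k) - math.cos(0.01 * k)) for k in range(-700, 701))
```

For CGH-style workloads a higher-order or minimax polynomial would be substituted to push the error below the quantization level of the hologram.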
Accumulated approximation: A new method for structural optimization by iterative improvement
NASA Technical Reports Server (NTRS)
Rasmussen, John
1990-01-01
A new method for the solution of nonlinear mathematical programming problems in the field of structural optimization is presented. It is an iterative scheme which, at each iteration, refines the approximation of the objective and constraint functions by accumulating the function values of previously visited design points. The method has proven to be competitive on a number of well-known examples, one of which is presented here. Furthermore, because of the accumulation strategy, the method converges even when the sensitivity analysis is inaccurate.
Finite element approximations for quasi-Newtonian flows employing a multi-field GLS method
NASA Astrophysics Data System (ADS)
Zinani, Flávia; Frey, Sérgio
2011-08-01
This article concerns stabilized finite element approximations for flow-type sensitive fluid flows. A quasi-Newtonian model, based on a kinematic parameter of flow classification and shear and extensional viscosities, is used to represent the fluid behavior from pure shear up to pure extension. The flow governing equations are approximated by a multi-field Galerkin least-squares (GLS) method, in terms of strain rate, pressure and velocity ( D- p- u). This method, which may be viewed as an extension of the formulation for constant viscosity fluids introduced by Behr et al. (Comput Methods Appl Mech 104:31-48, 1993), allows the use of combinations of simple Lagrangian finite element interpolations. Mild Weissenberg flows of quasi-Newtonian fluids—using Carreau viscosities with power-law indexes varying from 0.2 to 2.5—are carried out through a four-to-one planar contraction. The performed physical analysis reveals that the GLS method provides a suitable approximation for the problem and the results are in accordance with the related literature.
An approximate method for calculating three-dimensional inviscid hypersonic flow fields
NASA Technical Reports Server (NTRS)
Riley, Christopher J.; Dejarnette, Fred R.
1990-01-01
An approximate solution technique was developed for 3-D inviscid, hypersonic flows. The method employs Maslen's explicit pressure equation in addition to the assumption of approximate stream surfaces in the shock layer. This approximation represents a simplification to Maslen's asymmetric method. The present method presents a tractable procedure for computing the inviscid flow over 3-D surfaces at angle of attack. The solution procedure involves iteratively changing the shock shape in the subsonic-transonic region until the correct body shape is obtained. Beyond this region, the shock surface is determined using a marching procedure. Results are presented for a spherically blunted cone, paraboloid, and elliptic cone at angle of attack. The calculated surface pressures are compared with experimental data and finite difference solutions of the Euler equations. Shock shapes and profiles of pressure are also examined. Comparisons indicate the method adequately predicts shock layer properties on blunt bodies in hypersonic flow. The speed of the calculations makes the procedure attractive for engineering design applications.
Simulation of borehole induction using the hybrid extended Born approximation and CG-FFHT method
NASA Astrophysics Data System (ADS)
Zhang, Zhong Qing; Liu, Qing Huo
2000-07-01
We propose the hybridization of the extended Born approximation (EBA) with the conjugate-gradient fast Fourier Hankel transform (CG-FFHT) method to improve the efficiency of numerical solution of borehole induction problems in axisymmetric media. First, we use the FFHT to accelerate the EBA as a nonlinear approximation to induction problems, resulting in an algorithm with O(N log2 N) arithmetic operations, where N is the number of unknowns in the problem. This improved EBA is accurate for most formations encountered. Then, for formations with extremely high contrasts, we utilize this improved EBA as a partial preconditioner in the CG-FFHT method to solve the problem accurately with few iterations. The seamless combination of these two approaches provides an automatic way toward the efficient and accurate modeling of induction measurements in axisymmetric media.
Analytical Approximation Method for the Center Manifold in the Nonlinear Output Regulation Problem
NASA Astrophysics Data System (ADS)
Suzuki, Hidetoshi; Sakamoto, Noboru; Celikovský, Sergej
In nonlinear output regulation problems, it is necessary to solve the so-called regulator equations, consisting of a partial differential equation and an algebraic equation. It is known that, for the hyperbolic zero dynamics case, solving the regulator equations is equivalent to calculating a center manifold for the zero dynamics of the system. The present paper proposes a successive approximation method for obtaining center manifolds and shows its effectiveness by applying it to an inverted pendulum example.
A method for the accurate and smooth approximation of standard thermodynamic functions
NASA Astrophysics Data System (ADS)
Coufal, O.
2013-01-01
A method is proposed for the calculation of approximations of standard thermodynamic functions. The method is consistent with the physical properties of standard thermodynamic functions. This means that the approximation functions are, in contrast to the hitherto used approximations, continuous and smooth in every temperature interval in which no phase transformations take place. The calculation algorithm was implemented by the SmoothSTF program in the C++ language which is part of this paper.
Program summary
Program title: SmoothSTF
Catalogue identifier: AENH_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENH_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 3807
No. of bytes in distributed program, including test data, etc.: 131965
Distribution format: tar.gz
Programming language: C++
Computer: Any computer with the gcc version 4.3.2 compiler
Operating system: Debian GNU Linux 6.0. The program can be run in operating systems in which the gcc compiler can be installed, see http://gcc.gnu.org/install/specific.html
RAM: 256 MB is sufficient for a table of standard thermodynamic functions with 500 lines
Classification: 4.9
Nature of problem: Standard thermodynamic functions (STF) of individual substances are given by thermal capacity at constant pressure, entropy and enthalpy. STF are continuous and smooth in every temperature interval in which no phase transformations take place. The temperature dependence of STF, as expressed by a table of values, is approximated for further application by temperature functions. In the paper, a method is proposed for calculating approximation functions which, in contrast to the hitherto used approximations, are continuous and smooth in every temperature interval.
Solution method: The approximation functions are
NASA Astrophysics Data System (ADS)
Choi, Jun-Ho; Kim, Joong-Soo; Cho, Minhaeng
2005-05-01
Fragment analyses of vibrational circular dichroic response of dipeptides were carried out recently [Choi and Cho, J. Chem. Phys. 120, 4383 (2004)]. In the present paper, by using a minimal size unit peptide containing two chiral carbons covalently bonded to the peptide group, a generalized fragmentation approximation method is discussed and applied to the calculations of infrared-absorption and vibrational circular dichroism (VCD) intensities of amide I vibrations in various secondary structure polypeptides. Unlike the dipole strength determining IR-absorption intensity, the rotational strength is largely determined by the cross terms that are given by the inner product between the transition electric dipole and the transition magnetic dipole of two different peptides. This explains why the signs and magnitudes of VCD peaks are far more sensitive to the relative orientation and distance between different peptide bonds in a given protein. In order to test the validity of fragmentation approximation, three different segments in a globular protein ubiquitin, i.e., right-handed α-helix, β-sheet, and β-turn regions, were chosen for density-functional theory (DFT) calculations of amide I vibrational properties and the numerically simulated IR-absorption and VCD spectra by using the fragmentation method are directly compared with DFT results. It is believed that the fragmentation approximation method will be of use in numerically simulating vibrational spectra of proteins in solutions.
Nikiforov, Alexander; Gamez, Jose A.; Thiel, Walter; Huix-Rotllant, Miquel; Filatov, Michael
2014-09-28
Quantum-chemical computational methods are benchmarked for their ability to describe conical intersections in a series of organic molecules and models of biological chromophores. Reference results for the geometries, relative energies, and branching planes of conical intersections are obtained using ab initio multireference configuration interaction with single and double excitations (MRCISD). They are compared with the results from more approximate methods, namely, the state-interaction state-averaged restricted ensemble-referenced Kohn-Sham method, spin-flip time-dependent density functional theory, and a semiempirical MRCISD approach using an orthogonalization-corrected model. It is demonstrated that these approximate methods reproduce the ab initio reference data very well, with root-mean-square deviations in the optimized geometries of the order of 0.1 Å or less and with reasonable agreement in the computed relative energies. A detailed analysis of the branching plane vectors shows that all currently applied methods yield similar nuclear displacements for escaping the strong non-adiabatic coupling region near the conical intersections. Our comparisons support the use of the tested quantum-chemical methods for modeling the photochemistry of large organic and biological systems.
New identification method for Hammerstein models based on approximate least absolute deviation
NASA Astrophysics Data System (ADS)
Xu, Bao-Chang; Zhang, Ying-Dan
2016-07-01
Disorder and peak noises or large disturbances can deteriorate the identification results for Hammerstein nonlinear models when the least-squares (LS) method is used. The least absolute deviation technique can be used to resolve this problem; however, the absolute value does not meet the differentiability requirement of most algorithms. To improve robustness and resolve the non-differentiability problem, an approximate least absolute deviation (ALAD) objective function is established by introducing a deterministic function that exhibits the characteristics of the absolute value under certain conditions. A new identification method for Hammerstein models based on ALAD is thus developed in this paper. The basic idea of this method is to apply stochastic approximation theory in deriving the recursive equations. After the parameter matrix of the Hammerstein model is identified via the new algorithm, the product terms in the matrix are separated by calculating their average values. Finally, algorithm convergence is proven by applying the ordinary differential equation method. The proposed algorithm is more robust than other LS methods, particularly when abnormal points exist in the measured data. Furthermore, the proposed algorithm is easier to apply and converges faster. The simulation results demonstrate the efficacy of the proposed algorithm.
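The key trick, replacing the non-differentiable absolute value with a smooth deterministic surrogate, can be illustrated with the common choice sqrt(e^2 + delta^2) (an assumption; the paper's deterministic function may differ), here used to estimate a slope robustly in the presence of a gross outlier:

```python
import math

def alad_slope(xs, ys, delta=1e-3, lr=0.05, iters=4000):
    """Estimate the slope a in y = a*x by gradient descent on the ALAD-style
    objective sum_i sqrt((a*x_i - y_i)**2 + delta**2), a differentiable
    surrogate for the sum of absolute deviations. The surrogate and the
    decaying step size are illustrative choices, not the paper's algorithm."""
    a = 0.0
    for t in range(iters):
        g = sum(x * (a * x - y) / math.sqrt((a * x - y) ** 2 + delta ** 2)
                for x, y in zip(xs, ys))
        a -= (lr / (1.0 + t / 200.0)) * g / len(xs)   # decaying step size
    return a

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.0, 6.0, 8.0, 100.0]          # y = 2x except one gross outlier
a_robust = alad_slope(xs, ys)              # stays close to the true slope 2
a_ls = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)  # pulled high
```

The LS estimate is dragged to roughly 10 by the single outlier, while the smoothed absolute-deviation objective keeps the estimate near 2, which is the robustness property the abstract describes.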
S-curve networks and an approximate method for estimating degree distributions of complex networks
NASA Astrophysics Data System (ADS)
Guo, Jin-Li
2010-12-01
In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Using statistics on China's Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model based on the S curve (logistic curve) and forecasts the growing trend of IPv4 addresses in China. The results provide reference values for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the laws of IPv4 growth, namely bulk growth and a finite growth limit, we propose a finite network model with bulk growth, called an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. We develop an approximate method to predict the growth dynamics of the individual nodes and use it to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with simulations, obeying an approximately power-law form. This method overcomes a shortcoming of the Barabási-Albert method commonly used in current network research.
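The S-curve forecasting step can be sketched with a logistic fit by linearization; the synthetic data, the assumed known saturation level K, and the forecast horizon are all illustrative stand-ins for the paper's IPv4 statistics.

```python
import math

def fit_logistic(ts, ns, K):
    """Fit the S curve N(t) = K / (1 + exp(-r*(t - t0))) by linearizing
    log(K/N - 1) = -r*t + r*t0 and applying ordinary least squares.
    Assumes the finite growth limit K is known (a simplifying assumption)."""
    ys = [math.log(K / n - 1.0) for n in ns]
    m = len(ts)
    tbar = sum(ts) / m
    ybar = sum(ys) / m
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
             / sum((t - tbar) ** 2 for t in ts))
    r = -slope
    t0 = (ybar - slope * tbar) / r
    return r, t0

# synthetic "address count" history generated from a known S curve
K_true, r_true, t0_true = 400.0, 0.8, 5.0
ts = list(range(10))
ns = [K_true / (1.0 + math.exp(-r_true * (t - t0_true))) for t in ts]
r, t0 = fit_logistic(ts, ns, K_true)
forecast = K_true / (1.0 + math.exp(-r * (12 - t0)))   # extrapolate to t = 12
```

With noisy real data one would instead fit K, r and t0 jointly by nonlinear least squares, but the linearized fit shows how the finite growth limit enters the model.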
NASA Astrophysics Data System (ADS)
Jiao, Jianying; Zhang, Ye
2014-06-01
An inverse method based on local approximate solutions (LAS inverse method) is proposed to invert transient flows in heterogeneous aquifers. Unlike the objective-function-based inversion techniques, the method does not require forward simulations to assess measurement-to-model misfits; thus the knowledge of aquifer initial conditions (IC) and boundary conditions (BC) is not required. Instead, the method employs a set of local approximate solutions of flow to impose continuity of hydraulic head and Darcy fluxes throughout space and time. Given sufficient (but limited) measurements, it yields well-posed systems of nonlinear equations that can be solved efficiently with optimization. Solution of the inversion includes parameters (hydraulic conductivities, specific storage coefficients) and flow field including the unknown IC and BC. Given error-free measurements, the estimated conductivities and specific storages are accurate within 10% of the true values. When increasing measurement errors are imposed, the estimated parameters become less accurate, but the inverse solution is still stable, i.e., parameter, IC, and BC estimation remains bounded. For a problem where parameter variation is unknown, highly parameterized inversion can reveal the underlying parameter structure, whereas equivalent conductivity and average storage coefficient can also be estimated. Because of the physically-based constraints placed in inversion, the number of measurements does not need to exceed the number of parameters for the inverse method to succeed.
The Wentzel-Kramers-Brillouin approximation method applied to the Wigner function
NASA Astrophysics Data System (ADS)
Tosiek, J.; Cordero, R.; Turrubiates, F. J.
2016-06-01
An adaptation of the Wentzel-Kramers-Brillouin method in the deformation quantization formalism is presented, with the aim of obtaining an approximate technique for solving the energy eigenvalue problem in the phase space approach to quantum mechanics. A relationship between the phase σ(r⃗) of a wave function exp(iσ(r⃗)/ħ) and its respective Wigner function is derived. Formulas to calculate the Wigner function of a product and of a superposition of wave functions are proposed. Properties of the Wigner function of interfering states are also investigated. Examples of this quasi-classical approximation in deformation quantization are analysed. A strict form of the Wigner function for states represented by tempered generalised functions has been derived. Wigner functions of unbound states in the Pöschl-Teller potential have been found.
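For orientation, in the standard WKB setting the phase σ(r⃗) of ψ = exp(iσ/ħ) satisfies, at leading order in ħ, the classical Hamilton-Jacobi (eikonal) equation; this is the textbook relation, not the paper's phase-space derivation:

```latex
% Leading-order WKB: inserting psi = exp(i sigma/hbar) into the
% Schroedinger equation and keeping O(hbar^0) terms gives the
% classical Hamilton-Jacobi (eikonal) equation for the phase sigma.
\[
  \psi(\vec r) = \exp\!\Big(\frac{i}{\hbar}\,\sigma(\vec r)\Big),
  \qquad
  \frac{\left(\nabla\sigma(\vec r)\right)^{2}}{2m} + V(\vec r) = E .
\]
```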
NASA Astrophysics Data System (ADS)
Chou, Chia-Chun
2015-08-01
The complex quantum Hamilton-Jacobi equation for the complex action is approximately solved by propagating individual Bohmian trajectories in real space. Equations of motion for the complex action and its spatial derivatives are derived through use of the derivative propagation method. We transform these equations into the arbitrary Lagrangian-Eulerian version with the grid velocity matching the flow velocity of the probability fluid. Setting higher-order derivatives equal to zero, we obtain a truncated system of equations of motion describing the rate of change in the complex action and its spatial derivatives transported along approximate Bohmian trajectories. A set of test trajectories is propagated to determine appropriate initial positions for transmitted trajectories. Computational results for transmitted wave packets and transmission probabilities are presented and analyzed for a one-dimensional Eckart barrier and a two-dimensional system involving either a thick or thin Eckart barrier along the reaction coordinate coupled to a harmonic oscillator.
Wu, Fuke; Tian, Tianhai; Rawlings, James B; Yin, George
2016-05-01
A frequently used reduction technique is based on the chemical master equation for stochastic chemical kinetics with two time scales, which yields the modified stochastic simulation algorithm (SSA). For chemical reaction processes involving a large number of molecular species and reactions, the collection of slow reactions may still be large, so the SSA remains computationally expensive. Because the chemical Langevin equations (CLEs) work effectively for large numbers of molecular species and reactions, this paper develops a reduction method based on the CLE using the stochastic averaging principle developed in the work of Khasminskii and Yin [SIAM J. Appl. Math. 56, 1766-1793 (1996); ibid. 56, 1794-1819 (1996)] to average out the fast-reacting variables. This reduction leads to a limit averaging system that approximates the slow reactions. Because in stochastic chemical kinetics the CLE is itself viewed as an approximation of the SSA, the limit averaging system can likewise be treated as an approximation of the slow-reaction process. As an application, we examine the reduction of computational complexity for gene regulatory networks with two time scales driven by intrinsic noise. For linear and nonlinear protein production functions, simulations show that the sample average (expectation) of the limit averaging system is close to that of the slow-reaction process based on the SSA, demonstrating that the limit averaging system is an efficient approximation of the slow-reaction process in the sense of weak convergence. PMID:27155630
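A toy illustration of the stochastic averaging principle: a fast Ornstein-Uhlenbeck variable is averaged out of a slow equation, and the averaged (limit) system tracks the slow variable of the full simulation. The system is hypothetical, not one of the paper's chemical Langevin equations:

```python
import math, random

def simulate(full, T=10.0, dt=1e-3, eps=0.01, m=2.0, sigma=0.5, seed=1):
    """Euler-Maruyama for a hypothetical two-time-scale system:
    slow  dx = (y - x) dt,
    fast  dy = (m - y)/eps dt + sigma/sqrt(eps) dW  (OU, stationary mean m).
    With full=False the fast variable is averaged out (y -> m), giving
    the limit averaging system dx = (m - x) dt."""
    rng = random.Random(seed)
    x, y = 0.0, m
    for _ in range(int(T / dt)):
        if full:
            y += (m - y) / eps * dt + sigma / math.sqrt(eps) * math.sqrt(dt) * rng.gauss(0, 1)
            x += (y - x) * dt
        else:
            x += (m - x) * dt  # limit averaging system
    return x

print(abs(simulate(True) - simulate(False)))  # small: averaged system tracks the slow variable
```

The separation parameter eps plays the role of the time-scale ratio between fast and slow reactions.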
Domain decomposition methods for systems of conservation laws: Spectral collocation approximations
NASA Technical Reports Server (NTRS)
Quarteroni, Alfio
1989-01-01
Hyperbolic systems of conservation laws are considered, discretized in space by spectral collocation methods and advanced in time by finite difference schemes. At any time level, a domain decomposition method based on an iteration-by-subdomain procedure is introduced, yielding at each step a sequence of independent subproblems (one for each subdomain) that can be solved simultaneously. The method is set up for a general nonlinear problem in several space variables. The convergence analysis, however, is carried out only for a linear one-dimensional system with continuous solutions. A precise form of the error reduction factor at each iteration is derived. Although the method is applied here only to spectral collocation approximations, the idea is fairly general and can be used in different contexts as well. For instance, its application to space discretization by finite differences is straightforward.
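The iteration-by-subdomain idea can be sketched on the simplest hyperbolic model problem, 1D linear advection with an upwind finite-difference scheme standing in for spectral collocation. Each sweep solves the subdomains independently, using the lagged interface value as the neighbor's boundary condition:

```python
def upwind_step(u, c, inflow):
    """One upwind step for u_t + a u_x = 0 with a > 0; c = a*dt/dx."""
    v = u[:]
    v[0] = inflow
    for i in range(1, len(u)):
        v[i] = u[i] - c * (u[i] - u[i - 1])
    return v

def dd_step(u1, u2, c, inflow, iters=3):
    """Advance both subdomains one time step by iteration-by-subdomain:
    each sweep solves the subdomains independently with the interface
    value from the previous sweep as boundary data.  (A finite-difference
    stand-in for the paper's spectral collocation setting; for a > 0 the
    iteration converges after two sweeps.)"""
    g = u2[0]  # lagged interface value
    for _ in range(iters):
        v1 = upwind_step(u1, c, inflow)
        v2 = upwind_step(u2, c, g)  # subdomain 2's inflow = interface value
        g = v1[-1]                  # refresh interface from subdomain 1
    return v1, v2

# Square pulse advected to the right; compare against a single-domain solve.
N, c = 40, 0.5
u = [1.0 if 5 <= i < 15 else 0.0 for i in range(N + 1)]
m = N // 2
u1, u2 = u[:m + 1], u[m:]  # two subdomains sharing the interface point
for _ in range(20):
    u = upwind_step(u, c, 0.0)
    u1, u2 = dd_step(u1, u2, c, 0.0)
err = max(abs(a - b) for a, b in zip(u1 + u2[1:], u))
print(err)  # 0.0: the subdomain iteration reproduces the monodomain solution
```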
Approximate method for calculating heating rates on three-dimensional vehicles
NASA Technical Reports Server (NTRS)
Hamilton, H. Harris; Greene, Francis A.; Dejarnette, F. R.
1994-01-01
An approximate method for calculating heating rates on three-dimensional vehicles at angle of attack is presented. The method is based on the axisymmetric analog for three-dimensional boundary layers and uses a generalized body-fitted coordinate system. Edge conditions for the boundary-layer solution are obtained from an inviscid flowfield solution, and because of the coordinate system used, the method is applicable to any blunt body geometry for which an inviscid flowfield solution can be obtained. The method is validated by comparing with experimental heating data and with thin-layer Navier-Stokes calculations on the shuttle orbiter at both wind-tunnel and flight conditions and with thin-layer Navier-Stokes calculations on the HL-20 at wind-tunnel conditions.
NASA Technical Reports Server (NTRS)
Monchick, L.; Green, S.
1977-01-01
Two dimensionality-reducing approximations, the j_z-conserving coupled states (sometimes called centrifugal decoupling) method and the effective potential method, were applied to collision calculations of He with CO and with HCl. The coupled states method was found to be sensitive to the interpretation of the centrifugal angular momentum quantum number in the body-fixed frame, but the choice leading to the original McGuire-Kouri expression for the scattering amplitude, and to the simplest formulas, proved quite successful in reproducing differential and gas kinetic cross sections. The computationally cheaper effective potential method was much less accurate.
NASA Technical Reports Server (NTRS)
Karpel, M.
1994-01-01
Various control analysis, design, and simulation techniques for aeroservoelastic systems require the equations of motion to be cast in linear, time-invariant state-space form. In order to account for unsteady aerodynamics, rational function approximations must be obtained to represent them in the first-order equations of the state-space formulation. A computer program, MIST, has been developed which determines minimum-state approximations of the coefficient matrices of the unsteady aerodynamic forces. The Minimum-State Method facilitates the design of lower-order control systems, analysis of control system performance, and near-real-time simulation of aeroservoelastic phenomena such as the outboard-wing acceleration response to gust velocity. Engineers using this program will be able to calculate minimum-state rational approximations of the generalized unsteady aerodynamic forces. Using the Minimum-State formulation of the state-space equations, they will be able to obtain state-space models with good open-loop characteristics while reducing the number of aerodynamic equations by an order of magnitude more than traditional approaches. These low-order state-space mathematical models are well suited to the design and simulation of aeroservoelastic systems. The computer program MIST accepts tabular values of the generalized aerodynamic forces over a set of reduced frequencies. It then determines approximations to these tabular data in the Laplace domain using rational functions. MIST provides the capability to select the denominator coefficients in the rational approximations, to selectively constrain the approximations without increasing the problem size, and to determine and emphasize critical frequency ranges in determining the approximations. MIST has been written to allow two types of data weighting options. The first weighting is a traditional normalization of the aerodynamic data to the maximum unit value of each aerodynamic coefficient. The second allows weighting the
Shu, Yu-Chen; Chern, I-Liang; Chang, Chien C.
2014-10-15
Most elliptic interface solvers become complicated for complex interface problems at those "exceptional points" where there are not enough neighboring interior points for high order interpolation. Such complications increase especially in three dimensions, and the solvers are usually reduced to low order accuracy there. In this paper, we classify these exceptional points and propose two recipes to maintain order of accuracy at them, aiming to improve the previous coupling interface method [26]. The idea is also applicable to other interface solvers. The main idea is to have at least first order approximations for second order derivatives at those exceptional points. Recipe 1 is to use the finite difference approximation for the second order derivatives at a nearby interior grid point, whenever this is possible. Recipe 2 is to flip domain signatures and introduce a ghost state so that a second-order method can be applied. This ghost state is a smooth extension of the solution at the exceptional point from the other side of the interface; the original state is recovered by post-processing using nearby states and jump conditions. The choice of recipes is determined by a classification scheme of the exceptional points. The method renders the solution and its gradient uniformly second-order accurate in the entire computational domain. Numerical examples are provided to illustrate the second order accuracy of the proposed method in approximating the gradients of the original states for some complex interfaces that we had previously tested in two and three dimensions, and for a real molecule (1D63), which is double-helix shaped and composed of hundreds of atoms.
NASA Astrophysics Data System (ADS)
Anda, E.; Chiappe, G.; Büsser, C.; Davidovich, M.; Martins, G.; Heidrich-Meisner, F.; Dagotto, E.
2008-03-01
A numerical algorithm to study transport properties of highly correlated local structures is proposed. The method, dubbed the Logarithmic Discretization Embedded Cluster Approximation (LDECA), consists of diagonalizing a finite cluster containing the many-body terms of the Hamiltonian and embedding it into the rest of the system, combined with Wilson's ideas of a logarithmic discretization of the representation of the Hamiltonian. LDECA's rapid convergence eliminates finite-size effects commonly present in the embedding cluster approximation (ECA) method. The physics associated with both one embedded dot and a string of two dots side-coupled to leads is discussed. In the former case, our results accurately agree with Bethe ansatz (BA) data, while in the latter, the results are framed in the conceptual background of a two-stage Kondo problem. A diagrammatic expansion provides the theoretical foundation for the method. It is argued that LDECA allows for the study of complex problems that are beyond the reach of currently available numerical methods.
NASA Astrophysics Data System (ADS)
Liu, Jie; Sun, Xingsheng; Han, Xu; Jiang, Chao; Yu, Dejie
2015-05-01
Based on Gegenbauer polynomial expansion theory and the regularization method, an analytical method is proposed to identify dynamic loads acting on stochastic structures. Dynamic loads are expressed as functions of time and the random parameters in the time domain, and the forward model of dynamic load identification is established through the discretized convolution integral of the loads and the corresponding unit-pulse response functions of the system. The random parameters are approximated through random variables with λ-probability density functions (λ-PDFs) or their derivative PDFs. For this kind of random variable, the Gegenbauer polynomial expansion is the unique correct choice for transforming the load identification problem for a stochastic structure into an equivalent deterministic system, through which the problem can be solved by any available deterministic method. With measured responses containing noise, an improved regularization operator is adopted to overcome the ill-posedness of load reconstruction and to obtain stable, approximate solutions of the inverse problem and valid assessments of the statistics of the identified loads. Numerical simulations demonstrate that, for stochastic structures, the identification and assessment of dynamic loads are achieved steadily and effectively by the presented method.
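The λ-PDF expansion above rests on Gegenbauer polynomials. A minimal sketch of their standard three-term recurrence (textbook mathematics, independent of the paper's load-identification details):

```python
def gegenbauer(n, lam, x):
    """Gegenbauer polynomial C_n^lam(x) via the three-term recurrence
    n C_n = 2x(n + lam - 1) C_{n-1} - (n + 2 lam - 2) C_{n-2},
    with C_0 = 1 and C_1 = 2 lam x.  These polynomials form the basis
    used to expand random parameters with lambda-PDFs."""
    c0, c1 = 1.0, 2.0 * lam * x
    if n == 0:
        return c0
    for k in range(2, n + 1):
        c0, c1 = c1, (2.0 * x * (k + lam - 1.0) * c1 - (k + 2.0 * lam - 2.0) * c0) / k
    return c1

# lam = 1 reduces to Chebyshev polynomials of the second kind: U_3(0.5) = -1
print(gegenbauer(3, 1.0, 0.5))  # -1.0
```

For lam = 1/2 the recurrence yields the Legendre polynomials, another familiar special case.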
James, Kevin R; Dowling, David R
2008-09-01
In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L(1) error-norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L(1) values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations and up to 10(N) when N>1.
An approximate method for calculating heating rates on three-dimensional vehicles
NASA Technical Reports Server (NTRS)
Hamilton, H. H., II; Greene, Francis A.; Dejarnette, Fred R.
1993-01-01
An approximate method for calculating heating rates on three-dimensional vehicles at angle of attack is presented. The method is based on the axisymmetric analog for three-dimensional boundary layers and uses a generalized body-fitted coordinate system. Edge conditions for the boundary layer solution are obtained from an inviscid flowfield solution, and because of the coordinate system used, the method is applicable to any blunt body geometry for which an inviscid flowfield solution can be obtained. It is validated by comparing with experimental heating data and with Navier-Stokes calculations on the Shuttle orbiter at both wind tunnel and flight conditions and with Navier-Stokes calculations on the HL-20 at wind tunnel conditions.
NASA Astrophysics Data System (ADS)
Nikšić, T.; Kralj, N.; Tutiš, T.; Vretenar, D.; Ring, P.
2013-10-01
A new implementation of the finite amplitude method (FAM) for the solution of the relativistic quasiparticle random-phase approximation (RQRPA) is presented, based on the relativistic Hartree-Bogoliubov (RHB) model for deformed nuclei. The numerical accuracy and stability of the FAM-RQRPA is tested in a calculation of the monopole response of 22O. As an illustrative example, the model is applied to a study of the evolution of monopole strength in the chain of Sm isotopes, including the splitting of the giant monopole resonance in axially deformed systems.
NASA Astrophysics Data System (ADS)
Bieg, Bohdan; Chrzanowski, Janusz; Kravtsov, Yury A.; Orsitto, Francesco
Basic principles and recent findings of the quasi-isotropic approximation (QIA) of the geometrical optics method are presented in compact form. QIA was developed in 1969 to describe electromagnetic waves in weakly anisotropic media. It represents the wave field as a power series in two small parameters, one of which is the traditional geometrical optics parameter, equal to the ratio of the wavelength to the plasma characteristic scale, and the other the largest component of the anisotropy tensor. As a result, QIA is ideally suited to tokamak polarimetry/interferometry systems in the submillimeter range, where plasma manifests the properties of a weakly anisotropic medium.
NASA Astrophysics Data System (ADS)
Walker, David M.; Allingham, David; Lee, Heung Wing Joseph; Small, Michael
2010-02-01
Small world network models have been effective in capturing the variable behaviour of reported case data of the SARS coronavirus outbreak in Hong Kong during 2003. Simulations of these models have previously been realized using informed “guesses” of the proposed model parameters and tested for consistency with the reported data by surrogate analysis. In this paper we attempt to provide statistically rigorous parameter distributions using Approximate Bayesian Computation sampling methods. We find that such sampling schemes are a useful framework for fitting parameters of stochastic small world network models where simulation of the system is straightforward but expressing a likelihood is cumbersome.
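In its simplest rejection form, the Approximate Bayesian Computation sampling scheme above reduces to: draw parameters from the prior, simulate the model, and keep draws whose simulated data fall within a tolerance of the observation. A sketch on a toy binomial model; the stochastic small-world epidemic model itself is not reproduced here:

```python
import random

def abc_rejection(observed, simulate, prior, distance, eps, n, seed=0):
    """Plain ABC rejection sampling: accepted draws approximate the
    posterior without ever evaluating a likelihood.  A generic sketch;
    the paper applies ABC to a small-world network model, not this toy."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n):
        theta = prior(rng)
        if distance(simulate(theta, rng), observed) <= eps:
            accepted.append(theta)
    return accepted

# Toy model: daily case count ~ Binomial(500, p); infer p from one observation.
sim = lambda p, rng: sum(rng.random() < p for _ in range(500))
post = abc_rejection(observed=150, simulate=sim,
                     prior=lambda rng: rng.random(),
                     distance=lambda a, b: abs(a - b), eps=15, n=2000)
print(sum(post) / len(post))  # posterior mean near the true p = 0.3
```

Shrinking eps tightens the approximation to the true posterior at the cost of a lower acceptance rate, which is exactly the trade-off such schemes manage for expensive network simulations.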
Approximate method for calculating free vibrations of a large-wind-turbine tower structure
NASA Technical Reports Server (NTRS)
Das, S. C.; Linscott, B. S.
1977-01-01
A set of ordinary differential equations was derived for a simplified structural dynamic lumped-mass model of a typical large wind turbine tower structure. Dunkerley's equation was used to arrive at a solution for the fundamental natural frequencies of the tower in bending and torsion. The ERDA-NASA 100-kW wind turbine tower structure was modeled, and the fundamental frequencies were determined by the simplified method described. The approximate fundamental natural frequencies for the tower agree within 18 percent with test data and with other analytical predictions.
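Dunkerley's equation estimates the fundamental frequency of a combined system from the frequencies obtained with each lumped mass acting alone. A minimal sketch with hypothetical component frequencies, not the actual ERDA-NASA tower values:

```python
import math

def dunkerley(freqs):
    """Dunkerley's approximation for the fundamental natural frequency
    of a combined system: 1/w^2 ~= sum_i 1/w_i^2, where w_i is the
    frequency of the structure carrying the i-th mass alone.  The result
    underestimates the true fundamental frequency (a lower bound)."""
    return 1.0 / math.sqrt(sum(1.0 / w**2 for w in freqs))

# Hypothetical single-mass frequencies (rad/s) for a lumped-mass tower model.
print(round(dunkerley([12.0, 20.0, 35.0]), 3))  # 9.872
```

Note the estimate is always below the smallest single-mass frequency, consistent with its lower-bound character.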
Ghosh, Debashree
2014-03-07
Hybrid quantum mechanics/molecular mechanics (QM/MM) methods provide an attractive way to closely retain the accuracy of the QM method with the favorable computational scaling of the MM method. Therefore, it is not surprising that QM/MM methods are being increasingly used for large chemical/biological systems. Hybrid equation of motion coupled cluster singles doubles/effective fragment potential (EOM-CCSD/EFP) methods have been developed over the last few years to understand the effect of solvents and other condensed phases on the electronic spectra of chromophores. However, the computational cost of this approach is still dominated by the steep scaling of the EOM-CCSD method. In this work, we propose and implement perturbative approximations to the EOM-CCSD method in this hybrid scheme to reduce the cost of EOM-CCSD/EFP. The timings and accuracy of this hybrid approach is tested for calculation of ionization energies, excitation energies, and electron affinities of microsolvated nucleic acid bases (thymine and cytosine), phenol, and phenolate.
Fine Mapping Causal Variants with an Approximate Bayesian Method Using Marginal Test Statistics.
Chen, Wenan; Larrabee, Beth R; Ovsyannikova, Inna G; Kennedy, Richard B; Haralambieva, Iana H; Poland, Gregory A; Schaid, Daniel J
2015-07-01
Two recently developed fine-mapping methods, CAVIAR and PAINTOR, demonstrate better performance over other fine-mapping methods. They also have the advantage of using only the marginal test statistics and the correlation among SNPs. Both methods leverage the fact that the marginal test statistics asymptotically follow a multivariate normal distribution and are likelihood based. However, their relationship with Bayesian fine mapping, such as BIMBAM, is not clear. In this study, we first show that CAVIAR and BIMBAM are actually approximately equivalent to each other. This leads to a fine-mapping method using marginal test statistics in the Bayesian framework, which we call CAVIAR Bayes factor (CAVIARBF). Another advantage of the Bayesian framework is that it can answer both association and fine-mapping questions. We also used simulations to compare CAVIARBF with other methods under different numbers of causal variants. The results showed that both CAVIARBF and BIMBAM have better performance than PAINTOR and other methods. Compared to BIMBAM, CAVIARBF has the advantage of using only marginal test statistics and takes about one-quarter to one-fifth of the running time. We applied different methods on two independent cohorts of the same phenotype. Results showed that CAVIARBF, BIMBAM, and PAINTOR selected the same top 3 SNPs; however, CAVIARBF and BIMBAM had better consistency in selecting the top 10 ranked SNPs between the two cohorts. Software is available at https://bitbucket.org/Wenan/caviarbf.
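To illustrate the idea of computing Bayes factors from marginal test statistics alone, here is a Wakefield-style single-SNP approximate Bayes factor. This is a standard textbook quantity, not the multi-SNP CAVIARBF model, and the prior effect-size variance W is a hypothetical choice:

```python
import math

def abf(beta_hat, se, W=0.04):
    """Wakefield-style approximate Bayes factor for one SNP, computed
    from marginal statistics only (effect estimate and standard error).
    W is the prior variance of the effect size (hypothetical here).
    Illustrates the 'marginal statistics' idea; not the CAVIARBF model."""
    V = se**2
    z = beta_hat / se
    # BF of association vs. null; values above 1 favor association.
    return math.sqrt(V / (V + W)) * math.exp(z**2 * W / (2 * (V + W)))

print(abf(0.0, 0.1))        # z = 0: evidence favors the null (BF < 1)
print(abf(0.5, 0.1) > 1e3)  # strong marginal signal: large Bayes factor
```

Multi-SNP methods such as CAVIARBF extend this by modeling the joint distribution of the marginal statistics through the SNP correlation matrix.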
NASA Astrophysics Data System (ADS)
Anda, E. V.; Chiappe, G.; Büsser, C. A.; Davidovich, M. A.; Martins, G. B.; Heidrich-Meisner, F.; Dagotto, E.
2008-08-01
This work proposes an approach to study transport properties of highly correlated local structures. The method, dubbed the logarithmic discretization embedded cluster approximation (LDECA), consists of diagonalizing a finite cluster containing the many-body terms of the Hamiltonian and embedding it into the rest of the system, combined with Wilson’s idea of a logarithmic discretization of the representation of the Hamiltonian. The physics associated with both one embedded dot and a double-dot side coupled to leads is discussed in detail. In the former case, the results perfectly agree with Bethe ansatz data, while in the latter, the physics obtained is framed in the conceptual background of a two-stage Kondo problem. A many-body formalism provides a solid theoretical foundation to the method. We argue that LDECA is well suited to study complicated problems such as transport through molecules or quantum dot structures with complex ground states.
Communication: An efficient analytic gradient theory for approximate spin projection methods
NASA Astrophysics Data System (ADS)
Hratchian, Hrant P.
2013-03-01
Spin polarized and broken symmetry density functional theory are popular approaches for treating the electronic structure of open shell systems. However, spin contamination can significantly affect the quality of predicted geometries and properties. One scheme for addressing this concern in studies involving broken-symmetry states is the approximate projection method developed by Yamaguchi and co-workers. Critical to the exploration of potential energy surfaces and the study of properties using this method will be an efficient analytic gradient theory. This communication introduces such a theory formulated, for the first time, within the framework of general post-self consistent field (SCF) derivative theory. Importantly, the approach taken here avoids the need to explicitly solve for molecular orbital derivatives of each nuclear displacement perturbation, as has been used in a recent implementation. Instead, the well-known z-vector scheme is employed and only one SCF response equation is required.
NASA Astrophysics Data System (ADS)
Zhang, Ji; Ding, Mingyue; Yuchi, Ming; Hou, Wenguang; Ye, Huashan; Qiu, Wu
2010-03-01
Factor analysis is an efficient technique for the analysis of dynamic structures in medical image sequences and has recently been used in contrast-enhanced ultrasound (CEUS) of hepatic perfusion. Time-intensity curves (TICs) extracted by factor analysis can provide much more diagnostic information for radiologists and improve the diagnostic rate for focal liver lesions (FLLs). However, one of the major drawbacks of factor analysis of dynamic structures (FADS) is the nonuniqueness of the result when only the non-negativity criterion is used. In this paper, we propose a new replace-approximation method based on apex-seeking for ambiguous FADS solutions. Due to a partial overlap of different structures, factor curves are assumed to be approximately replaceable by curves existing in the medical image sequences, so finding optimal curves is the key point of the technique. No matter how many structures are assumed, our method always starts seeking apexes in the one-dimensional space onto which the original high-dimensional data are mapped. By finding two stable apexes in one-dimensional space, the method can ascertain the third one, and the process continues until all structures are found. The technique was tested on two blood-perfusion phantoms and compared with two variants of the apex-seeking method. The results showed that the technique outperformed both variants in region-of-interest measurements from the phantom data. It can be applied to the estimation of TICs derived from CEUS images and to the separation of different physiological regions in hepatic perfusion.
NASA Astrophysics Data System (ADS)
Sato, Takeshi; Nakai, Hiromi
2009-12-01
A new method to calculate the atom-atom dispersion coefficients in a molecule is proposed for the use in density functional theory with dispersion (DFT-D) correction. The method is based on the local response approximation due to Dobson and Dinte [Phys. Rev. Lett. 76, 1780 (1996)], with modified dielectric model recently proposed by Vydrov and van Voorhis [J. Chem. Phys. 130, 104105 (2009)]. The local response model is used to calculate the distributed multipole polarizabilities of atoms in a molecule, from which the dispersion coefficients are obtained by an explicit frequency integral of the Casimir-Polder type. Thus obtained atomic polarizabilities are also used in the damping function for the short-range singularity. Unlike empirical DFT-D methods, the local response dispersion (LRD) method is able to calculate the dispersion energy from the ground-state electron density only. It is applicable to any geometry, free from physical constants such as van der Waals radii or atomic polarizabilities, and computationally very efficient. The LRD method combined with the long-range corrected DFT functional (LC-BOP) is applied to calculations of S22 weakly bound complex set [Phys. Chem. Chem. Phys. 8, 1985 (2006)]. Binding energies obtained by the LC-BOP + LRD agree remarkably well with ab initio references.
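The Casimir-Polder frequency integral at the heart of the LRD method can be illustrated with one-term Lorentzian polarizabilities, for which London's formula gives the exact answer. The model polarizabilities below are toys, not the paper's local response dielectric model:

```python
import math

def c6_casimir_polder(a0A, wA, a0B, wB, wmax=200.0, n=200000):
    """Dispersion coefficient from the Casimir-Polder integral
    C6 = (3/pi) * Int_0^inf alphaA(i w) alphaB(i w) dw, evaluated by the
    trapezoidal rule with one-term Lorentzian model polarizabilities
    alpha(i w) = alpha0 / (1 + (w/w0)^2)  (a toy model)."""
    h = wmax / n
    total = 0.0
    for k in range(n + 1):
        w = k * h
        f = (a0A / (1 + (w / wA) ** 2)) * (a0B / (1 + (w / wB) ** 2))
        total += f * (0.5 if k in (0, n) else 1.0)
    return 3.0 / math.pi * total * h

# London's formula gives the exact value for this model:
# C6 = (3/2) * a0A*a0B * wA*wB / (wA + wB) = 0.75 for unit parameters.
exact = 1.5 * 1.0 * 1.0 * (1.0 * 1.0) / (1.0 + 1.0)
print(abs(c6_casimir_polder(1.0, 1.0, 1.0, 1.0) - exact) < 1e-3)  # True
```

The LRD method replaces these toy polarizabilities with distributed multipole polarizabilities computed from the ground-state density, but the frequency integral has the same structure.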
NASA Astrophysics Data System (ADS)
Kaporin, I. E.
2012-02-01
In order to precondition a sparse symmetric positive definite matrix, its approximate inverse is examined, which is represented as the product of two sparse mutually adjoint triangular matrices. In this way, the solution of the corresponding system of linear algebraic equations (SLAE) by applying the preconditioned conjugate gradient method (CGM) is reduced to performing only elementary vector operations and calculating sparse matrix-vector products. A method for constructing the above preconditioner is described and analyzed. The triangular factor has a fixed sparsity pattern and is optimal in the sense that the preconditioned matrix has a minimum K-condition number. The use of polynomial preconditioning based on Chebyshev polynomials makes it possible to considerably reduce the amount of scalar product operations (at the cost of an insignificant increase in the total number of arithmetic operations). The possibility of an efficient massively parallel implementation of the resulting method for solving SLAEs is discussed. For a sequential version of this method, the results obtained by solving 56 test problems from the Florida sparse matrix collection (which are large-scale and ill-conditioned) are presented. These results show that the method is highly reliable and has low computational costs.
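A minimal sketch of the preconditioned conjugate gradient iteration with an approximate inverse applied as a matrix-vector product. The diagonal (Jacobi) approximate inverse used here is a simple stand-in for the paper's factored sparse approximate inverse and Chebyshev polynomial layer, and the test matrix is hypothetical:

```python
def pcg(A, b, Minv, tol=1e-10, maxit=100):
    """Preconditioned conjugate gradients with an approximate inverse
    Minv ~ A^-1 applied via matrix-vector product, so the iteration
    needs only vector operations and matrix-vector products (as in the
    paper).  Here Minv is the diagonal (Jacobi) approximate inverse,
    not the paper's factored sparse approximate inverse."""
    n = len(b)
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    x = [0.0] * n
    r = b[:]
    z = mv(Minv, r)
    p = z[:]
    rz = dot(r, z)
    for _ in range(maxit):
        Ap = mv(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = mv(Minv, r)
        rz, rz_old = dot(r, z), rz
        p = [zi + (rz / rz_old) * pi for zi, pi in zip(z, p)]
    return x

# Hypothetical SPD tridiagonal system with its Jacobi approximate inverse.
A = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
Minv = [[0.25, 0.0, 0.0], [0.0, 0.25, 0.0], [0.0, 0.0, 0.25]]
b = [5.0, 6.0, 5.0]
print([round(v, 6) for v in pcg(A, b, Minv)])  # solves A x = b: [1.0, 1.0, 1.0]
```

A better approximate inverse (e.g. the factored form with minimized K-condition number) reduces the iteration count at the same per-iteration cost structure.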
An improved approximate-Bayesian model-choice method for estimating shared evolutionary history
2014-01-01
Background To understand biological diversification, it is important to account for large-scale processes that affect the evolutionary history of groups of co-distributed populations of organisms. Such events predict temporally clustered divergence times, a pattern that can be estimated using genetic data from co-distributed species. I introduce a new approximate-Bayesian method for comparative phylogeographical model choice that estimates the temporal distribution of divergences across taxa from multi-locus DNA sequence data. The model is an extension of that implemented in msBayes. Results By reparameterizing the model, introducing more flexible priors on demographic and divergence-time parameters, and implementing a non-parametric Dirichlet-process prior over divergence models, I improved the robustness, accuracy, and power of the method for estimating shared evolutionary history across taxa. Conclusions The results demonstrate that the improved performance of the new method is due to (1) more appropriate priors on divergence-time and demographic parameters that avoid prohibitively small marginal likelihoods for models with more divergence events, and (2) the Dirichlet process providing a flexible prior on divergence histories that does not strongly disfavor models with intermediate numbers of divergence events. The new method yields more robust estimates of posterior uncertainty, and thus greatly reduces the tendency to incorrectly estimate models of shared evolutionary history with strong support. PMID:24992937
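The Dirichlet-process prior over divergence models can be illustrated with a generic Chinese-restaurant-process sampler: each taxon either joins an existing divergence-time class or opens a new one. This is a sketch of the prior only, not msBayes or the author's implementation, and the concentration parameter alpha is a free choice.

```python
import random

def crp_partition(n_taxa, alpha, rng=None):
    """Sample a partition of taxa into divergence-time classes from a
    Dirichlet-process (Chinese restaurant process) prior."""
    rng = rng or random.Random(0)
    counts = []   # taxa per divergence class
    labels = []   # class index of each taxon
    for i in range(n_taxa):
        # join class k with prob counts[k]/(i+alpha); open a new one with prob alpha/(i+alpha)
        r = rng.random() * (i + alpha)
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:
                labels.append(k)
                counts[k] += 1
                break
        else:
            labels.append(len(counts))
            counts.append(1)
    return labels

one_event = crp_partition(8, alpha=0.0)                     # alpha -> 0: all taxa co-diverge
spread = crp_partition(8, alpha=2.0, rng=random.Random(1))  # larger alpha: more divergence events
```

The prior thus places mass on every number of divergence events from 1 to the number of taxa, which is exactly the flexibility the abstract credits for avoiding the bias toward few-event models.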
Approximate analytic method for high-apogee twelve-hour orbits of artificial Earth's satellites
NASA Astrophysics Data System (ADS)
Vashkovyaka, M. A.; Zaslavskii, G. S.
2016-09-01
We propose an approach to studying the evolution of high-apogee twelve-hour orbits of artificial Earth satellites. We describe the parameters of the motion model used for the satellite, in which the principal gravitational perturbations of the Moon and Sun, the nonsphericity of the Earth, and perturbations from the light pressure force are approximately taken into account. To solve the system of averaged equations describing the evolution of the orbit parameters of an artificial satellite, we use both numerical and analytic methods. To select initial parameters of the twelve-hour orbit, we assume that the ground track of the satellite is stable. Results obtained by the analytic method and by numerical integration of the averaged system are compared. For intervals of several years, we obtain estimates of oscillation periods and amplitudes for the orbital elements. To verify the results and estimate the precision of the method, we use numerical integration of the rigorous (not averaged) equations of motion of the satellite, which take the forces acting on the satellite into account substantially more completely and precisely. The described method is not restricted to the orbit evolution of artificial Earth satellites; it can also be applied to the orbit evolution of satellites of other planets of the Solar system, should the corresponding research problem arise and this special class of resonance orbits be used.
Coherent-potential approximation in the tight-binding linear muffin-tin orbital method
NASA Astrophysics Data System (ADS)
Singh, Prabhakar P.; Gonis, A.
1993-07-01
We describe a consistent approach for applying the coherent-potential approximation (CPA) to the various representations of the linear muffin-tin orbital method. Unlike the previous works of Kudrnovský et al. [Phys. Rev. B 35, 2487 (1987); 41, 7515 (1990)], our results for the ensemble-averaged Green functions in the tight-binding representation yield E- and r-dependent quantities that are consistent with the traditional applications of the single-site CPA. To illustrate the reliability and the usefulness of our approach we compare the nonspherically averaged charge densities, calculated in real space, of ordered NiPt in L10 structure and the substitutionally disordered Ni0.5Pt0.5 on a face-centered-cubic lattice.
Relaxation and approximate factorization methods for the unsteady full potential equation
NASA Technical Reports Server (NTRS)
Shankar, V.; Ide, H.; Gorski, J.
1984-01-01
The unsteady form of the full potential equation is solved in conservation form, using implicit methods based on approximate factorization and relaxation schemes. A local time linearization for density is introduced to enable the equation to be solved in terms of phi, the velocity potential. A novel flux-biasing technique is applied to generate proper forms of the artificial viscosity for treating hyperbolic regions with shocks and sonic lines present. The wake is properly modeled by accounting not only for jumps in phi, but also for jumps in higher derivatives of phi obtained from requirements of density continuity. The far field is modeled using the Riemann invariants to simulate nonreflecting boundary conditions. Results are presented for flows over airfoils, cylinders, and spheres. Comparisons are made with available Euler and full potential results.
Multi-scale crystal growth computations via an approximate block Newton method
NASA Astrophysics Data System (ADS)
Yeckel, Andrew; Lun, Lisa; Derby, Jeffrey J.
2010-04-01
Multi-scale and multi-physics simulations, such as the computational modeling of crystal growth processes, will benefit from the modular coupling of existing codes rather than the development of monolithic, single-application software. An effective coupling approach, the approximate block Newton approach (ABN), is developed and applied to the steady-state computation of crystal growth in an electrodynamic gradient freeze system. Specifically, the code CrysMAS is employed for furnace-scale heat transfer computations and is coupled with the code Cats2D to calculate melt fluid dynamics and phase-change phenomena. The ABN coupling strategy proves to be vastly more reliable and cost efficient than simpler coupling methods for this problem and is a promising approach for future crystal growth models.
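The modular-coupling idea behind the ABN approach can be sketched abstractly: treat the two codes as black-box maps F and G and apply Newton's method to the interface residual, using finite differences for the Jacobian. The maps F and G below are invented scalar stand-ins, not CrysMAS or Cats2D, and the real ABN method is considerably more elaborate.

```python
import numpy as np

def block_newton(F, G, v0, tol=1e-12, maxit=50, h=1e-7):
    """Solve the coupling residual R(v) = G(F(v)) - v = 0 by Newton's method,
    treating F and G as black boxes and differencing for the Jacobian."""
    v = np.atleast_1d(np.asarray(v0, dtype=float))
    for _ in range(maxit):
        r = G(F(v)) - v
        if np.linalg.norm(r) < tol:
            break
        n = v.size
        J = np.empty((n, n))
        for j in range(n):
            e = np.zeros(n)
            e[j] = h
            # directional derivative of the residual by forward differencing
            J[:, j] = (G(F(v + e)) - (v + e) - r) / h
        v = v - np.linalg.solve(J, r)
    return v

# stand-in "solvers": F could play the furnace-scale code, G the melt-scale code
F = lambda v: np.cos(v)
G = lambda u: 0.5 * u
v = block_newton(F, G, v0=[1.0])
```

In contrast, simple block Gauss-Seidel coupling (alternately calling F and G) converges only linearly and can diverge for strongly coupled problems, which is why the Newton-style outer iteration pays off.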
An approximate method for analyzing transient condensation on spray in HYLIFE-II
Bai, R.Y.; Schrock, V.E. (Dept. of Nuclear Engineering)
1990-01-01
The HYLIFE-II conceptual design calls for analysis of highly transient condensation on droplets to achieve a rapidly decaying pressure field. Drops exposed to the required transient vapor pressure field are first heated by condensation but later begin to reevaporate after the vapor temperature falls below the drop surface temperature. An approximate method of analysis has been developed based on the assumption that the thermal resistance is concentrated in the liquid. The time dependent boundary condition is treated via the Duhamel integral for the pure conduction model. The resulting Nusselt number is enhanced to account for convection within the drop and then used to predict the drop mean temperature history. Many histories are considered to determine the spray rate necessary to achieve the required complete condensation.
2013-01-01
Background Genomic selection is an effective tool for animal and plant breeding, allowing effective individual selection without phenotypic records through the prediction of genomic breeding value (GBV). To date, genomic selection has focused on a single trait. However, actual breeding often targets multiple correlated traits, so joint analysis taking the correlation between traits into consideration, which might result in more accurate GBV prediction than analyzing each trait separately, is suitable for multi-trait genomic selection. This requires an extension of the single-trait GBV prediction model to the multi-trait case. As the computational burden of multi-trait analysis is even higher than that of single-trait analysis, an effective computational method for constructing a multi-trait prediction model is also needed. Results We describe a Bayesian regression model incorporating variable selection for jointly predicting GBVs of multiple traits and devise both an MCMC iteration and a variational approximation for Bayesian estimation of the parameters in this multi-trait model. The proposed Bayesian procedures with MCMC iteration and variational approximation are referred to as MCBayes and varBayes, respectively. Using simulated datasets of SNP genotypes and phenotypes for three traits with high and low heritabilities, we compared the accuracy in predicting GBVs between multi-trait and single-trait analyses as well as between MCBayes and varBayes. The results showed that, compared to single-trait analysis, multi-trait analysis enabled much more accurate GBV prediction for low-heritability traits correlated with high-heritability traits, by utilizing the correlation structure between traits, while the prediction accuracy for uncorrelated low-heritability traits was comparable to or lower than that of single-trait analysis, depending on the prior probability that a SNP has zero effect. Although the prediction
NASA Astrophysics Data System (ADS)
Hartikainen, Markus E.; Ojalehto, Vesa; Sahlstedt, Kristian
2015-03-01
Using an interactive multiobjective optimization method called NIMBUS and an approximation method called PAINT, preferable solutions to a five-objective problem of operating a wastewater treatment plant are found. The decision maker giving preference information is an expert in wastewater treatment plant design at the engineering company Pöyry Finland Ltd. The wastewater treatment problem is computationally expensive and requires running a simulator to evaluate the values of the objective functions. This often leads to problems with interactive methods as the decision maker may get frustrated while waiting for new solutions to be computed. Thus, a newly developed PAINT method is used to speed up the iterations of the NIMBUS method. The PAINT method interpolates between a given set of Pareto optimal outcomes and constructs a computationally inexpensive mixed integer linear surrogate problem for the original wastewater treatment problem. With the mixed integer surrogate problem, the time required from the decision maker is comparatively short. In addition, a new IND-NIMBUS® PAINT module is developed to allow the smooth interoperability of the NIMBUS method and the PAINT method.
NASA Astrophysics Data System (ADS)
Wu, Kun; Zhang, Feng; Min, Jinzhong; Yu, Qiu-Run; Wang, Xin-Yue; Ma, Leiming
2016-09-01
The adding method, which can calculate infrared radiative transfer (IRT) through an inhomogeneous atmosphere with multiple layers, has previously been applied to the δ-four-stream discrete-ordinates method (DOM); that scheme is referred to as δ-4DDA. However, the adding method has not yet been applied to the δ-four-stream spherical harmonic expansion method (SHM) for solving infrared radiative transfer through multiple layers. In this paper, the adding method for δ-four-stream SHM (δ-4SDA) is derived and its accuracy is evaluated. The result of δ-4SDA in an idealized medium with homogeneous optical properties is significantly more accurate than that of the adding method for δ-two-stream DOM (δ-2DDA). The relative errors of δ-2DDA can exceed 15% at thin optical depths for downward emissivity, while the errors of δ-4SDA are bounded by 2%. However, the result of δ-4SDA is slightly less accurate than that of δ-4DDA. In a radiation model with a realistic atmospheric profile including gaseous transmission, the heating-rate accuracy of δ-4SDA is significantly superior to that of δ-2DDA, especially for cloudy sky. The heating-rate accuracy of δ-4SDA is slightly lower than that of δ-4DDA under water cloud conditions, but superior to it in ice cloud cases. Besides, the computational efficiency of δ-4SDA is higher than that of δ-4DDA.
Rational trigonometric approximations using Fourier series partial sums
NASA Technical Reports Server (NTRS)
Geer, James F.
1993-01-01
A class of approximations (S(sub N,M)) to a periodic function f which uses the ideas of Pade, or rational function, approximations based on the Fourier series representation of f, rather than on the Taylor series representation of f, is introduced and studied. Each approximation S(sub N,M) is the quotient of a trigonometric polynomial of degree N and a trigonometric polynomial of degree M. The coefficients in these polynomials are determined by requiring that an appropriate number of the Fourier coefficients of S(sub N,M) agree with those of f. Explicit expressions are derived for these coefficients in terms of the Fourier coefficients of f. It is proven that these 'Fourier-Pade' approximations converge pointwise to (f(x(exp +))+f(x(exp -)))/2 more rapidly (in some cases by a factor of 1/k(exp 2M)) than the Fourier series partial sums on which they are based. The approximations are illustrated by several examples and an application to the solution of an initial-boundary value problem for the simple heat equation is presented.
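The coefficient-matching step common to Padé constructions can be sketched as follows. Here it is applied to plain power-series coefficients; in the paper the c_k would be Fourier coefficients of f (with z = e^{ix}), so this illustrates only the linear algebra of matching, not the full Fourier-Padé construction.

```python
import numpy as np

def pade_coeffs(c, N, M):
    """Given series coefficients c_0..c_{N+M}, return numerator a_0..a_N and
    denominator b_0..b_M (normalized so b_0 = 1) of the [N/M] Pade approximant."""
    # denominator: require sum_{j=0}^{M} b_j c_{k-j} = 0 for k = N+1 .. N+M
    A = np.array([[c[k - j] if k - j >= 0 else 0.0 for j in range(1, M + 1)]
                  for k in range(N + 1, N + M + 1)])
    rhs = -np.array([c[k] for k in range(N + 1, N + M + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # numerator: a_k = sum_{j=0}^{min(k,M)} b_j c_{k-j} for k = 0 .. N
    a = np.array([sum(b[j] * c[k - j] for j in range(min(k, M) + 1))
                  for k in range(N + 1)])
    return a, b

# geometric series sum_k (z/2)^k = 1/(1 - z/2): the [1/1] approximant is exact
c = [0.5 ** k for k in range(6)]
a, b = pade_coeffs(c, 1, 1)
```

For a function that is itself rational, the approximant reproduces it exactly once N and M are large enough, which is the mechanism behind the accelerated convergence claimed in the abstract.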
NASA Astrophysics Data System (ADS)
Lotov, A. V.; Maiskaya, T. S.
2012-01-01
For multicriteria convex optimization problems, new nonadaptive methods are proposed for polyhedral approximation of the multidimensional Edgeworth-Pareto hull (EPH), which is a maximal set having the same Pareto frontier as the set of feasible criteria vectors. The methods are based on evaluating the support function of the EPH for a collection of directions generated by a suboptimal covering on the unit sphere. Such directions are constructed in advance by applying an asymptotically effective adaptive method for the polyhedral approximation of convex compact bodies, namely, by the estimate refinement method. Due to the a priori definition of the directions, the proposed EPH approximation procedure can easily be implemented with parallel computations. Moreover, the use of nonadaptive methods considerably simplifies the organization of EPH approximation on the Internet. Experiments with an applied problem (from 3 to 5 criteria) showed that the methods are fairly similar in characteristics to adaptive methods. Therefore, they can be used in parallel computations and on the Internet.
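The support-function construction can be sketched for a maximization problem in two criteria. The direction grid below is a simple angular grid rather than a suboptimal sphere covering, so this only illustrates the nonadaptive, trivially parallel structure: every support-function evaluation is independent of the others.

```python
import numpy as np

def eph_halfspaces(Y, dirs):
    """Outer polyhedral approximation of the Edgeworth-Pareto hull of the
    criteria vectors Y (rows), for maximization: EPH = conv(Y) + R^m_-.
    The support function is finite only for directions with nonnegative parts."""
    dirs = dirs[np.all(dirs >= 0.0, axis=1)]
    h = (dirs @ Y.T).max(axis=1)        # support function value, one per direction
    return dirs, h

def in_eph(y, dirs, h, tol=1e-9):
    """Membership test against the halfspace description {x : <d, x> <= h(d)}."""
    return bool(np.all(dirs @ np.asarray(y) <= h + tol))

# precomputed (nonadaptive) direction grid on the unit circle
angles = np.linspace(0.0, np.pi / 2, 16)
dirs = np.column_stack([np.cos(angles), np.sin(angles)])
Y = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
dirs, h = eph_halfspaces(Y, dirs)
```

Because the directions are fixed in advance, the `max` evaluations can be farmed out to parallel workers or remote machines, which is the organizational advantage the abstract emphasizes.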
Albrecht, Andreas A; Day, Luke; Abdelhadi Ep Souki, Ouala; Steinhöfel, Kathleen
2016-02-01
The analysis of energy landscapes plays an important role in mathematical modelling, simulation and optimisation. Among the main features of interest are the number and distribution of local minima within the energy landscape. In 2002, Granier and Kallel proposed a sampling procedure for estimating the number of local minima. In the present paper, we focus on improved heuristic implementations of the general framework devised by Granier and Kallel with regard to run-time behaviour and accuracy of predictions. The new heuristic method is demonstrated for the case of partial energy landscapes induced by RNA secondary structures. While the computation of minimum free energy RNA secondary structures has been studied for a long time, the analysis of folding landscapes has gained momentum over the past years in the context of co-transcriptional folding and deeper insights into cell processes. The new approach has been applied to ten RNA instances of length between 99 nt and 504 nt and their respective partial energy landscapes defined by secondary structures within an energy offset ΔE above the minimum free energy conformation. The number of local minima within the partial energy landscapes ranges from 1440 to 3441. For the best approximations, our heuristic method deviates on average by less than 3.0% from the true number of local minima.
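A sampling estimate of the number of local minima can be sketched with a capture-recapture (Chao1-style) correction on a toy one-dimensional landscape. This is not the Granier-Kallel estimator itself, and the landscape is invented; it only illustrates the idea of inferring unseen minima from the frequency spectrum of descents.

```python
import math
import random
from collections import Counter

def descend(E, x, lo, hi):
    """Greedy descent on the integer grid [lo, hi] to the nearest local minimum."""
    while True:
        nbrs = [y for y in (x - 1, x + 1) if lo <= y <= hi]
        best = min(nbrs, key=E)
        if E(best) < E(x):
            x = best
        else:
            return x

def estimate_minima(E, lo, hi, n_samples, rng):
    """Descend from random starts and apply a Chao1-style estimate of the
    total number of local minima from the observed frequency spectrum."""
    hits = Counter(descend(E, rng.randint(lo, hi), lo, hi) for _ in range(n_samples))
    s = len(hits)                                   # distinct minima observed
    f1 = sum(1 for c in hits.values() if c == 1)    # minima seen exactly once
    f2 = sum(1 for c in hits.values() if c == 2)    # minima seen exactly twice
    est = s + (f1 * f1) / (2.0 * f2) if f2 else s + f1 * (f1 - 1) / 2.0
    return est, hits

E = lambda x: math.sin(0.7 * x) + 0.3 * math.cos(0.23 * x)   # toy landscape
est, hits = estimate_minima(E, 0, 100, 200, random.Random(42))
```

Minima reached by many descents have large basins of attraction; the correction term compensates for small-basin minima that random starts are likely to miss.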
Improved locality-sensitive hashing method for the approximate nearest neighbor problem
NASA Astrophysics Data System (ADS)
Lu, Ying-Hua; Ma, Ting-Huai; Zhong, Shui-Ming; Cao, Jie; Wang, Xin; Abdullah, Al-Dhelaan
2014-08-01
In recent years, the nearest neighbor search (NNS) problem has been widely used in various interesting applications. Locality-sensitive hashing (LSH), a popular algorithm for the approximate nearest neighbor problem, has proved to be an efficient way to solve the NNS problem in high-dimensional and large-scale databases. Based on the scheme of p-stable LSH, this paper introduces an improved algorithm called randomness-based locality-sensitive hashing (RLSH). Our proposed algorithm modifies the query strategy: instead of mapping the query point into all hash tables, it randomly selects a single hash table to project the query point during the nearest neighbor query and reconstructs the candidate points for finding the nearest neighbors. This strategy ensures that RLSH spends less time searching for the nearest neighbors than the p-stable LSH algorithm while keeping a high recall, and it is shown to promote the diversity of the candidate points even with fewer hash tables. Experiments on a synthetic dataset and an open dataset show that our method requires less time and less space than p-stable LSH at the same recall.
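The p-stable hashing scheme and the single-table query can be sketched as follows. Bucket parameters are illustrative, candidate re-ranking is omitted, and the single-random-table probe is only our reading of the RLSH idea described in the abstract.

```python
import numpy as np
from collections import defaultdict

class PStableLSH:
    """p-stable (Gaussian) LSH index; query_rlsh probes ONE randomly chosen
    hash table instead of all of them, following the RLSH idea sketched above."""
    def __init__(self, dim, n_tables=8, n_hashes=4, w=2.0, seed=0):
        self.rng = np.random.default_rng(seed)
        self.a = self.rng.normal(size=(n_tables, n_hashes, dim))  # stable projections
        self.b = self.rng.uniform(0.0, w, size=(n_tables, n_hashes))
        self.w = w
        self.tables = [defaultdict(list) for _ in range(n_tables)]

    def _key(self, t, v):
        # h(v) = floor((a.v + b) / w), one value per hash function in table t
        return tuple(np.floor((self.a[t] @ v + self.b[t]) / self.w).astype(int))

    def add(self, idx, v):
        for t in range(len(self.tables)):
            self.tables[t][self._key(t, v)].append(idx)

    def query_rlsh(self, v):
        t = int(self.rng.integers(len(self.tables)))
        return self.tables[t].get(self._key(t, v), [])

rng = np.random.default_rng(7)
points = rng.normal(size=(200, 16))
index = PStableLSH(dim=16)
for i, p in enumerate(points):
    index.add(i, p)
candidates = index.query_rlsh(points[3])   # an exact duplicate always collides with itself
```

Probing one table instead of all of them cuts query cost by the number of tables; the recall loss is offset by the diversity of buckets across queries, which is the trade-off the paper studies.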
NASA Astrophysics Data System (ADS)
Miura, Shinichi; Okazaki, Susumu
2001-09-01
In this paper, the path integral molecular dynamics (PIMD) method has been extended to employ an efficient approximation of the path action referred to as the pair density matrix approximation. Configurations of the isomorphic classical systems were dynamically sampled by introducing fictitious momenta, as in PIMD based on the standard primitive approximation. The indistinguishability of the particles was handled by a pseudopotential of particle permutation that is an extension of our previous one [J. Chem. Phys. 112, 10116 (2000)]. As a test of our methodology for Boltzmann statistics, calculations have been performed for liquid helium-4 at 4 K. We found that PIMD with the pair density matrix approximation dramatically reduced the computational cost of obtaining the structural as well as dynamical (using the centroid molecular dynamics approximation) properties at the same level of accuracy as the primitive approximation. With respect to identical particles, we performed the calculation of a bosonic triatomic cluster. Unlike the primitive approximation, the pseudopotential scheme based on the pair density matrix approximation described well the bosonic correlation among the interacting atoms. Convergence with a small number of path discretization points, achieved by this approximation, enables us to construct a method that avoids the problem of the vanishing pseudopotential encountered in calculations with the primitive approximation.
Discretization and approximation methods for reinforcement learning of highly reconfigurable systems
NASA Astrophysics Data System (ADS)
Lampton, Amanda Kathryn
There are a number of techniques that are used to solve reinforcement learning problems, but very few that have been developed for and tested on highly reconfigurable systems cast as reinforcement learning problems. A reconfigurable system is a vehicle (air, ground, or water) or collection of vehicles that can change its geometrical features, i.e., shape or formation, to perform tasks that the vehicle could not otherwise accomplish. These systems tend to be optimized for several operating conditions, and controllers are then designed to reconfigure the system from one operating condition to another. Q-learning, an unsupervised episodic learning technique that solves the reinforcement learning problem, is an attractive control methodology for reconfigurable systems. It has been successfully applied to a myriad of control problems, and a number of variations have been developed to avoid or alleviate limitations in earlier versions of the approach. This dissertation describes the development of three modular enhancements to the Q-learning algorithm that solve some of the unique problems that arise when working with this class of systems, such as the complex interaction of reconfigurable parameters and computationally intensive models of the systems. A multi-resolution state-space discretization method is developed that adaptively rediscretizes the state space with progressively finer grids around one or more distinct regions of interest within the state or learning space. A genetic algorithm that autonomously selects the basis functions to be used in the approximation of the action-value function is applied periodically throughout the learning process. Policy comparison is added to monitor the state of the policy encoded in the action-value function to prevent unnecessary episodes at each level of discretization. This approach is validated on several problems including an inverted pendulum, a reconfigurable airfoil, and a reconfigurable wing. Results
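The multi-resolution discretization idea, progressively finer grids around a region of interest, can be sketched in one dimension; the recursive halving rule below is a generic stand-in for the dissertation's rediscretization scheme.

```python
def refine_1d(x0, x1, roi, depth):
    """Recursively halve cells that intersect the region of interest (roi),
    yielding a grid that is fine near the ROI and coarse elsewhere."""
    lo, hi = roi
    if depth == 0 or hi <= x0 or x1 <= lo:      # outside the ROI, or maximum depth
        return [(x0, x1)]
    mid = 0.5 * (x0 + x1)
    return refine_1d(x0, mid, roi, depth - 1) + refine_1d(mid, x1, roi, depth - 1)

cells = refine_1d(0.0, 1.0, roi=(0.4, 0.6), depth=3)
widths = [x1 - x0 for x0, x1 in cells]
```

The result keeps the total number of cells far below a uniformly fine grid while concentrating resolution where the learned policy needs it.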
A method to approximate the inverse of a part of the additive relationship matrix.
Faux, P; Gengler, N
2015-06-01
Single-step genomic predictions need the inverse of the part of the additive relationship matrix between genotyped animals (A22). Gains in computing time are feasible with an algorithm that sets up the sparsity pattern of A22-1 (SP algorithm) using pedigree searches, when A22-1 is close to sparse. The objective of this study is to present a modification of the SP algorithm (RSP algorithm) and to assess its use in approximating A22-1 when the actual A22-1 is dense. The RSP algorithm sets up a restricted sparsity pattern of A22-1 by limiting the pedigree search to a maximum number of searched branches. We have tested its use on four different simulated genotyped populations, from 10 000 to 75 000 genotyped animals. Accuracy of approximation is tested by replacing the actual A22-1 by its approximation in an equivalent mixed model including only genotyped animals. Results show that limiting the pedigree search to four branches is enough to provide accurate approximations of A22-1, which contain approximately 80% of zeros. Computing approximations is not expensive in time but may require a great amount of memory (at maximum, approximately 81 min and approximately 55 GB of RAM for 75 000 genotyped animals using parallel processing on four threads). PMID:25560252
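The restricted pedigree search can be illustrated with a toy routine that limits how far up the pedigree the search goes. This is a simplified stand-in for the published RSP algorithm (which limits the number of searched branches rather than generations): two genotyped animals are allowed a nonzero entry when their restricted ancestor searches overlap.

```python
def ancestors(ped, animal, max_up):
    """Ancestors of an animal found by a pedigree search limited to max_up
    generations; ped maps animal -> (sire, dam), with None for unknown parents."""
    found, frontier = {animal}, {animal}
    for _ in range(max_up):
        nxt = set()
        for a in frontier:
            for parent in ped.get(a, (None, None)):
                if parent is not None and parent not in found:
                    found.add(parent)
                    nxt.add(parent)
        frontier = nxt
    return found

def restricted_pattern(ped, genotyped, max_up):
    """Pairs of genotyped animals allowed to be nonzero in the approximation:
    those whose restricted ancestor searches share at least one animal."""
    anc = {g: ancestors(ped, g, max_up) for g in genotyped}
    return {(i, j) for i in genotyped for j in genotyped if anc[i] & anc[j]}

# toy pedigree: 3 and 4 are full sibs (parents 1 and 2), 6 is unrelated
ped = {3: (1, 2), 4: (1, 2), 5: (3, 4), 6: (None, None)}
pattern = restricted_pattern(ped, genotyped=[3, 4, 6], max_up=1)
```

Shrinking `max_up` sparsifies the pattern at the price of dropping weak relationships, mirroring the accuracy-versus-sparsity trade-off the study quantifies.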
NASA Astrophysics Data System (ADS)
Kravtsov, Yu. A.; Bieg, B.; Bliokh, K. Yu.; Hirsch, M.
2008-03-01
Three different theoretical approaches are presented: the quasi-isotropic approximation (QIA), the Stokes vector formalism, and the complex polarization angle method, which allow one to describe the polarization of electromagnetic waves in weakly anisotropic plasma. QIA stems directly from the Maxwell equations under the assumption of weak anisotropy and has the form of coupled differential equations for the transverse components of the electromagnetic wave field. Applied to high frequency (microwave or IR) electromagnetic waves in magnetized plasma, QIA describes the combined action of the Faraday and Cotton-Mouton phenomena. QIA takes into account curvature and torsion of the ray, describes normal mode conversion in inhomogeneous plasma, and allows the area of applicability of the method to be specified. In contrast to QIA, the Stokes vector formalism (SVF) deals with quantities quadratic in the wave field. It is shown (and this is the main result of the paper) that the equation for Stokes vector evolution can be derived directly from QIA. This evidences the deep unity of two seemingly different approaches. In fact, QIA provides somewhat more information than SVF; in particular, it describes the phases of both transverse components of the electromagnetic field, whereas SVF operates only with the phase difference. Finally, the coupled equations of the quasi-isotropic approximation can be reduced to a single equation for the complex polarization angle (CPA), which describes both the shape and the orientation of the polarization ellipse. In turn, the equation for the CPA yields equations for the traditional parameters of the polarization ellipse, which are in fact equivalent to the equation for Stokes vector evolution. It is pointed out that every method under discussion has its own advantages in plasma polarimetry.
Garvie, Marcus R; Burkardt, John; Morgan, Jeff
2015-03-01
We describe simple finite element schemes for approximating spatially extended predator-prey dynamics with the Holling type II functional response and logistic growth of the prey. The finite element schemes generalize 'Scheme 1' in the paper by Garvie (Bull Math Biol 69(3):931-956, 2007). We present user-friendly, open-source MATLAB code for implementing the finite element methods on arbitrary-shaped two-dimensional domains with Dirichlet, Neumann, Robin, mixed Robin-Neumann, mixed Dirichlet-Neumann, and Periodic boundary conditions. Users can download, edit, and run the codes from http://www.uoguelph.ca/~mgarvie/ . In addition to discussing the well posedness of the model equations, the results of numerical experiments are presented and demonstrate the crucial role that habitat shape, initial data, and the boundary conditions play in determining the spatiotemporal dynamics of predator-prey interactions. As most previous works on this problem have focussed on square domains with standard boundary conditions, our paper makes a significant contribution to the area.
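A finite-difference (rather than finite-element) sketch of the same Holling type II reaction-diffusion system with zero-flux Neumann boundaries is given below; parameter names and values are illustrative, not Garvie's, and the MATLAB codes referenced above remain the authoritative implementation.

```python
import numpy as np

def laplacian_neumann(U, h):
    """Five-point Laplacian with zero-flux boundaries via edge padding."""
    P = np.pad(U, 1, mode='edge')
    return (P[:-2, 1:-1] + P[2:, 1:-1] + P[1:-1, :-2] + P[1:-1, 2:] - 4.0 * U) / h**2

def step(u, v, dt, h, alpha=0.4, beta=2.0, gamma=0.6, delta=1.0):
    """One explicit Euler step of the Holling type II predator-prey system."""
    fu = u * (1.0 - u) - v * u / (u + alpha)          # prey: logistic growth - predation
    fv = beta * v * u / (u + alpha) - gamma * v       # predator: conversion - mortality
    return u + dt * (laplacian_neumann(u, h) + fu), \
           v + dt * (delta * laplacian_neumann(v, h) + fv)

# uniform data at the coexistence equilibrium should stay (numerically) steady
alpha, beta, gamma = 0.4, 2.0, 0.6
u_star = gamma * alpha / (beta - gamma)
v_star = (1.0 - u_star) * (u_star + alpha)
u = np.full((32, 32), u_star)
v = np.full((32, 32), v_star)
for _ in range(20):
    u, v = step(u, v, dt=1e-3, h=0.1)
```

Perturbing the uniform equilibrium with spatially structured noise is what triggers the pattern formation whose dependence on habitat shape and boundary conditions the paper investigates.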
NASA Astrophysics Data System (ADS)
Miller, Eric L.; Willsky, Alan S.
1996-01-01
In this paper, we present an approach to the nonlinear inverse scattering problem using the extended Born approximation (EBA) on the basis of methods from the fields of multiscale and statistical signal processing. By posing the problem directly in the wavelet transform domain, regularization is provided through the use of a multiscale prior statistical model. Using the maximum a posteriori (MAP) framework, we introduce the relative Cramér-Rao bound (RCRB) as a tool for analyzing the level of detail in a reconstruction supported by a data set as a function of the physics, the source-receiver geometry, and the nature of our prior information. The MAP estimate is determined using a novel implementation of the Levenberg-Marquardt algorithm in which the RCRB is used to achieve a substantial reduction in the effective dimensionality of the inversion problem with minimal degradation in performance. Additional reduction in complexity is achieved by taking advantage of the sparse structure of the matrices defining the EBA in scale space. An inverse electrical conductivity problem arising in geophysical prospecting applications provides the vehicle for demonstrating the analysis and algorithmic techniques developed in this paper.
NASA Astrophysics Data System (ADS)
Wang, S.; Zhang, X. N.; Gao, D. D.; Liu, H. X.; Ye, J.; Li, L. R.
2016-08-01
As solar photovoltaic (PV) power is applied extensively, more attention is being paid to the maintenance and fault diagnosis of PV power plants. Based on an analysis of the structure of a PV power station, the global partitioned gradually-approximation method is proposed as a fault diagnosis algorithm to determine and locate faults in PV panels. The PV array is divided into 16x16 blocks and numbered. On the basis of this modular processing of the PV array, the current values of each block are analyzed. The mean current value of each block is used for calculating the fault weight factor. A fault threshold is defined to determine the fault, and shading is considered to reduce the probability of misjudgments. A fault diagnosis system is designed and implemented with LabVIEW, providing real-time data display, online checking, statistics, real-time prediction, and fault diagnosis. The algorithm is verified with data from PV plants: the fault diagnosis results are accurate and the system works well, confirming its validity and feasibility. The developed system will benefit the maintenance and management of large-scale PV arrays.
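The block-mean / fault-weight / threshold logic can be sketched as follows. The weight formula (relative shortfall of a block's mean current against the array-wide mean) is an assumption, since the abstract does not state it, as are the threshold value and the current levels.

```python
import numpy as np

def fault_blocks(block_means, threshold=0.3):
    """Flag PV blocks whose mean current falls far below the array-wide mean.
    The weight formula (relative shortfall) is an assumed stand-in."""
    mu = block_means.mean()
    weight = (mu - block_means) / mu     # large when a block under-produces
    return weight > threshold

currents = np.full((16, 16), 8.0)        # healthy 16x16 block array, mean currents in A
currents[4, 7] = 2.0                     # one faulty (or heavily shaded) block
flags = fault_blocks(currents)
```

A shading check would then inspect whether flagged blocks form a contiguous patch that moves over time, which distinguishes shadows from hard faults.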
NASA Astrophysics Data System (ADS)
Skorupski, Krzysztof
2015-06-01
Black carbon particles interact with organic and inorganic matter soon after emission. The primary goal of this work was to estimate the accuracy of the DDA method in determining the optical properties of such composites. For the light scattering simulations the ADDA code was selected, and the superposition T-Matrix code by Mackowski was used as the reference algorithm. The first part of the study compares alternative models of a single primary particle. When only one material is considered, the largest averaged relative extinction error is associated with black carbon (δCext ≈ 2.8%); for inorganic and organic matter it is lowered to δCext ≈ 0.75%. There is no significant difference between spheres and ellipsoids of the same volume, and therefore both can be used interchangeably. The next step was to investigate aggregates composed of Np = 50 primary particles. When the coating is omitted, the averaged relative extinction error is δCext ≈ 2.6%; otherwise, it can be lower than δCext < 0.2%.
Scalable learning method for feedforward neural networks using minimal-enclosing-ball approximation.
Wang, Jun; Deng, Zhaohong; Luo, Xiaoqing; Jiang, Yizhang; Wang, Shitong
2016-06-01
Training feedforward neural networks (FNNs) is one of the most critical issues in FNN studies. However, most FNN training methods cannot be directly applied to very large datasets because of their high computational and space complexity. To tackle this problem, the CCMEB (Center-Constrained Minimum Enclosing Ball) problem in the hidden feature space of an FNN is discussed and a novel learning algorithm called HFSR-GCVM (hidden-feature-space regression using the generalized core vector machine) is developed accordingly. In HFSR-GCVM, a novel learning criterion using an L2-norm penalty-based ε-insensitive function is formulated, and the parameters in the hidden nodes are generated randomly, independent of the training sets. Moreover, the learning of the parameters in the output layer is proved equivalent to a special CCMEB problem in the FNN hidden feature space. Like most CCMEB-approximation-based machine learning algorithms, the proposed HFSR-GCVM training algorithm has the following merits: the maximal training time is linear in the size of the training dataset, and the maximal space consumption is independent of that size. Experiments on regression tasks confirm these conclusions. PMID:27049545
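The "random hidden parameters plus closed-form output layer" structure described above can be sketched with an ordinary ridge solve standing in for the CCMEB/core-vector-machine step, which the sketch is not; the hidden-layer sizes and regularization constant are arbitrary choices.

```python
import numpy as np

def fit_random_hidden(X, y, n_hidden=60, lam=1e-6, seed=0):
    """Random tanh hidden layer (weights drawn once, never trained) followed by
    a ridge-regularized linear output layer solved in closed form."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # hidden weights: random, fixed
    b = rng.normal(size=n_hidden)                 # hidden biases: random, fixed
    H = np.tanh(X @ W + b)                        # hidden feature space
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = np.linspace(-2.0, 2.0, 80).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = fit_random_hidden(X, y)
mse = np.mean((predict(X, W, b, beta) - y) ** 2)
```

Because only the output layer is solved for, training cost is governed by a single linear solve in the hidden dimension, which is what makes the overall scheme scale to large datasets.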
Heats of Segregation of BCC Metals Using Ab Initio and Quantum Approximate Methods
NASA Technical Reports Server (NTRS)
Good, Brian; Chaka, Anne; Bozzolo, Guillermo
2003-01-01
Many multicomponent alloys exhibit surface segregation, in which the composition at or near a surface may be substantially different from that of the bulk. A number of phenomenological explanations for this tendency have been suggested, involving, among other things, differences among the components' surface energies, molar volumes, and heats of solution. From a theoretical standpoint, the complexity of the problem has precluded a simple, unified explanation, thus preventing the development of computational tools that would enable the identification of the driving mechanisms for segregation. In that context, we investigate the problem of surface segregation in a variety of bcc metal alloys by computing dilute-limit heats of segregation using both the quantum-approximate energy method of Bozzolo, Ferrante and Smith (BFS), and all-electron density functional theory. In addition, the composition dependence of the heats of segregation is investigated using a BFS-based Monte Carlo procedure, and, for selected cases of interest, density functional calculations. Results are discussed in the context of a simple picture that describes segregation behavior as the result of a competition between size mismatch and alloying effects.
Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.
2010-01-01
The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications. PMID:20577570
NASA Astrophysics Data System (ADS)
Meshgi, Ali; Schmitter, Petra; Babovic, Vladan; Chui, Ting Fong May
2014-11-01
Developing reliable methods to estimate stream baseflow has been a subject of interest due to its importance in catchment response and sustainable watershed management. However, to date, in the absence of complex numerical models, baseflow is most commonly estimated using statistically derived empirical approaches that do not directly incorporate physically-meaningful information. On the other hand, Artificial Intelligence (AI) tools such as Genetic Programming (GP) offer unique capabilities to reduce the complexities of hydrological systems without losing relevant physical information. This study presents a simple-to-use empirical equation to estimate baseflow time series using GP so that minimal data is required and physical information is preserved. A groundwater numerical model was first adopted to simulate baseflow for a small semi-urban catchment (0.043 km2) located in Singapore. GP was then used to derive an empirical equation relating baseflow time series to time series of groundwater table fluctuations, which are relatively easily measured and are physically related to baseflow generation. The equation was then generalized for approximating baseflow in other catchments and validated for a larger vegetation-dominated basin located in the US (24 km2). Overall, this study used GP to propose a simple-to-use equation to predict baseflow time series based on only three parameters: minimum daily baseflow of the entire period, area of the catchment and groundwater table fluctuations. It serves as an alternative approach for baseflow estimation in un-gauged systems when only groundwater table and soil information is available, and is thus complementary to other methods that require discharge measurements.
NASA Astrophysics Data System (ADS)
Jang, Seogjoo; Voth, Gregory A.
1999-08-01
Several methods to approximately evolve path integral centroid variables in real time are presented in this paper, the first of which, the centroid molecular dynamics (CMD) method, is recast into the new formalism of the preceding paper and thereby derived. The approximations involved in the CMD method are thus fully characterized by mathematical derivations. Additional new approaches are also presented: centroid Hamiltonian dynamics (CHD), linearized quantum dynamics (LQD), and a perturbative correction of the LQD method (PT-LQD). The CHD method is shown to be a variation of the CMD method which conserves the approximate time dependent centroid Hamiltonian. The LQD method amounts to a linear approximation for the quantum Liouville equation, while the PT-LQD method includes a perturbative correction to the LQD method. All of these approaches are then tested for the equilibrium position correlation functions of three different one-dimensional nondissipative model systems, and it is shown that certain quantum effects are accounted for by all of them, while others, such as the long time coherence characteristic of low-dimensional nondissipative systems, are not. The CMD method is found to be consistently better than the LQD method, while the PT-LQD method improves the latter and is better than the CMD method in most cases. The CHD method gives results complementary to those of the CMD method.
NASA Technical Reports Server (NTRS)
Pratt, D. T.
1984-01-01
Conventional algorithms for the numerical integration of ordinary differential equations (ODEs) are based on the use of polynomial functions as interpolants. However, the exact solutions of stiff ODEs behave like decaying exponential functions, which are poorly approximated by polynomials. An obvious choice of interpolant is the exponential functions themselves, or their low-order diagonal Pade (rational function) approximants. A number of explicit, A-stable integration algorithms were derived from the use of a three-parameter exponential function as interpolant, and their relationship to low-order polynomial-based and rational-function-based implicit and explicit methods was shown by examining their low-order diagonal Pade approximants. A robust implicit formula was derived by exponentially fitting the trapezoidal rule. Application of these algorithms to the integration of the ODEs governing homogeneous, gas-phase chemical kinetics was demonstrated in a developmental code, CREK1D, which compares favorably with the Gear-Hindmarsh code LSODE in spite of the use of a primitive stepsize control strategy.
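The advantage of rational over polynomial interpolants for stiff decay can be seen directly in the (1,1) diagonal Pade approximant of the exponential, which underlies the trapezoidal rule. A minimal sketch (illustrative, not the CREK1D code):

```python
def pade11(z):
    # (1,1) diagonal Pade approximant of exp(z); applying it to z = h*lambda
    # reproduces the trapezoidal rule for y' = lambda * y
    return (1 + z / 2) / (1 - z / 2)

def taylor2(z):
    # second-order polynomial (Taylor) interpolant for comparison
    return 1 + z + z ** 2 / 2

z = -10.0  # a stiff decaying mode: exp(-10) ~ 4.5e-5
print(abs(pade11(z)))   # ~0.667, bounded like the true solution
print(abs(taylor2(z)))  # 41.0, grows without bound
assert abs(pade11(z)) < 1.0 < abs(taylor2(z))
```

For any z with negative real part, |pade11(z)| < 1, which is exactly the A-stability property the abstract attributes to the exponential-fitted schemes; the polynomial interpolant amplifies stiff modes instead of damping them.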
A novel window based method for approximating the Hausdorff in 3D range imagery.
Koch, Mark William
2004-10-01
Matching a set of 3D points to another set of 3D points is an important part of any 3D object recognition system. The Hausdorff distance is known for its robustness in the face of obscuration, clutter, and noise. We show how to approximate the 3D Hausdorff fraction with linear time complexity and quadratic space complexity. We empirically demonstrate that the approximation is very good when compared to actual Hausdorff distances.
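For reference, the Hausdorff fraction being approximated can be written down directly. The brute-force sketch below (illustrative, not the paper's window-based method) counts the share of model points that land within a tolerance of some scene point; the example point sets are made up:

```python
import math

def hausdorff_fraction(model, scene, tol):
    """Fraction of model points whose nearest scene point lies within tol.
    Brute force O(|model| * |scene|); the paper approximates this quantity
    with a window-based scheme in linear time and quadratic space."""
    hits = 0
    for p in model:
        d = min(math.dist(p, q) for q in scene)
        if d <= tol:
            hits += 1
    return hits / len(model)

model = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
scene = [(0, 0.1, 0), (1, -0.1, 0), (9, 9, 9)]
f = hausdorff_fraction(model, scene, tol=0.2)  # 2 of 3 model points match
```

A high fraction indicates a good pose hypothesis even when part of the model is obscured, which is the robustness property noted above.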
Approximate analysis method for statistical properties of seismic response of secondary system
Aoki, Shigeru
1996-12-01
In this paper, the effectiveness of a stationary approximation is examined. The mean square response and the first excursion probability of a secondary system, such as piping or mechanical equipment, installed in a primary system, such as a building, subjected to nonstationary random excitation are obtained. Results obtained by the stationary approximation are compared with those obtained by nonstationary analysis for various values of the damping ratio, natural period, and mass ratio of the secondary system to the primary system.
NASA Astrophysics Data System (ADS)
Lin, Xue-lei; Lu, Xin; Ng, Michael K.; Sun, Hai-Wei
2016-10-01
A fast accurate approximation method with multigrid solver is proposed to solve a two-dimensional fractional sub-diffusion equation. Using the finite difference discretization of the fractional time derivative, a block lower triangular Toeplitz matrix is obtained where each main diagonal block contains a two-dimensional matrix for the Laplacian operator. Our idea is to make use of the block ε-circulant approximation via fast Fourier transforms, so that the resulting task is to solve a block diagonal system, where each diagonal block matrix is the sum of a complex scalar times the identity matrix and a Laplacian matrix. We show that the accuracy of the approximation scheme is of O(ε). Because of the special diagonal block structure, we employ the multigrid method to solve the resulting linear systems. The convergence of the multigrid method is studied. Numerical examples are presented to illustrate the accuracy of the proposed approximation scheme and the efficiency of the proposed solver.
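The key property exploited by the block ε-circulant idea is that circulant systems are diagonalized by the discrete Fourier transform, so a Toeplitz solve can be traded for transforms and a pointwise divide. A hedged scalar sketch of that property (a naive O(n^2) DFT stands in for the FFTs used in practice; the matrix below is illustrative):

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def circulant_solve(c, b):
    """Solve C x = b where C is circulant with first column c: the DFT
    diagonalizes C, so the solve costs two transforms and a pointwise
    division by the eigenvalues dft(c)."""
    eig = dft(c)  # eigenvalues of C
    return idft([bj / ej for bj, ej in zip(dft(b), eig)])

# 4x4 circulant with first column [4, 1, 0, 1]: diagonally dominant, invertible
c = [4.0, 1.0, 0.0, 1.0]
b = [1.0, 2.0, 3.0, 4.0]
x = circulant_solve(c, b)  # entries are complex with negligible imaginary part
```

The ε-circulant construction perturbs a Toeplitz matrix toward this diagonalizable form, which is why the block system above decouples after the transforms, at an O(ε) accuracy cost.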
Optimal approximation method to characterize the resource trade-off functions for media servers
NASA Astrophysics Data System (ADS)
Chang, Ray-I.
1999-08-01
We have proposed an algorithm to smooth the transmission of pre-recorded VBR media streams. Because it takes O(n) time, where n may be large, that algorithm is not suitable for online resource management and admission control in media servers. To resolve this drawback, we have explored the optimal tradeoff among resources with an O(n log n) algorithm. Based on the pre-computed resource tradeoff function, the resource management and admission control procedure is as simple as a table lookup. However, this approach requires O(n) space to store and maintain the resource tradeoff function. In this paper, given some extra resources, a linear-time algorithm is proposed to approximate the resource tradeoff function by piecewise line segments. We prove that the number of line segments in the obtained approximation function is minimized for the given extra resources. The proposed algorithm has been applied to approximate the bandwidth-buffer tradeoff function of the real-world Star Wars movie. When an extra 0.1 Mbps of bandwidth is given, the storage space required for the approximation function is over 2000 times smaller than that required for the original function. When an extra 10 KB buffer is given, the storage space for the approximation function is over 2200 times smaller than that required for the original function. The proposed algorithm is thus well suited for resource management and admission control in real-world media servers.
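The flavor of trading a small resource slack for a much smaller representation can be sketched with a greedy piecewise-linear fit. This is an illustrative stand-in, not the paper's algorithm (which provably minimizes the segment count); the sample curve is made up:

```python
def piecewise_linear_approx(samples, slack):
    """Greedily cover (x, y) samples with line segments whose vertical error
    stays within `slack`. Returns the segment endpoints."""
    segments = []
    i, n = 0, len(samples)
    while i < n - 1:
        j = i + 1
        # extend the segment while every intermediate sample stays within slack
        while j + 1 < n:
            x0, y0 = samples[i]
            x1, y1 = samples[j + 1]
            ok = True
            for k in range(i + 1, j + 1):
                xk, yk = samples[k]
                interp = y0 + (y1 - y0) * (xk - x0) / (x1 - x0)
                if abs(interp - yk) > slack:
                    ok = False
                    break
            if not ok:
                break
            j += 1
        segments.append((samples[i], samples[j]))
        i = j
    return segments

# a piecewise curve: flat, then rising; the slack merges it into two segments
data = [(x, 0.0) for x in range(5)] + [(x, float(x - 4)) for x in range(5, 10)]
segs = piecewise_linear_approx(data, slack=0.01)
```

Admission control then consults the short segment list instead of the full n-point tradeoff function, which is the source of the storage savings reported above.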
A Novel Method of the Generalized Interval-Valued Fuzzy Rough Approximation Operators
Xue, Tianyu; Xue, Zhan'ao; Cheng, Huiru; Liu, Jie; Zhu, Tailong
2014-01-01
Rough set theory is a suitable tool for dealing with the imprecision, uncertainty, incompleteness, and vagueness of knowledge. In this paper, new lower and upper approximation operators for generalized fuzzy rough sets are constructed, and their definitions are expanded to the interval-valued environment. Furthermore, the properties of this type of rough set are analyzed. These operators are shown to be equivalent to the generalized interval fuzzy rough approximation operators introduced by Dubois, which are determined by any interval-valued fuzzy binary relation expressed in a generalized approximation space. The main properties of these operators are discussed under different interval-valued fuzzy binary relations, and illustrative examples are given to demonstrate the main features of the proposed operators. PMID:25162065
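For orientation, the single-valued (non-interval) fuzzy rough operators that this work generalizes can be written in a few lines. This is a hedged sketch of the Dubois-style definitions only; the universe, relation, and membership values below are illustrative:

```python
def lower_approx(R, A, universe):
    # (R down A)(x) = min over y of max(1 - R(x, y), A(y))
    return {x: min(max(1 - R[(x, y)], A[y]) for y in universe) for x in universe}

def upper_approx(R, A, universe):
    # (R up A)(x) = max over y of min(R(x, y), A(y))
    return {x: max(min(R[(x, y)], A[y]) for y in universe) for x in universe}

U = ["a", "b"]
# a reflexive, symmetric fuzzy similarity relation (illustrative values)
R = {("a", "a"): 1.0, ("a", "b"): 0.4, ("b", "a"): 0.4, ("b", "b"): 1.0}
A = {"a": 0.9, "b": 0.2}
lo, up = lower_approx(R, A, U), upper_approx(R, A, U)
# for a reflexive relation, lower <= A <= upper holds pointwise
assert all(lo[x] <= A[x] <= up[x] for x in U)
```

The interval-valued operators of the paper replace each membership degree with an interval and apply the same min/max structure to the interval endpoints.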
Saitoh, T.S.; Hoshi, Akira
1999-07-01
numerical methods (e.g. Saitoh and Kato, 1994). In addition, close-contact melting heat transfer characteristics including melt flow in the liquid film under inner wall temperature distribution were analyzed and simple approximate equations were already presented by Saitoh and Hoshi (1997). In this paper, the authors will propose an analytical solution on combined close-contact and natural convection melting in horizontal cylindrical and spherical capsules, which is useful for the practical capsule bed LHTES system.
Closure to new results for an approximate method for calculating two-dimensional furrow infiltration
Technology Transfer Automated Retrieval System (TEKTRAN)
In a discussion paper, Ebrahimian and Noury (2015) raised several concerns about an approximate solution to the two-dimensional Richards equation presented by Bautista et al (2014). The solution is based on a procedure originally proposed by Warrick et al. (2007). Such a solution is of practical i...
Deniz, Furkan Nur; Alagoz, Baris Baykant; Tan, Nusret; Atherton, Derek P
2016-05-01
This paper introduces an integer-order approximation method for the numerical implementation of fractional-order derivative/integrator operators in control systems. The proposed method is based on fitting the stability boundary locus (SBL) of fractional-order derivative/integrator operators to the SBL of integer-order transfer functions. The SBL defines a boundary in the parametric design plane of the controller that separates the stable and unstable regions of a feedback control system, and SBL analysis is mainly employed to indicate graphically the choice of controller parameters that result in stable operation of the feedback system. This study reveals that the SBL curves of fractional-order operators can be matched with integer-order models in a limited frequency range. The SBL fitting method provides straightforward solutions for obtaining an integer-order model approximation of fractional-order operators and systems according to matching points from the SBL of fractional-order systems in desired frequency ranges. Thus, the proposed method can effectively deal with stability preservation problems of approximate models. Illustrative examples are given to show the performance of the proposed method, and the results are compared with the well-known approximation methods developed for fractional-order systems. The integer-order approximate modeling of fractional-order PID controllers is also illustrated for control applications. PMID:26876378
The Investigation of Optimal Discrete Approximations for Real Time Flight Simulations
NASA Technical Reports Server (NTRS)
Parrish, E. A.; Mcvey, E. S.; Cook, G.; Henderson, K. C.
1976-01-01
The results are presented of an investigation of discrete approximations for real time flight simulation. Major topics discussed include: (1) consideration of the particular problem of approximation of continuous autopilots by digital autopilots; (2) use of Bode plots and synthesis of transfer functions by asymptotic fits in a warped frequency domain; (3) an investigation of the various substitution formulas, including the effects of nonlinearities; (4) use of Pade approximation to the solution of the matrix exponential arising from the discrete state equations; and (5) an analytical integration of the state equation using interpolated input.
NASA Astrophysics Data System (ADS)
Erhard, Jannis; Bleiziffer, Patrick; Görling, Andreas
2016-09-01
A power series approximation for the correlation kernel of time-dependent density-functional theory is presented. Using this approximation in the adiabatic-connection fluctuation-dissipation (ACFD) theorem leads to a new family of Kohn-Sham methods. The new methods yield reaction energies and barriers of unprecedented accuracy and enable a treatment of static (strong) correlation with an accuracy of high-level multireference configuration interaction methods but are single-reference methods allowing for a black-box-like handling of static correlation. The new methods exhibit a better scaling of the computational effort with the system size than rivaling wave-function-based electronic structure methods. Moreover, the new methods do not suffer from the problem of singularities in response functions plaguing previous ACFD methods and therefore are applicable to any type of electronic system.
Belonogaya, Ekaterina S; Tyukhtin, Andrey V; Galyamin, Sergey N
2013-04-01
An approximate method for calculating the radiation from a moving charge in the presence of a dielectric object is developed. The method is composed of two steps. The first step is calculation of the field in the medium without considering the external boundaries of the object, and the second step is an approximate (ray-optical) calculation of the wave propagation outside the object. As a test problem, we consider the case of a charge crossing a dielectric plate. Computations of the field are performed using exact and approximate methods. It is shown that the results agree well. Additionally, we apply the method under consideration to the case of a cone-shaped object with a vacuum channel. The radiation energy spectral density as a function of the location of the observation point and the problem's parameters is given. In particular, the convergent radiation effect is described. PMID:23679539
A Diffusion Approximation and Numerical Methods for Adaptive Neuron Models with Stochastic Inputs
Rosenbaum, Robert
2016-01-01
Characterizing the spiking statistics of neurons receiving noisy synaptic input is a central problem in computational neuroscience. Monte Carlo approaches to this problem are computationally expensive and often fail to provide mechanistic insight. Thus, the field has seen the development of mathematical and numerical approaches, often relying on a Fokker-Planck formalism. These approaches force a compromise between biological realism, accuracy and computational efficiency. In this article we develop an extension of existing diffusion approximations to more accurately approximate the response of neurons with adaptation currents and noisy synaptic currents. The implementation refines existing numerical schemes for solving the associated Fokker-Planck equations to improve computational efficiency and accuracy. Computer code implementing the developed algorithms is made available to the public. PMID:27148036
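The modeling setting can be sketched by direct Euler-Maruyama sampling of a leaky integrate-and-fire neuron with an adaptation current and diffusive synaptic noise; this is the Monte Carlo baseline that the Fokker-Planck methods above are designed to replace, and every parameter value here is illustrative:

```python
import math, random

def lif_with_adaptation(T=2000.0, dt=0.1, mu=1.5, sigma=0.5,
                        tau_m=20.0, tau_a=100.0, beta=0.5, v_th=1.0, seed=0):
    """Euler-Maruyama simulation of dv = (-v - a + mu) dt / tau_m + sigma dW,
    with a spike-triggered adaptation increment a -> a + beta."""
    rng = random.Random(seed)
    v, a, spikes = 0.0, 0.0, 0
    for _ in range(int(T / dt)):
        dW = rng.gauss(0.0, math.sqrt(dt))            # diffusive synaptic input
        v += (-v - a + mu) * dt / tau_m + sigma * dW  # membrane potential
        a += -a * dt / tau_a                          # adaptation current decays
        if v >= v_th:       # threshold crossing: spike, reset, adapt
            spikes += 1
            v = 0.0
            a += beta
    return spikes / (T / 1000.0)  # firing rate in Hz (T in ms)

rate = lif_with_adaptation()
```

Estimating the firing-rate distribution this way requires many such trials, which is the computational expense the density-equation (Fokker-Planck) approach avoids.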
NASA Technical Reports Server (NTRS)
Banks, H. T.; Smith, Ralph C.; Wang, Yun
1994-01-01
Based on a distributed parameter model for vibrations, an approximate finite dimensional dynamic compensator is designed to suppress vibrations (multiple modes with a broad band of frequencies) of a circular plate with Kelvin-Voigt damping and clamped boundary conditions. The control is realized via piezoceramic patches bonded to the plate and is calculated from information available from several pointwise observed state variables. Examples from computational studies as well as use in laboratory experiments are presented to demonstrate the effectiveness of this design.
NASA Astrophysics Data System (ADS)
Teodoro, M. F.
2012-09-01
We are particularly interested in the numerical solution of functional differential equations with symmetric delay and advance. In this work, we consider a nonlinear forward-backward equation, the FitzHugh-Nagumo equation. A scheme is presented which extends the algorithm introduced in [1]. A computational method using Newton's method, the finite element method, and the method of steps is developed.
Liang, Xiao; Khaliq, Abdul Q. M.; Xing, Yulong
2015-01-23
In this paper, we study a local discontinuous Galerkin method combined with fourth order exponential time differencing Runge-Kutta time discretization and a fourth order conservative method for solving the nonlinear Schrödinger equations. Based on different choices of numerical fluxes, we propose both energy-conserving and energy-dissipative local discontinuous Galerkin methods, and have proven the error estimates for the semi-discrete methods applied to linear Schrödinger equation. The numerical methods are proven to be highly efficient and stable for long-range soliton computations. Finally, extensive numerical examples are provided to illustrate the accuracy, efficiency and reliability of the proposed methods.
NASA Astrophysics Data System (ADS)
Bisetti, Fabrizio
2012-06-01
Recent trends in hydrocarbon fuel research indicate that the number of species and reactions in chemical kinetic mechanisms is rapidly increasing in an effort to provide predictive capabilities for fuels of practical interest. In order to cope with the computational cost associated with the time integration of stiff, large chemical systems, a novel approach is proposed. The approach combines an exponential integrator and Krylov subspace approximations to the exponential function of the Jacobian matrix. The components of the approach are described in detail and applied to the ignition of stoichiometric methane-air and iso-octane-air mixtures, here described by two widely adopted chemical kinetic mechanisms. The approach is found to be robust even at relatively large time steps and the global error displays a nominal third-order convergence. The performance of the approach is improved by utilising an adaptive algorithm for the selection of the Krylov subspace size, which guarantees an approximation to the matrix exponential within user-defined error tolerance. The Krylov projection of the Jacobian matrix onto a low-dimensional space is interpreted as a local model reduction with a well-defined error control strategy. Finally, the performance of the approach is discussed with regard to the optimal selection of the parameters governing the accuracy of its individual components.
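The Krylov approximation of the action of the matrix exponential can be sketched with a plain Arnoldi iteration. This is a minimal illustration under stated assumptions (dense list arithmetic, a Taylor series for the small Hessenberg exponential, no adaptive subspace sizing), not the production scheme described above; the test matrix is made up:

```python
import math

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def expm_small(H, terms=30):
    """Matrix exponential of a small matrix via its Taylor series (adequate
    for the small, modest-norm Hessenberg matrices produced by Arnoldi)."""
    m = len(H)
    E = [[1.0 if i == j else 0.0 for j in range(m)] for i in range(m)]
    P = [row[:] for row in E]
    for k in range(1, terms):
        P = [[sum(P[i][l] * H[l][j] for l in range(m)) / k for j in range(m)]
             for i in range(m)]  # P becomes H^k / k!
        E = [[E[i][j] + P[i][j] for j in range(m)] for i in range(m)]
    return E

def krylov_expv(A, v, m=4):
    """Approximate exp(A) @ v in an m-dimensional Krylov subspace:
    exp(A) v ~ beta * V_m @ exp(H_m) @ e1 (Arnoldi iteration)."""
    n = len(v)
    beta = math.sqrt(sum(x * x for x in v))
    V = [[x / beta for x in v]]
    H = [[0.0] * m for _ in range(m)]
    for j in range(m):
        w = matvec(A, V[j])
        for i in range(j + 1):  # modified Gram-Schmidt orthogonalization
            H[i][j] = sum(a * b for a, b in zip(w, V[i]))
            w = [a - H[i][j] * b for a, b in zip(w, V[i])]
        h = math.sqrt(sum(x * x for x in w))
        if j + 1 < m:
            H[j + 1][j] = h
            V.append([x / h for x in w])
    eH = expm_small(H)
    col0 = [row[0] for row in eH]  # exp(H_m) @ e1
    return [beta * sum(V[i][k] * col0[i] for i in range(m)) for k in range(n)]

A = [[-1.0, 1.0, 0.0, 0.0],
     [0.0, -2.0, 1.0, 0.0],
     [0.0, 0.0, -3.0, 1.0],
     [0.0, 0.0, 0.0, -4.0]]
v = [1.0, 1.0, 1.0, 1.0]
approx = krylov_expv(A, v, m=4)  # with m = n the projection is exact
```

In the chemistry application, m is far smaller than the number of species, so only the m-by-m exponential is ever formed; the projection onto the Krylov subspace is the "local model reduction" interpretation mentioned above.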
A simple approximate method for obtaining spanwise lift distributions over swept wings
NASA Technical Reports Server (NTRS)
Diederich, Franklin W
1948-01-01
It is shown how Schrenk's empirical method of estimating the lift distribution over straight wings can be adapted to swept wings by replacing the elliptical distribution with a new "ideal" distribution which varies with sweep. The application of the method is discussed in detail and several comparisons are made to show the agreement of the proposed method with more rigorous ones. It is shown how first-order compressibility corrections applicable to subcritical speeds may be included in this method.
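Schrenk's baseline for a straight wing averages the planform chord distribution with an elliptical distribution of equal area; the report's contribution is the swept-wing replacement for the ellipse. A hedged sketch of that straight-wing baseline (the tapered-wing numbers are illustrative):

```python
import math

def schrenk_lift_distribution(chords, half_span):
    """Schrenk's approximation for an unswept wing: average the actual chord
    distribution with an elliptical one of equal area. `chords` samples the
    chord at equally spaced stations from root (y=0) to tip (y=half_span)."""
    n = len(chords)
    dy = half_span / (n - 1)
    # trapezoidal rule for the half-wing planform area
    area = sum((chords[i] + chords[i + 1]) / 2 * dy for i in range(n - 1))
    out = []
    for i, c in enumerate(chords):
        y = i * dy
        # quarter-ellipse with the same area: c_e(y) = 4*area/(pi*s) * sqrt(1-(y/s)^2)
        c_ell = (4 * area / (math.pi * half_span)) * math.sqrt(1 - (y / half_span) ** 2)
        out.append(0.5 * (c + c_ell))
    return out

# straight-tapered wing: root chord 2, tip chord 1, half-span 5
loading = schrenk_lift_distribution([2.0, 1.75, 1.5, 1.25, 1.0], half_span=5.0)
```

The averaging pulls the loading toward the elliptical ideal while retaining the influence of the actual planform, which is exactly the role the sweep-dependent "ideal" distribution takes over in the adapted method.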
NASA Astrophysics Data System (ADS)
Gökdoğan, Ahmet; Merdan, Mehmet; Yildirim, Ahmet
2012-01-01
The goal of this study is to present a reliable algorithm based on the standard differential transformation method (DTM), called the multi-stage differential transformation method (MsDTM), for solving a Hantavirus infection model. The results obtained using MsDTM are compared to those obtained by the Runge-Kutta method (RK method). The proposed technique is a promising tool for solving this kind of system over long time intervals.
NASA Technical Reports Server (NTRS)
Bailey, Harry E.; Beam, Richard M.
1991-01-01
Finite-difference approximations for steady-state compressible Navier-Stokes equations, whose two spatial dimensions are written in generalized curvilinear coordinates and strong conservation-law form, are presently solved by means of Newton's method in order to obtain a lifting-airfoil flow field under subsonic and transonic conditions. In addition to ascertaining the computational requirements of an initial guess ensuring convergence and the degree of computational efficiency obtainable via the approximate Newton method's freezing of the Jacobian matrices, attention is given to the need for auxiliary methods assessing the temporal stability of steady-state solutions. It is demonstrated that nonunique solutions of the finite-difference equations are obtainable by Newton's method in conjunction with a continuation method.
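The Jacobian-freezing strategy evaluated in this work can be illustrated on a scalar root-finding problem; this sketch shows only the trade involved (cheaper iterations, linear instead of quadratic convergence), not the Navier-Stokes solver itself:

```python
def approx_newton_scalar(f, df, x0, freeze=True, tol=1e-12, max_iter=100):
    """Newton iteration for f(x) = 0. With freeze=True the derivative is
    evaluated only at x0 and reused: the 'frozen Jacobian' variant."""
    x = x0
    d = df(x0)
    for k in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, k
        if not freeze:
            d = df(x)  # full Newton: refresh the derivative every step
        x -= fx / d
    return x, max_iter

f = lambda x: x * x - 2.0   # root at sqrt(2)
df = lambda x: 2.0 * x
root_frozen, it_frozen = approx_newton_scalar(f, df, 1.0, freeze=True)
root_full, it_full = approx_newton_scalar(f, df, 1.0, freeze=False)
# both converge to sqrt(2); the frozen variant needs more (but cheaper) steps
```

For the compressible flow solver, each "derivative refresh" is an expensive Jacobian assembly and factorization, so accepting extra cheap iterations can reduce total cost, which is the efficiency question the abstract raises.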
NASA Astrophysics Data System (ADS)
Heiskala, Juha; Kolehmainen, Ville; Tarvainen, Tanja; Kaipio, Jari. P.; Arridge, Simon R.
2012-09-01
Diffuse optical tomography can image the hemodynamic response to an activation in the human brain by measuring changes in optical absorption of near-infrared light. Since optodes placed on the scalp are used, the measurements are very sensitive to changes in optical attenuation in the scalp, making optical brain activation imaging susceptible to artifacts due to effects of systemic circulation and local circulation of the scalp. We propose to use the Bayesian approximation error approach to reduce these artifacts. The feasibility of the approach is evaluated using simulated brain activations. When a localized cortical activation occurs simultaneously with changes in the scalp blood flow, these changes can mask the cortical activity causing spurious artifacts. We show that the proposed approach is able to recover from these artifacts even when the nominal tissue properties are not well known.
Kim, S.
1994-12-31
Parallel iterative procedures based on domain decomposition techniques are defined and analyzed for the numerical solution of wave propagation by finite element and finite difference methods. For finite element methods, in a Lagrangian framework, an efficient way for choosing the algorithm parameter as well as the algorithm convergence are indicated. Some heuristic arguments for finding the algorithm parameter for finite difference schemes are addressed. Numerical results are presented to indicate the effectiveness of the methods.
Approximate method for calculating transonic flow about lifting wing-body configurations
NASA Technical Reports Server (NTRS)
Barnwell, R. W.
1976-01-01
The three-dimensional problem of transonic flow about lifting wing-body configurations is reduced to a two-variable computational problem with the method of matched asymptotic expansions. The computational problem is solved with the method of relaxation. The method accounts for leading-edge separation, the presence of shock waves, and the presence of solid, slotted, or porous tunnel walls. The Mach number range of the method extends from zero to the supersonic value at which the wing leading edge becomes sonic. A modified form of the transonic area rule which accounts for the effect of lift is developed. This effect is explained from simple physical considerations.
Freeze, G.A.; Larson, K.W.; Davies, P.B.
1995-10-01
Eight alternative methods for approximating salt creep and disposal room closure in a multiphase flow model of the Waste Isolation Pilot Plant (WIPP) were implemented and evaluated: three fixed-room geometries, three porosity functions, and two fluid-phase-salt methods. The pressure-time-porosity line interpolation method is the method used in current WIPP Performance Assessment calculations. The room closure approximation methods were calibrated against a series of room closure simulations performed using a creep closure code, SANCHO. The fixed-room geometries did not incorporate a direct coupling between room void volume and room pressure. The two porosity function methods utilized moles of gas as an independent parameter for closure coupling. The capillary backstress method was unable to accurately simulate conditions of re-closure of the room. Two methods were found to be accurate enough to approximate the effects of room closure: the boundary backstress method and pressure-time-porosity line interpolation. The boundary backstress method is a more reliable indicator of system behavior because it has a theoretical basis for modeling salt deformation as a viscous process; however, it is a complex method and a detailed calibration process is required. The pressure lines method is thought to be less reliable because the results were skewed toward the SANCHO results in simulations where the sequence of gas generation was significantly different from the SANCHO gas-generation rate histories used for closure calibration. This limitation of the pressure lines method is most pronounced at higher gas-generation rates and is relatively insignificant at lower gas-generation rates. Due to its relative simplicity, the pressure lines method is easier to implement in multiphase flow codes, and simulations have a shorter execution time.
NASA Technical Reports Server (NTRS)
Meador, W. E.; Weaver, W. R.
1980-01-01
Existing two-stream approximations to radiative transfer theory for particulate media are shown to be represented by identical forms of coupled differential equations if the intensity is replaced by integrals of the intensity over hemispheres. One set of solutions thus suffices for all methods and provides convenient analytical comparisons. The equations also suggest modifications of the standard techniques so as to duplicate exact solutions for thin atmospheres and thus permit accurate determinations of the effects of typical aerosol layers. Numerical results for the plane albedos of plane-parallel atmospheres are given for conventional and modified Eddington approximations, conventional and modified two-point quadrature schemes, the hemispheric-constant method and the delta-function method, all for comparison with accurate discrete-ordinate solutions. A new two-stream approximation is introduced that reduces to the modified Eddington approximation in the limit of isotropic phase functions and to the exact solution in the limit of extreme anisotropic scattering. Comparisons of plane albedos and transmittances show the new method to be generally superior over a wide range of atmospheric conditions (including cloud and aerosol layers), especially in the case of nonconservative scattering.
A 3D finite element ALE method using an approximate Riemann solution
Chiravalle, V. P.; Morgan, N. R.
2016-08-09
Arbitrary Lagrangian–Eulerian finite volume methods that solve a multidimensional Riemann-like problem at the cell center in a staggered grid hydrodynamic (SGH) arrangement have been proposed. This research proposes a new 3D finite element arbitrary Lagrangian–Eulerian SGH method that incorporates a multidimensional Riemann-like problem. Here, two different Riemann jump relations are investigated. A new limiting method that greatly improves the accuracy of the SGH method on isentropic flows is investigated. A remap method that improves upon a well-known mesh relaxation and remapping technique in order to ensure total energy conservation during the remap is also presented. Numerical details and test problem results are presented.
Integral approximants for functions of higher monodromic dimension
Baker, G.A. Jr.
1987-01-01
In addition to the description of multiform, locally analytic functions as covering a many-sheeted version of the complex plane, Riemann also introduced the notion of considering them as describing a space whose ''monodromic'' dimension is the number of linearly independent coverings by the monogenic analytic function at each point of the complex plane. I suggest that this latter concept is natural for integral approximants (a sub-class of Hermite-Pade approximants) and discuss results for both ''horizontal'' and ''diagonal'' sequences of approximants. Some theorems are now available in both cases and make clear that the natural domain of convergence of the horizontal sequences is a disk centered on the origin, and that of the diagonal sequences is a suitably cut complex plane together with its identically cut pendant Riemann sheets. Some numerical examples have also been computed.
High order filtering methods for approximating hyperbolic systems of conservation laws
NASA Technical Reports Server (NTRS)
Lafon, F.; Osher, S.
1990-01-01
In the computation of discontinuous solutions of hyperbolic systems of conservation laws, the recently developed essentially non-oscillatory (ENO) schemes appear to be very useful. However, they are computationally costly compared to simple central difference methods. A filtering method is developed that uses simple central differencing of arbitrarily high order accuracy, except where a novel local test indicates the development of spurious oscillations. At these points, the full ENO apparatus is used, maintaining the high order of accuracy but removing spurious oscillations. Numerical results indicate the success of the method: high order of accuracy was obtained in regions of smooth flow without spurious oscillations for a wide range of problems, with a significant speedup, generally a factor of almost three, over the full ENO method.
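The filter's control flow — cheap high-order central differencing by default, a local test, and a robust fallback at flagged points — can be sketched as follows. This is a generic illustration under stated assumptions: the smoothness indicator (a ratio test on one-sided differences) and the first-order upwind fallback are stand-ins for the paper's novel local test and full ENO apparatus.

```python
import numpy as np

def filtered_derivative(u, h, ratio=10.0):
    """Approximate du/dx on a periodic grid: 4th-order central
    differences by default, falling back to a 1st-order one-sided
    difference where a local test suspects a discontinuity."""
    n = len(u)
    du = np.empty(n)
    flagged = []
    for j in range(n):
        jm2, jm1 = (j - 2) % n, (j - 1) % n
        jp1, jp2 = (j + 1) % n, (j + 2) % n
        left = abs(u[j] - u[jm1])
        right = abs(u[jp1] - u[j])
        # Local test: strongly unbalanced one-sided jumps suggest a
        # discontinuity rather than smooth variation.
        if left > ratio * (right + 1e-14) or right > ratio * (left + 1e-14):
            du[j] = (u[j] - u[jm1]) / h   # robust low-order fallback
            flagged.append(j)
        else:
            du[j] = (-u[jp2] + 8 * u[jp1] - 8 * u[jm1] + u[jm2]) / (12 * h)
    return du, flagged

x = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
h = x[1] - x[0]
# Smooth data: the test stays quiet, full 4th-order accuracy everywhere.
du_smooth, flags_smooth = filtered_derivative(np.sin(x), h)
# Step data: the test fires only in the cells straddling the jump.
du_step, flags_step = filtered_derivative(np.where(x < np.pi, 0.0, 1.0), h)
```

The point of the hybrid is visible in the two calls: the expensive branch is exercised only at the handful of flagged cells, which is where the claimed factor-of-three speedup over full ENO comes from.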
An approximate-reasoning-based method for screening flammable gas tanks
Eisenhawer, S.W.; Bott, T.F.; Smith, R.E.
1998-03-01
High-level waste (HLW) produces flammable gases as a result of radiolysis and thermal decomposition of organics. Under certain conditions, these gases can accumulate within the waste for extended periods and then be released quickly into the dome space of the storage tank. As part of the effort to reduce the safety concerns associated with flammable gas in HLW tanks at Hanford, a flammable gas watch list (FGWL) has been established. Inclusion on the FGWL is based on criteria intended to measure the risk associated with the presence of flammable gas. It is important that all high-risk tanks be identified with high confidence so that they may be controlled. Conversely, to minimize operational complexity, the number of tanks on the watch list should be reduced as near to the true number of flammable-gas-risk tanks as the current state of knowledge will support. This report presents an alternative to existing approaches for FGWL screening based on the theory of approximate reasoning (AR) (Zadeh 1976). The AR-based model emulates the inference process used by an expert when asked to make an evaluation. The FGWL model described here was exercised by performing two evaluations. (1) A complete tank evaluation, where the entire algorithm is used, was performed for two tanks, U-106 and AW-104. U-106 is a single-shell tank with large sludge and saltcake layers; AW-104 is a double-shell tank with over one million gallons of supernate. Both of these tanks had failed the screening performed by Hodgson et al. (2) Partial evaluations, using a submodule for the predictor likelihood, were performed for all of the tanks on the FGWL that had been flagged previously by Whitney (1995).
Flux vector splitting and approximate Newton methods. [for solution of steady Euler equations
NASA Technical Reports Server (NTRS)
Jespersen, D. C.; Pulliam, T. H.
1983-01-01
In the present investigation, the basic approach is employed to view an iterative scheme as Newton's method or as a modified Newton's method. Attention is given to various modified Newton methods which can arise from differencing schemes for the Euler equations. Flux vector splitting is considered as the basic spatial differencing technique. This technique is based on the partition of a flux vector into groups which have certain properties. The Euler equations fluxes can be split into two groups, the first group having a flux Jacobian with all positive eigenvalues, and the second group having a flux Jacobian with all negative eigenvalues. Flux vector splitting based on a velocity-sound speed split is considered along with the use of numerical techniques to analyze nonlinear systems, and the steady Euler equations for quasi-one-dimensional flow in a nozzle. Results are given for steady flows with shocks.
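The sign-based partition of the flux Jacobian can be illustrated on a linear hyperbolic system, where the split matrices come directly from the eigendecomposition. This is a simplified stand-in, under stated assumptions, for the velocity/sound-speed splitting of the Euler fluxes described in the abstract; the upwind scheme built from the split matrices is the usual first-order one.

```python
import numpy as np

# Linear hyperbolic system u_t + A u_x = 0 with wave speeds +1 and -1.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Split A = A_plus + A_minus by the sign of its eigenvalues:
# A_plus carries the rightward characteristics, A_minus the leftward.
lam, R = np.linalg.eigh(A)
A_plus = R @ np.diag(np.maximum(lam, 0.0)) @ R.T
A_minus = R @ np.diag(np.minimum(lam, 0.0)) @ R.T

# First-order upwind scheme built from the split Jacobians:
# backward differences feed A_plus, forward differences feed A_minus.
n, cfl, steps = 100, 0.5, 50
h = 1.0 / n
dt = cfl * h
x = np.arange(n) * h
u = np.zeros((n, 2))
u[:, 0] = np.exp(-100.0 * (x - 0.5) ** 2)   # Gaussian pulse
u0 = u.copy()
for _ in range(steps):
    back = u - np.roll(u, 1, axis=0)       # u_j - u_{j-1}
    fwd = np.roll(u, -1, axis=0) - u       # u_{j+1} - u_j
    u = u - (dt / h) * (back @ A_plus.T + fwd @ A_minus.T)
```

For the nonlinear Euler equations the same idea applies to the flux vector itself: each split piece has a Jacobian whose eigenvalues are all of one sign, so each can be differenced stably in its own upwind direction.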
High order filtering methods for approximating hyperbolic systems of conservation laws
NASA Technical Reports Server (NTRS)
Lafon, F.; Osher, S.
1991-01-01
The essentially nonoscillatory (ENO) schemes, while potentially useful in the computation of discontinuous solutions of hyperbolic conservation-law systems, are computationally costly relative to simple central-difference methods. A filtering technique is presented which employs central differencing of arbitrarily high-order accuracy except where a local test detects the presence of spurious oscillations and calls upon the full ENO apparatus to remove them. A factor-of-three speedup is thus obtained over the full-ENO method for a wide range of problems, with high-order accuracy in regions of smooth flow.
NASA Astrophysics Data System (ADS)
Duan, Beiping; Zheng, Zhoushun; Cao, Wen
2016-08-01
In this paper, we revisit two spectral approximations, truncated approximation and interpolation, for the Caputo fractional derivative. The two approaches have been studied for approximating the Riemann-Liouville (R-L) fractional derivative by Chen et al. and Zayernouri et al., respectively, in their most recent work. For the truncated approximation, the reconsideration partly arises from the difference between the fractional derivative in the R-L sense and in the Caputo sense: the Caputo fractional derivative requires higher regularity of the unknown than the R-L version. Another reason for the reconsideration is that we distinguish the differential order of the unknown from the index of the Jacobi polynomials, which was not done in the previous work. We also provide a way to choose the index when facing multi-order problems. By using the generalized Hardy inequality, the gap between the weighted Sobolev space involving the Caputo fractional derivative and the classical weighted space is bridged; the optimal projection error is then derived in the non-uniformly Jacobi-weighted Sobolev space, and the maximum absolute error is presented as well. For the interpolation, an analysis of the interpolation error was not given in the earlier work; here we establish the interpolation error in the non-uniformly Jacobi-weighted Sobolev space by constructing a fractional inverse inequality. Combined with the collocation method, the approximation technique is applied to solve fractional initial-value problems (FIVPs). Numerical examples are provided to illustrate the effectiveness of the algorithm.
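As a concrete reference point for the Caputo derivative being approximated, the following sketch implements the standard first-order-in-time L1 finite-difference formula — not the spectral/collocation method of the paper — and checks it against the known closed form for u(t) = t², for which the Caputo derivative of order α is 2 t^(2-α)/Γ(3-α).

```python
import math
import numpy as np

def caputo_l1(u, tau, alpha):
    """L1 approximation of the Caputo derivative of order alpha in (0,1)
    at the final time, from grid values u = [u(0), u(tau), ..., u(n*tau)]."""
    n = len(u) - 1
    j = np.arange(n)
    w = (j + 1) ** (1.0 - alpha) - j ** (1.0 - alpha)   # L1 weights
    diffs = u[::-1][:-1] - u[::-1][1:]                  # u_{n-j} - u_{n-j-1}
    return tau ** (-alpha) / math.gamma(2.0 - alpha) * np.dot(w, diffs)

alpha, n = 0.5, 2000
t = np.linspace(0.0, 1.0, n + 1)
approx = caputo_l1(t ** 2, t[1], alpha)
exact = 2.0 / math.gamma(3.0 - alpha)   # Caputo D^0.5 of t^2 at t = 1
```

The L1 scheme converges at rate O(τ^(2-α)); the spectral approach of the paper instead expands the unknown in Jacobi polynomials, trading the grid sum above for a projection or interpolation error in the weighted Sobolev space.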
Window-based method for approximating the Hausdorff in three-dimensional range imagery
Koch, Mark W.
2009-06-02
One approach to pattern recognition is to use a template from a database of objects and match it to a probe image containing the unknown. Accordingly, the Hausdorff distance can be used to measure the similarity of two sets of points. In particular, the Hausdorff can measure the goodness of a match in the presence of occlusion, clutter, and noise. However, existing 3D algorithms for calculating the Hausdorff are computationally intensive, making them impractical for pattern recognition that requires scanning of large databases. The present invention is directed to a new method that can efficiently, in time and memory, compute the Hausdorff for 3D range imagery. The method uses a window-based approach.
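The patented window-based computation is not reproduced here, but the underlying Hausdorff distance itself is simple to state: the larger of the two directed max-min distances between the point sets. A brute-force sketch, which is exactly the expensive baseline the invention improves upon:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between point sets A (n,d) and B (m,d),
    computed by brute force over all pairwise distances."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    h_ab = d.min(axis=1).max()   # farthest point of A from its nearest in B
    h_ba = d.min(axis=0).max()   # farthest point of B from its nearest in A
    return max(h_ab, h_ba)

template = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
probe = template + np.array([0.1, 0.0, 0.0])   # template shifted by 0.1
```

The brute-force cost is O(nm) in time and memory per comparison, which is what makes scanning a large template database impractical and motivates windowed approximations for 3D range imagery.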
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki; Bey, Kim S.; Hou, Gene J. W.
2004-01-01
A recent paper is generalized to a case where the spatial region is taken in R(sup 3). The region is assumed to be a thin body, such as a panel on the wing or fuselage of an aerospace vehicle. The traditional h- as well as hp-finite element methods are applied to the surface defined in the x - y variables, while, through the thickness, the technique of the p-element is employed. A time and spatial discretization scheme, based upon an assumption of a certain weak singularity of ||u_t||_2, is used to derive an optimal a priori error estimate for the current method.
Köhler, Christof; Frauenheim, Thomas; Hourahine, Ben; Seifert, Gotthard; Sternberg, Michael
2007-07-01
We report benchmark calculations of the density functional based tight-binding method concerning the magnetic properties of small iron clusters (Fe2 to Fe5) and the Fe13 icosahedron. Energetics and stability with respect to changes of cluster geometry of collinear and noncollinear spin configurations are in good agreement with ab initio results. The inclusion of spin-orbit coupling has been tested for the iron dimer. PMID:17428041
Accurate finite difference methods for time-harmonic wave propagation
NASA Technical Reports Server (NTRS)
Harari, Isaac; Turkel, Eli
1994-01-01
Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Pade approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
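The benefit of basing the stencil on a Pade-type approximation can be seen in one dimension on the Helmholtz equation u'' + k²u = 0: applying the standard and the compact fourth-order stencils to the exact mode e^{ikx} and measuring the residual exposes the reduction in dispersion error. The specific stencils below are the textbook second-order and fourth-order compact schemes, offered as an assumed illustration of the idea rather than the paper's exact discretization.

```python
import numpy as np

def residuals(s):
    """Normalized residuals of two stencils applied to the mode
    u_j = exp(i*k*x_j), as a function of s = k*h."""
    # Standard 3-point scheme: (u_{j-1} - 2 u_j + u_{j+1})/h^2 + k^2 u_j,
    # leaving an O(s^2) residual.
    std = (2.0 * np.cos(s) - 2.0) / s ** 2 + 1.0
    # Compact (Pade-based) scheme:
    # (u_{j-1} - 2 u_j + u_{j+1})/h^2 + k^2 (u_{j-1} + 10 u_j + u_{j+1})/12,
    # leaving an O(s^4) residual.
    compact = (2.0 * np.cos(s) - 2.0) / s ** 2 \
        + (2.0 * np.cos(s) + 10.0) / 12.0
    return abs(std), abs(compact)

# s = 0.2 corresponds to roughly thirty grid points per wavelength.
std_err, compact_err = residuals(0.2)
```

Both stencils use only three points, so the compact scheme gains two orders of accuracy in the dispersion relation at essentially no extra cost, which is the mechanism behind the fourth-order truncation error quoted in the abstract.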
2007-01-01
Several modifications that have been made to the NDDO core-core interaction term and to the method of parameter optimization are described. These changes have resulted in a more complete parameter optimization, called PM6, which has, in turn, allowed 70 elements to be parameterized. The average unsigned error (AUE) between calculated and reference heats of formation for 4,492 species was 8.0 kcal mol−1. For the subset of 1,373 compounds involving only the elements H, C, N, O, F, P, S, Cl, and Br, the PM6 AUE was 4.4 kcal mol−1. The equivalent AUEs for other methods were: RM1: 5.0, B3LYP 6–31G*: 5.2, PM5: 5.7, PM3: 6.3, HF 6–31G*: 7.4, and AM1: 10.0 kcal mol−1. Several long-standing faults in AM1 and PM3 have been corrected and significant improvements have been made in the prediction of geometries. [Figure: calculated structure of the complex ion [Ta6Cl12]2+; reference values in parentheses.] Electronic supplementary material: the online version of this article (doi:10.1007/s00894-007-0233-4) contains supplementary material, which is available to authorized users. PMID:17828561
Maliassov, S.Y.
1996-12-31
An approach to the construction of an iterative method for solving systems of linear algebraic equations arising from nonconforming finite element discretizations with nonmatching grids for second order elliptic boundary value problems with anisotropic coefficients is considered. The technique suggested is based on decomposition of the original domain into nonoverlapping subdomains. The elliptic problem is presented in the macro-hybrid form with Lagrange multipliers at the interfaces between subdomains. A block diagonal preconditioner is proposed which is spectrally equivalent to the original saddle point matrix and has the optimal order of arithmetical complexity. The preconditioner includes blocks for preconditioning subdomain and interface problems. It is shown that the constants of spectral equivalence are independent of the values of the coefficients and the mesh step size.
Nakano, Masayoshi; Minami, Takuya; Fukui, Hitoshi; Yoneda, Kyohei; Shigeta, Yasuteru; Kishi, Ryohei; Champagne, Benoît; Botek, Edith
2015-01-22
We develop a novel method for the calculation and the analysis of the one-electron reduced densities in open-shell molecular systems using the natural orbitals and approximate spin projected occupation numbers obtained from broken symmetry (BS), i.e., spin-unrestricted (U), density functional theory (DFT) calculations. The performance of this approximate spin projection (ASP) scheme is examined for the diradical character dependence of the second hyperpolarizability (γ) using several exchange-correlation functionals, i.e., hybrid and long-range corrected UDFT schemes. It is found that the ASP-LC-UBLYP method with a range separating parameter μ = 0.47 reproduces semi-quantitatively the strongly-correlated [UCCSD(T)] result for p-quinodimethane, i.e., the γ variation as a function of the diradical character.
NASA Astrophysics Data System (ADS)
Nakano, Masayoshi; Minami, Takuya; Fukui, Hitoshi; Yoneda, Kyohei; Shigeta, Yasuteru; Kishi, Ryohei; Champagne, Benoît; Botek, Edith
2015-01-01
We develop a novel method for the calculation and the analysis of the one-electron reduced densities in open-shell molecular systems using the natural orbitals and approximate spin projected occupation numbers obtained from broken symmetry (BS), i.e., spin-unrestricted (U), density functional theory (DFT) calculations. The performance of this approximate spin projection (ASP) scheme is examined for the diradical character dependence of the second hyperpolarizability (γ) using several exchange-correlation functionals, i.e., hybrid and long-range corrected UDFT schemes. It is found that the ASP-LC-UBLYP method with a range separating parameter μ = 0.47 reproduces semi-quantitatively the strongly-correlated [UCCSD(T)] result for p-quinodimethane, i.e., the γ variation as a function of the diradical character.
NASA Astrophysics Data System (ADS)
Husein, Andri S.; Cari, C.; Suparmi, A.; Hadi, Miftachul
2016-03-01
We investigate the propagation of electromagnetic waves in the transverse magnetic (TM) mode through material interfaces whose permittivity or permeability profiles are graded from positive to negative values, using the asymptotic iteration method (AIM). The permittivity and permeability profiles, which set the optical character of the materials, are constructed from constant or hyperbolic functions. We present approximate solutions for the distribution of the magnetic field and the wave vector for the eight material models.
NASA Astrophysics Data System (ADS)
Viquerat, Jonathan; Lanteri, Stéphane
2016-01-01
During the last ten years, the discontinuous Galerkin time-domain (DGTD) method has progressively emerged as a viable alternative to well established finite-difference time-domain (FDTD) and finite-element time-domain (FETD) methods for the numerical simulation of electromagnetic wave propagation problems in the time-domain. The method is now actively studied in various application contexts, including those requiring the modeling of light/matter interactions on the nanoscale. Several recent works have demonstrated the viability of the DGTD method for nanophotonics. In this paper we further demonstrate the capabilities of the method for the simulation of near-field plasmonic interactions, considering in particular the possibility of combining a locally refined conforming tetrahedral mesh with a local adaptation of the approximation order.
NASA Astrophysics Data System (ADS)
Li, Li; Hu, Yujin; Wang, Xuelin
2013-07-01
It is difficult, and generally unnecessary, to obtain all the eigenpairs of a large-scale viscoelastic (nonviscous or hysteretic) damping system; a mode truncation scheme is therefore used, which introduces a mode-truncation error. This study aims to eliminate the influence of the unavailable modes on the dynamic response of MDOF systems with viscoelastic hereditary terms. The energy dissipation terms of the system depend on the past history of motion via convolution integrals over kernel functions; the system is therefore nonviscously damped, which is the most general damping model within the scope of a linear mechanical analysis. To approximate the frequency response function (FRF) matrix and the response without the unavailable modes, we suggest two methods that express the influence of the unavailable modes in terms of the lower modes and the system matrices, using the first one or two terms of the Neumann expansion of the truncated-mode contribution. The expansion cannot be extended to higher-order terms, since those terms would be affected by the frequency-dependent variation of the damping matrix carried over from previous terms. Finally, an example shows that the two presented methods reduce the mode-truncation error and may be used to approximate the contribution of the nonviscous modes to the FRF matrix, since the nonviscous modes are difficult to obtain accurately even for small-scale models with some eigensolution methods.
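For the undamped special case, the first Neumann term of the truncated-mode contribution reduces to the familiar static (residual-flexibility) correction, which the following sketch illustrates on a small spring-mass chain. This is a toy analogue under stated assumptions; the viscoelastic convolution kernels of the paper are beyond it.

```python
import numpy as np

# 4-DOF spring-mass chain with unit masses, so K phi = lam phi with M = I.
K = (np.diag([2.0, 2.0, 2.0, 2.0])
     - np.diag([1.0] * 3, 1) - np.diag([1.0] * 3, -1))
lam, phi = np.linalg.eigh(K)   # lam = squared natural frequencies

def frf_exact(w2):
    """Receptance matrix H(w) from all modes (w2 = omega squared)."""
    return sum(np.outer(phi[:, r], phi[:, r]) / (lam[r] - w2)
               for r in range(4))

def frf_truncated(w2, m):
    """H(w) keeping only the lowest m modes."""
    return sum(np.outer(phi[:, r], phi[:, r]) / (lam[r] - w2)
               for r in range(m))

def frf_corrected(w2, m):
    """Truncated H(w) plus the static (first Neumann term) contribution
    of the discarded modes: K^{-1} minus the static part of kept modes."""
    resid = np.linalg.inv(K) - sum(
        np.outer(phi[:, r], phi[:, r]) / lam[r] for r in range(m))
    return frf_truncated(w2, m) + resid

w2 = 0.1   # excitation frequency squared, below the first natural frequency
e_trunc = np.linalg.norm(frf_truncated(w2, 2) - frf_exact(w2))
e_corr = np.linalg.norm(frf_corrected(w2, 2) - frf_exact(w2))
```

The corrected FRF replaces each discarded term 1/(λ_r − ω²) by its ω = 0 value 1/λ_r, so the residual error scales with ω²/λ_r² instead of 1/λ_r, which is why the correction is most effective well below the truncated frequencies.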
NASA Astrophysics Data System (ADS)
Higuchi, Katsuhiko; Hamal, Dipendra Bahadur; Higuchi, Masahiko
2015-02-01
We present a relativistic tight-binding (TB) approximation method that is applicable to actual crystalline materials immersed in a uniform magnetic field. The magnetic Bloch theorem is used to make the dimensions of the Hamiltonian matrix finite. In addition, by means of perturbation theory, the magnetic hopping integral that appears in the Hamiltonian matrix is reasonably approximated as the relativistic hopping integral multiplied by a magnetic-field-dependent phase factor. In order to calculate the relativistic hopping integral, the relativistic version of the so-called Slater-Koster table is also given in an explicit form. We apply the present method to crystalline silicon immersed in a uniform magnetic field and reveal its energy-band structure as defined in the magnetic first Brillouin zone. It is found that the widths of the energy bands increase with increasing magnetic field, which indicates that the appropriateness of the effective-mass approximation depends on the magnetic field. The recursive energy spectrum, the so-called butterfly diagram, can also be seen in the k-space plane perpendicular to the magnetic field.
Anisimova, Maria; Gil, Manuel; Dufayard, Jean-François; Dessimoz, Christophe; Gascuel, Olivier
2011-01-01
Phylogenetic inference and evaluating support for inferred relationships is at the core of many studies testing evolutionary hypotheses. Despite the popularity of nonparametric bootstrap frequencies and Bayesian posterior probabilities, the interpretation of these measures of tree branch support remains a source of discussion. Furthermore, both methods are computationally expensive and become prohibitive for large data sets. Recent fast approximate likelihood-based measures of branch supports (approximate likelihood ratio test [aLRT] and Shimodaira–Hasegawa [SH]-aLRT) provide a compelling alternative to these slower conventional methods, offering not only speed advantages but also excellent levels of accuracy and power. Here we propose an additional method: a Bayesian-like transformation of aLRT (aBayes). Considering both probabilistic and frequentist frameworks, we compare the performance of the three fast likelihood-based methods with the standard bootstrap (SBS), the Bayesian approach, and the recently introduced rapid bootstrap. Our simulations and real data analyses show that with moderate model violations, all tests are sufficiently accurate, but aLRT and aBayes offer the highest statistical power and are very fast. With severe model violations aLRT, aBayes and Bayesian posteriors can produce elevated false-positive rates. With data sets for which such violation can be detected, we recommend using SH-aLRT, the nonparametric version of aLRT based on a procedure similar to the Shimodaira–Hasegawa tree selection. In general, the SBS seems to be excessively conservative and is much slower than our approximate likelihood-based methods. PMID:21540409
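The Bayesian-like transformation of the aLRT statistic can be sketched as follows: assuming equal prior weight on the three possible NNI configurations around an internal branch, the support is the posterior probability of the best configuration computed from their three log-likelihoods. This is a schematic reading of the aBayes idea, not the authors' exact implementation.

```python
import math

def abayes_support(logliks):
    """Posterior-like support for the best of the three NNI
    configurations around a branch, from their log-likelihoods,
    assuming equal priors (shifted by the max for numerical stability)."""
    m = max(logliks)
    weights = [math.exp(l - m) for l in logliks]
    return max(weights) / sum(weights)

# A branch whose best rearrangement beats the alternatives by ~10 log units:
support = abayes_support([-1234.5, -1244.8, -1246.1])
```

Because the three likelihoods are already computed for the aLRT, this transformation adds essentially no cost, which is consistent with the speed advantage reported over bootstrap and full Bayesian analyses.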
Efficient parallel solution of parabolic equations - Implicit methods on the Cedar multicluster
NASA Technical Reports Server (NTRS)
Gallopoulos, E.; Saad, Y.
1990-01-01
A class of implicit methods for the parallel solution of linear parabolic differential equations, based on Pade and Chebyshev rational approximations to the matrix exponential, is presented. This approach offers natural hierarchical parallelism, improved intrinsic efficiency, and fewer timesteps. These advantages lead to an extremely fast family of methods for the solution of certain time-dependent problems. The techniques are illustrated with numerical experiments on the University of Illinois Cedar multicluster architecture. The experiments indicate that implicit methods of very high degree offer great promise for the solution of certain parabolic problems in a computational environment with parallel resources. Hierarchically organized parallel computers, such as the Cedar multicluster, are found to be especially attractive for these schemes.
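The lowest member of the family — the diagonal (1,1) Pade approximant of the matrix exponential, which is exactly the Crank-Nicolson propagator — can be sketched as follows. The high-degree Pade and Chebyshev schemes of the paper factor the rational approximant into independent linear solves that run in parallel; this toy example performs only the single solve of the (1,1) case.

```python
import numpy as np

def pade11_propagator(A, dt):
    """Diagonal (1,1) Pade approximant of expm(dt*A):
    (I - dt/2 A)^{-1} (I + dt/2 A), i.e. the Crank-Nicolson step."""
    I = np.eye(A.shape[0])
    return np.linalg.solve(I - 0.5 * dt * A, I + 0.5 * dt * A)

# Heat-equation-like test: symmetric negative definite A, with the exact
# matrix exponential obtained from the eigendecomposition.
A = np.array([[-2.0, 1.0],
              [1.0, -2.0]])
dt, steps = 0.05, 20
lam, V = np.linalg.eigh(A)
exact = V @ np.diag(np.exp(dt * steps * lam)) @ V.T
P = pade11_propagator(A, dt)
approx = np.linalg.matrix_power(P, steps)
```

Diagonal Pade approximants are A-stable for this class of problems (every eigenvalue of P has modulus below one when A's spectrum lies in the left half-plane), which is what lets the implicit family take far fewer timesteps than explicit schemes.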
Giese, Timothy J.; York, Darrin M.
2010-01-01
We extend the Kohn–Sham potential energy expansion (VE) to include variations of the kinetic energy density and use the VE formulation with a 6-31G* basis to perform a “Jacob’s ladder” comparison of small molecule properties using density functionals classified as being either LDA, GGA, or meta-GGA. We show that the VE reproduces standard Kohn–Sham DFT results well if all integrals are performed without further approximation, and there is no substantial improvement in using meta-GGA functionals relative to GGA functionals. The advantages of using GGA versus LDA functionals becomes apparent when modeling hydrogen bonds. We furthermore examine the effect of using integral approximations to compute the zeroth-order energy and first-order matrix elements, and the results suggest that the origin of the short-range repulsive potential within self-consistent charge density-functional tight-binding methods mainly arises from the approximations made to the first-order matrix elements. PMID:21197976
Giese, Timothy J; York, Darrin M
2010-12-28
We extend the Kohn-Sham potential energy expansion (VE) to include variations of the kinetic energy density and use the VE formulation with a 6-31G* basis to perform a "Jacob's ladder" comparison of small molecule properties using density functionals classified as being either LDA, GGA, or meta-GGA. We show that the VE reproduces standard Kohn-Sham DFT results well if all integrals are performed without further approximation, and there is no substantial improvement in using meta-GGA functionals relative to GGA functionals. The advantages of using GGA versus LDA functionals becomes apparent when modeling hydrogen bonds. We furthermore examine the effect of using integral approximations to compute the zeroth-order energy and first-order matrix elements, and the results suggest that the origin of the short-range repulsive potential within self-consistent charge density-functional tight-binding methods mainly arises from the approximations made to the first-order matrix elements.
NASA Technical Reports Server (NTRS)
Stiehl, A. L.; Haberman, R. C.; Cowles, J. H.
1988-01-01
An approximate method to compute the maximum deformation and permanent set of a beam subjected to shock wave loading in vacuo and in water was investigated. The method equates the maximum kinetic energy of the beam (and water) to the elastic plastic work done by a static uniform load applied to a beam. Results for the water case indicate that the plastic deformation is controlled by the kinetic energy of the water. The simplified approach can result in significant savings in computer time or it can expediently be used as a check of results from a more rigorous approach. The accuracy of the method is demonstrated by various examples of beams with simple support and clamped support boundary conditions.
NASA Astrophysics Data System (ADS)
Assous, Franck; Chaskalovic, Joël
2013-03-01
In this Note, we propose a new methodology based on exploratory data mining techniques to evaluate the errors due to the description of a given real system. First, we decompose this description error into four types of sources. Then, we construct databases of the entire information produced by different numerical approximation methods, to assess and compare the significant differences between these methods, using techniques like decision trees, Kohonen maps, or neural networks. As an example, we characterize specific states of the real system for which we can locally appreciate the accuracy between two kinds of finite element methods. In this case, this allowed us to refine the classical Bramble-Hilbert theorem, which gives a global error estimate, whereas our approach gives a local error estimate.
Hromadka, T.V.; Guymon, G.L.
1985-01-01
An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to apply to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element method. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.
Regnier, D.; Verriere, M.; Dubray, N.; Schunck, N.
2015-11-30
In this study, we describe the software package FELIX that solves the equations of the time-dependent generator coordinate method (TDGCM) in N dimensions (N ≥ 1) under the Gaussian overlap approximation. The numerical resolution is based on the Galerkin finite element discretization of the collective space and the Crank-Nicolson scheme for time integration. The TDGCM solver is implemented entirely in C++. Several additional tools written in C++, Python or bash scripting language are also included for convenience. In this paper, the solver is tested with a series of benchmark calculations. We also demonstrate the ability of our code to handle a realistic calculation of fission dynamics.
NASA Astrophysics Data System (ADS)
Regnier, D.; Verrière, M.; Dubray, N.; Schunck, N.
2016-03-01
We describe the software package FELIX that solves the equations of the time-dependent generator coordinate method (TDGCM) in N-dimensions (N ≥ 1) under the Gaussian overlap approximation. The numerical resolution is based on the Galerkin finite element discretization of the collective space and the Crank-Nicolson scheme for time integration. The TDGCM solver is implemented entirely in C++. Several additional tools written in C++, Python or bash scripting language are also included for convenience. In this paper, the solver is tested with a series of benchmark calculations. We also demonstrate the ability of our code to handle a realistic calculation of fission dynamics.
Rossi, Mariana; Liu, Hanchao; Bowman, Joel; Paesani, Francesco; Ceriotti, Michele
2014-11-14
Including quantum mechanical effects on the dynamics of nuclei in the condensed phase is challenging, because the complexity of exact methods grows exponentially with the number of quantum degrees of freedom. Efforts to circumvent these limitations can be traced down to two approaches: methods that treat a small subset of the degrees of freedom with rigorous quantum mechanics, considering the rest of the system as a static or classical environment, and methods that treat the whole system quantum mechanically, but using approximate dynamics. Here, we perform a systematic comparison between these two philosophies for the description of quantum effects in vibrational spectroscopy, taking the Embedded Local Monomer model and a mixed quantum-classical model as representatives of the first family of methods, and centroid molecular dynamics and thermostatted ring polymer molecular dynamics as examples of the latter. We use as benchmarks D{sub 2}O doped with HOD and pure H{sub 2}O at three distinct thermodynamic state points (ice Ih at 150 K, and the liquid at 300 K and 600 K), modeled with the simple q-TIP4P/F potential energy and dipole moment surfaces. With few exceptions the different techniques yield IR absorption frequencies that are consistent with one another within a few tens of cm{sup −1}. Comparison with classical molecular dynamics demonstrates the importance of nuclear quantum effects up to the highest temperature, and a detailed discussion of the discrepancies between the various methods let us draw some (circumstantial) conclusions about the impact of the very different approximations that underlie them. Such cross validation between radically different approaches could indicate a way forward to further improve the state of the art in simulations of condensed-phase quantum dynamics.
NASA Astrophysics Data System (ADS)
Yeckel, Andrew; Lun, Lisa; Derby, Jeffrey J.
2009-12-01
A new, approximate block Newton (ABN) method is derived and tested for the coupled solution of nonlinear models, each of which is treated as a modular, black box. Such an approach is motivated by a desire to maintain software flexibility without sacrificing solution efficiency or robustness. Though block Newton methods of similar type have been proposed and studied, we present a unique derivation and use it to sort out some of the more confusing points in the literature. In particular, we show that our ABN method behaves like a Newton iteration preconditioned by an inexact Newton solver derived from subproblem Jacobians. The method is demonstrated on several conjugate heat transfer problems modeled after melt crystal growth processes. These problems are represented by partitioned spatial regions, each modeled by independent heat transfer codes and linked by temperature and flux matching conditions at the boundaries common to the partitions. Whereas a typical block Gauss-Seidel iteration fails about half the time for the model problem, quadratic convergence is achieved by the ABN method under all conditions studied here. Additional performance advantages over existing methods are demonstrated and discussed.
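The contrast drawn above between block Gauss-Seidel and Newton-coupled black-box modules can be sketched on a toy two-module system. The sketch below is not the paper's ABN preconditioning: it simply runs a Newton iteration on the interface unknowns with a finite-difference Jacobian built from black-box module evaluations, and the module equations (`module_A`, `module_B`) are invented for illustration.

```python
import math

def module_A(y):
    # black-box solver 1 (toy stand-in): returns x satisfying x = cos(y)
    return math.cos(y)

def module_B(x):
    # black-box solver 2 (toy stand-in): returns y satisfying y = sin(x)/2
    return 0.5 * math.sin(x)

def interface_residual(x, y):
    """Interface matching conditions: both modules must agree at the coupling."""
    return (x - module_A(y), y - module_B(x))

def coupled_newton(x, y, tol=1e-12, eps=1e-7):
    """Newton iteration on the two interface unknowns, with a 2x2
    finite-difference Jacobian assembled from module evaluations only."""
    for _ in range(50):
        r1, r2 = interface_residual(x, y)
        if abs(r1) + abs(r2) < tol:
            break
        # finite-difference Jacobian columns
        a = (interface_residual(x + eps, y)[0] - r1) / eps
        b = (interface_residual(x, y + eps)[0] - r1) / eps
        c = (interface_residual(x + eps, y)[1] - r2) / eps
        d = (interface_residual(x, y + eps)[1] - r2) / eps
        det = a * d - b * c
        # solve the 2x2 Newton system by Cramer's rule and update
        x -= (d * r1 - b * r2) / det
        y -= (-c * r1 + a * r2) / det
    return x, y
```

A plain Gauss-Seidel alternation x = module_A(y); y = module_B(x) would also converge on this benign toy problem; the point of Newton-type coupling, as the abstract notes, is robustness when the fixed-point iteration stalls or diverges.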
NASA Astrophysics Data System (ADS)
Ibáñez, Javier; Hernández, Vicente
2011-03-01
Differential Matrix Riccati Equations (DMREs) appear in several branches of science such as applied physics and engineering. For example, these equations play a fundamental role in control theory, optimal control, filtering and estimation, decoupling and order reduction, etc. In this paper a new method, based on a theorem proved herein, is described for solving DMREs by a piecewise-linearized approach. This method is applied to develop two block-oriented algorithms based on diagonal Padé approximants. MATLAB implementations of these algorithms are developed and compared, under equal conditions, in accuracy and computational cost with other piecewise-linearized algorithms implemented by the authors. Experimental results show the advantages of solving stiff or non-stiff DMREs by the implemented algorithms.
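The diagonal Padé approximants at the core of such algorithms can be shown in their simplest form. The sketch below is a scalar [2/2] diagonal Padé approximation of the exponential with scaling and squaring, not the paper's block-oriented matrix algorithms; the same rational function is what gets applied to matrices in practice.

```python
import math

def pade22_exp(z, squarings=6):
    """Diagonal [2/2] Pade approximant of exp(z) with scaling and squaring:
    exp(z) = (exp(z / 2**s)) ** (2**s), with the inner exponential replaced by
    R22(w) = (1 + w/2 + w^2/12) / (1 - w/2 + w^2/12)."""
    w = z / (2 ** squarings)
    num = 1.0 + w / 2.0 + w * w / 12.0
    den = 1.0 - w / 2.0 + w * w / 12.0
    r = num / den
    # undo the scaling by repeated squaring
    for _ in range(squarings):
        r *= r
    return r
```

Diagonal Padé approximants are preferred over truncated Taylor series here because they are A-stable: |R22(z)| < 1 whenever Re(z) < 0, which matters for stiff problems like the DMREs discussed above.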
Maltais Lapointe, Genevieve; Lynnerup, Niels; Hoppa, Robert D
2016-01-01
The most common method to predict nasal projection for forensic facial approximation is Gerasimov's two-tangent method. Ullrich H, Stephan CN (J Forensic Sci, 2011; 56: 470) argued that the method has not been properly implemented, and a revised interpretation was proposed. The aim of this study was to compare the accuracy of both versions using a sample of 66 postmortem cranial CT data. The true nasal tip was defined using the pronasale and the nasal spine line, as it was not originally specified by Gerasimov. The original guidelines were found to be highly inaccurate, with the position of the nasal tip being overestimated by c. 2 cm. Despite the revised interpretation consistently resulting in smaller distances from the true nasal tip, the method was not statistically accurate (p > 0.05) in positioning the tip of the nose (absolute distance >5 mm). These results support the conclusion that Gerasimov's method was not properly performed, and the Ullrich H, Stephan CN (J Forensic Sci, 2011; 56: 470) interpretation should be used instead.
NASA Technical Reports Server (NTRS)
Chaudhuri, Reaz A.; Seide, Paul
1987-01-01
An approximate semianalytical method for determination of the interlaminar shear stress distribution through the thickness of an arbitrarily laminated thick plate is presented. The method is based on the assumptions of transverse inextensibility and layerwise constant shear angle theory (LCST) and utilizes an assumed quadratic displacement potential energy based finite element method (FEM). The centroid of the triangular surface has been proved, from a rigorous mathematical point of view (Aubin-Nitsche theory), to be the point of exceptional accuracy for the interlaminar shear stresses. Numerical results indicate close agreement with the available three-dimensional elasticity theory solutions. A comparison between the present theory and that due to an assumed stress hybrid FEM suggests that the (normal) traction-free-edge condition is not satisfied in the latter approach. Furthermore, the present paper is the first to present results for interlaminar shear stresses in a two-layer thick square plate of balanced unsymmetric angle-ply construction. A comparison with the recently proposed Equilibrium Method (EM) indicates the superiority of the present method, because the present method assures faster convergence as well as simultaneous vanishing of the transverse shear stresses on both of the exposed surfaces of the laminate. Superiority of the present method over the EM, in the case of a symmetric laminate, is limited to faster convergence alone. It has also been demonstrated that the combination of the present method and a reduced (quadratic order) numerical integration scheme yields convergence of the interlaminar shear stresses almost as rapidly as that of the nodal displacements, in the case of a thin plate.
On the convergence of local approximations to pseudodifferential operators with applications
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1994-01-01
We consider the approximation of a class of pseudodifferential operators by sequences of operators which can be expressed as compositions of differential operators and their inverses. We show that the error in such approximations can be bounded in terms of the L1 error in approximating a convolution kernel, and use this fact to develop convergence results. Our main result is a finite-time convergence analysis of the Engquist-Majda Pade approximants to the square root of the d'Alembertian. We also show that no spatially local approximation to this operator can be convergent uniformly in time. We propose some temporally local but spatially nonlocal operators with better long-time behavior. These are based on Laguerre and exponential series.
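For context, the Engquist-Majda hierarchy referred to above replaces the symbol sqrt(1-s^2) of the exact one-way wave operator by rational (Pade-type) approximants; the first three standard members are:

```latex
% Successive Engquist-Majda approximations of the one-way wave symbol:
\sqrt{1-s^{2}} \;\approx\; 1, \qquad
\sqrt{1-s^{2}} \;\approx\; 1-\frac{s^{2}}{2}, \qquad
\sqrt{1-s^{2}} \;\approx\; \frac{1-\tfrac{3}{4}s^{2}}{1-\tfrac{1}{4}s^{2}} .
```

Each approximant is a locally computable (differential) substitute for the nonlocal square root, which is exactly the class of local approximations whose uniform-in-time convergence the paper rules out.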
NASA Astrophysics Data System (ADS)
Deta, U. A.; Suparmi, Cari
2013-09-01
The approximate analytical solution of the Schrodinger equation in D dimensions for the Scarf trigonometric potential was investigated using the Nikiforov-Uvarov method. The bound-state energies are given in closed form, and the corresponding wave functions for arbitrary l-states in D dimensions are formulated in terms of generalized Jacobi polynomials. Examples of bound-state energies and wave functions in 3, 4, and 5 dimensions are presented for the ground state through the second excited state. Increasing the number of dimensions increases the bound-state energy and the amplitude of the wave function for this potential. The presence of the Scarf trigonometric potential raises the energy spectrum.
NASA Astrophysics Data System (ADS)
Thompson, C. P.; Lezeau, P.
1998-11-01
In recent years multigrid algorithms have been applied to increasingly difficult systems of partial differential equations and major improvements in both speed of convergence and robustness have been achieved. Problems involving several interacting fluids are of great interest in many industrial applications, especially in the process and petro-chemical sectors. However, the multifluid version of the Navier-Stokes equations is extremely complex and represents a challenge to advanced numerical algorithms. In this paper, we describe an extension of the full approximation storage (FAS) multigrid algorithm to the multifluid equations. A number of special issues had to be addressed. The first was the development of a customised, non-linear, coupled relaxation scheme for the smoothing step. Automatic differentiation was used to facilitate the coding of a robust, globally convergent quasi-Newton method. It was also necessary to use special inter-grid transfer operators to maintain the realisability of the solution. Algorithmic details are given and solutions for a series of test problems are compared with those from a widely validated, commercial code. The new approach has proved to be robust; it achieves convergence without resorting to specialised initialisation methods. Moreover, even though the rate of convergence is complex, the method has achieved very good reduction factors: typically five orders of magnitude in 50 cycles.
Ball, R D
2001-01-01
We describe an approximate method for the analysis of quantitative trait loci (QTL) based on model selection from multiple regression models with trait values regressed on marker genotypes, using a modification of the easily calculated Bayesian information criterion to estimate the posterior probability of models with various subsets of markers as variables. The BIC-delta criterion, with the parameter delta increasing the penalty for additional variables in a model, is further modified to incorporate prior information, and missing values are handled by multiple imputation. Marginal probabilities for model sizes are calculated, and the posterior probability of nonzero model size is interpreted as the posterior probability of existence of a QTL linked to one or more markers. The method is demonstrated on analysis of associations between wood density and markers on two linkage groups in Pinus radiata. Selection bias, which is the bias that results from using the same data to both select the variables in a model and estimate the coefficients, is shown to be a problem for commonly used non-Bayesian methods for QTL mapping, which do not average over alternative possible models that are consistent with the data. PMID:11729175
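The BIC-based model comparison described above can be sketched in a few lines. The exact BIC-delta criterion and priors of the paper are not reproduced; the sketch below assumes the common form BIC_delta = n ln(RSS/n) + delta k ln(n) with delta inflating the per-variable penalty, and converts criterion values into approximate posterior model probabilities.

```python
import math

def rss_simple_regression(x, y):
    """Residual sum of squares for y ~ a + b*x by ordinary least squares."""
    n = len(y)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    return sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

def bic_delta(rss, n, k, delta=2.0):
    """BIC with an inflated penalty delta on the number of markers k
    (assumed form; the paper's exact criterion may differ)."""
    return n * math.log(rss / n) + delta * k * math.log(n)

def posterior_probs(bics):
    """Approximate posterior model probabilities: weight exp(-BIC/2), normalized."""
    m = min(bics)
    w = [math.exp(-(b - m) / 2.0) for b in bics]
    s = sum(w)
    return [wi / s for wi in w]
```

Averaging quantities of interest over these model probabilities, rather than conditioning on the single best model, is what protects against the selection bias the abstract highlights.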
NASA Astrophysics Data System (ADS)
Yi, Longtao; Sun, Tianxi; Wang, Kai; Qin, Min; Yang, Kui; Wang, Jinbang; Liu, Zhiguo
2016-08-01
Confocal three-dimensional micro X-ray fluorescence (3D MXRF) is an excellent surface analysis technology. For a confocal structure, only the X-rays from the confocal volume can be detected. Confocal 3D MXRF has been widely used for analysing elements, the distribution of elements and 3D images of some special samples. However, it has rarely been applied to analysing surface topography by surface scanning. In this paper, a confocal 3D MXRF technology based on polycapillary X-ray optics was proposed for determining surface topography. A corresponding surface-adaptive algorithm based on a progressive approximation method was designed to obtain surface topography. The surface topographies of the letter "R" on a coin of the People's Republic of China and of a small pit on painted pottery were obtained. The surface topography of the "R" and the pit are clearly shown in the two figures. Compared with the method in our previous study, it exhibits a higher scanning efficiency. This approach could be used for two-dimensional (2D) elemental mapping or 3D elemental voxel mapping measurements as an auxiliary method. It could also be used to analyse elemental maps while obtaining the surface topography of a sample in a 2D elemental mapping measurement.
Higher-order numerical methods derived from three-point polynomial interpolation
NASA Technical Reports Server (NTRS)
Rubin, S. G.; Khosla, P. K.
1976-01-01
Higher-order collocation procedures resulting in tridiagonal matrix systems are derived from polynomial spline interpolation and Hermitian finite-difference discretization. The equations generally apply for both uniform and variable meshes. Hybrid schemes resulting from different polynomial approximations for first and second derivatives lead to the nonuniform mesh extension of the so-called compact or Pade difference techniques. A variety of fourth-order methods are described and this concept is extended to sixth-order. Solutions with these procedures are presented for the similar and non-similar boundary layer equations with and without mass transfer, the Burgers equation, and the incompressible viscous flow in a driven cavity. Finally, the interpolation procedure is used to derive higher-order temporal integration schemes and results are shown for the diffusion equation.
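The compact (Pade) difference idea described above can be made concrete on a uniform mesh. A minimal sketch, assuming the classical fourth-order compact relation for the second derivative s = f'', (s[i-1] + 10 s[i] + s[i+1])/12 = (f[i-1] - 2 f[i] + f[i+1])/h^2, with boundary values of s supplied externally (the paper's variable-mesh and hybrid schemes are not reproduced):

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b, super-diagonal c."""
    n = len(d)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def compact_second_derivative(f, h, s_left, s_right):
    """Fourth-order compact (Pade-type) approximation of f'' on a uniform mesh.
    Solves the implicit tridiagonal relation for the interior values of s = f'',
    with the boundary values s_left, s_right assumed known."""
    n = len(f) - 2  # number of interior unknowns
    a = [1.0 / 12.0] * n
    b = [10.0 / 12.0] * n
    c = [1.0 / 12.0] * n
    d = [(f[i - 1] - 2.0 * f[i] + f[i + 1]) / (h * h) for i in range(1, n + 1)]
    d[0] -= s_left / 12.0
    d[-1] -= s_right / 12.0
    return thomas(a, b, c, d)
```

The payoff mentioned in the abstract is visible here: the stencil is only three points wide, yet the implicit coupling of the derivative values lifts the accuracy from second to fourth order at the cost of one tridiagonal solve.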
NASA Technical Reports Server (NTRS)
Mair, R. W.; Sen, P. N.; Hurlimann, M. D.; Patz, S.; Cory, D. G.; Walsworth, R. L.
2002-01-01
We report a systematic study of xenon gas diffusion NMR in simple model porous media, random packs of mono-sized glass beads, and focus on three specific areas peculiar to gas-phase diffusion. These topics are: (i) diffusion of spins on the order of the pore dimensions during the application of the diffusion encoding gradient pulses in a PGSE experiment (breakdown of the narrow pulse approximation and imperfect background gradient cancellation), (ii) the ability to derive long length scale structural information, and (iii) effects of finite sample size. We find that the time-dependent diffusion coefficient, D(t), of the imbibed xenon gas at short diffusion times in small beads is significantly affected by the gas pressure. In particular, as expected, we find smaller deviations between measured D(t) and theoretical predictions as the gas pressure is increased, resulting from reduced diffusion during the application of the gradient pulse. The deviations are then completely removed when water D(t) is observed in the same samples. The use of gas also allows us to probe D(t) over a wide range of length scales and observe the long time asymptotic limit which is proportional to the inverse tortuosity of the sample, as well as the diffusion distance where this limit takes effect (approximately 1-1.5 bead diameters). The Pade approximation can be used as a reference for expected xenon D(t) data between the short and the long time limits, allowing us to explore deviations from the expected behavior at intermediate times as a result of finite sample size effects. Finally, the application of the Pade interpolation between the long and the short time asymptotic limits yields a fitted length scale (the Pade length), which is found to be approximately 0.13b for all bead packs, where b is the bead diameter.
Meromorphic approximants to complex Cauchy transforms with polar singularities
Baratchart, Laurent; Yattselev, Maxim L
2009-10-31
We study AAK-type meromorphic approximants to functions of the form F(z) = ∫ dλ(t)/(z−t) + R(z), where R is a rational function and λ is a complex measure with compact regular support included in (−1,1), whose argument has bounded variation on the support. The approximation is understood in the L^p-norm of the unit circle, p ≥ 2. We dwell on the fact that the denominators of such approximants satisfy certain non-Hermitian orthogonal relations with varying weights. They resemble the orthogonality relations that arise in the study of multipoint Pade approximants. However, the varying part of the weight implicitly depends on the orthogonal polynomials themselves, which constitutes the main novelty and the main difficulty of the undertaken analysis. We obtain that the counting measures of poles of the approximants converge to the Green equilibrium distribution on the support of λ relative to the unit disc, that the approximants themselves converge in capacity to F, and that the poles of R attract at least as many poles of the approximants as their multiplicity and not much more. Bibliography: 35 titles.
Frozen Gaussian approximation-based two-level methods for multi-frequency Schrödinger equation
NASA Astrophysics Data System (ADS)
Lorin, E.; Yang, X.
2016-10-01
In this paper, we develop two-level numerical methods for the time-dependent Schrödinger equation (TDSE) in multi-frequency regime. This work is motivated by attosecond science (Corkum and Krausz, 2007), which refers to the interaction of short and intense laser pulses with quantum particles generating wide frequency spectrum light, and allowing for the coherent emission of attosecond pulses (1 attosecond=10-18 s). The principle of the proposed methods consists in decomposing a wavefunction into a low/moderate frequency (quantum) contribution, and a high frequency contribution exhibiting a semi-classical behavior. Low/moderate frequencies are computed through the direct solution to the quantum TDSE on a coarse mesh, and the high frequency contribution is computed by frozen Gaussian approximation (Herman and Kluk, 1984). This paper is devoted to the derivation of consistent, accurate and efficient algorithms performing such a decomposition and the time evolution of the wavefunction in the multi-frequency regime. Numerical simulations are provided to illustrate the accuracy and efficiency of the derived algorithms.
NASA Astrophysics Data System (ADS)
Dalmasse, Kevin; Nychka, Douglas; Gibson, Sarah; Flyer, Natasha; Fan, Yuhong
2016-07-01
The Coronal Multichannel Polarimeter (CoMP) routinely performs coronal polarimetric measurements using the Fe XIII 10747 Å and 10798 Å lines, which are sensitive to the coronal magnetic field. However, inverting such polarimetric measurements into magnetic field data is a difficult task because the corona is optically thin at these wavelengths and the observed signal is therefore the integrated emission of all the plasma along the line of sight. To overcome this difficulty, we take a new approach that combines a parameterized 3D magnetic field model with forward modeling of the polarization signal. For that purpose, we develop a new, fast, and efficient optimization method for model-data fitting: the Radial-basis-functions Optimization Approximation Method (ROAM). Model-data fitting is achieved by optimizing a user-specified log-likelihood function that quantifies the differences between the observed polarization signal and its synthetic/predicted analogue. Speed and efficiency are obtained by combining sparse evaluation of the magnetic model with radial-basis-function (RBF) decomposition of the log-likelihood function. The RBF decomposition provides an analytical expression for the log-likelihood function that is used to inexpensively estimate the set of parameter values optimizing it. We test and validate ROAM on a synthetic test bed of a coronal magnetic flux rope and show that it performs well with a significantly sparse sample of the parameter space. We conclude that our optimization method is well-suited for fast and efficient model-data fitting and can be exploited for converting coronal polarimetric measurements, such as the ones provided by CoMP, into coronal magnetic field data.
NASA Astrophysics Data System (ADS)
Bozkaya, Uǧur; Sherrill, C. David
2016-05-01
An efficient implementation is presented for analytic gradients of the coupled-cluster singles and doubles (CCSD) method with the density-fitting approximation, denoted DF-CCSD. Frozen core terms are also included. When applied to a set of alkanes, the DF-CCSD analytic gradients are significantly accelerated compared to conventional CCSD for larger molecules. The efficiency of our DF-CCSD algorithm arises from the acceleration of several different terms, which are designated as the "gradient terms": computation of particle density matrices (PDMs), generalized Fock-matrix (GFM), solution of the Z-vector equation, formation of the relaxed PDMs and GFM, back-transformation of PDMs and GFM to the atomic orbital (AO) basis, and evaluation of gradients in the AO basis. For the largest member of the alkane set (C10H22), the computational times for the gradient terms (with the cc-pVTZ basis set) are 2582.6 (CCSD) and 310.7 (DF-CCSD) min, respectively, a speedup of more than 8-fold. For gradient-related terms, the DF approach avoids the usage of four-index electron repulsion integrals. Based on our previous study [U. Bozkaya, J. Chem. Phys. 141, 124108 (2014)], our formalism completely avoids construction or storage of the 4-index two-particle density matrix (TPDM), using instead 2- and 3-index TPDMs. The DF approach introduces negligible errors for equilibrium bond lengths and harmonic vibrational frequencies.
Amendola, Vincenzo
2016-01-21
The integration of silver and gold nanoparticles with graphene is frequently sought for the realization of hybrid materials with superior optical, photoelectric and photocatalytic performances. A crucial aspect for these applications is how the surface plasmon resonance of metal nanoparticles is modified after assembly with graphene. Here, we used the discrete dipole approximation method to study the surface plasmon resonance of silver and gold nanoparticles in the proximity of a graphene flake or embedded in graphene structures. Surface plasmon resonance modifications were investigated for various shapes of metal nanoparticles and for different morphologies of the nanoparticle-graphene nanohybrids, in a step-by-step approach. Calculations show that the surface plasmon resonance of Ag nanoparticles is quenched in nanohybrids, whereas either surface plasmon quenching or enhancement can be obtained with Au nanoparticles, depending on the configuration adopted. However, graphene effects on the surface plasmon resonance are rapidly lost already at a distance of the order of 5 nm. These results provide useful indications for characterization and monitoring the synthesis of hybrid nanostructures, as well as for the development of hybrid metal nanoparticle/graphene nanomaterials with desired optical properties.
Filobello-Nino, Uriel; Vazquez-Leal, Hector; Cervantes-Perez, Juan; Benhammouda, Brahim; Perez-Sesma, Agustin; Hernandez-Martinez, Luis; Jimenez-Fernandez, Victor Manuel; Herrera-May, Agustin Leobardo; Pereyra-Diaz, Domitilo; Marin-Hernandez, Antonio; Huerta Chua, Jesus
2014-01-01
This article proposes the Laplace Transform Homotopy Perturbation Method (LT-HPM) to find an approximate solution for the problem of an axisymmetric Newtonian fluid squeezed between two large parallel plates. A comparison of figures for the approximate and exact solutions shows that the proposed solutions, besides being handy, are highly accurate, and that LT-HPM is therefore extremely efficient.
NASA Astrophysics Data System (ADS)
Betcke, Marta M.; Lionheart, William R. B.
2013-11-01
The mechanical motion of the gantry in conventional cone beam CT scanners restricts the speed of data acquisition in applications with near real time requirements. A possible resolution of this problem is to replace the moving source detector assembly with static parts that are electronically activated. An example of such a system is the Rapiscan Systems RTT80 real time tomography scanner, with a static ring of sources and axially offset static cylinder of detectors. A consequence of such a design is asymmetrical axial truncation of the cone beam projections resulting, in the sense of integral geometry, in severely incomplete data. In particular we collect data only in a fraction of the Tam-Danielsson window, hence the standard cone beam reconstruction techniques do not apply. In this work we propose a family of multi-sheet surface rebinning methods for reconstruction from such truncated projections. The proposed methods combine analytical and numerical ideas utilizing linearity of the ray transform to reconstruct data on multi-sheet surfaces, from which the volumetric image is obtained through deconvolution. In this first paper in the series, we discuss the rebinning to multi-sheet surfaces. In particular we concentrate on the underlying transforms on multi-sheet surfaces and their approximation with data collected by offset multi-source scanning geometries like the RTT. The optimal multi-sheet surface and the corresponding rebinning function are found as a solution of a variational problem. In the case of the quadratic objective, the variational problem for the optimal rebinning pair can be solved by a globally convergent iteration. Examples of optimal rebinning pairs are computed for different trajectories. We formulate the axial deconvolution problem for the recovery of the volumetric image from the reconstructions on multi-sheet surfaces. Efficient and stable solution of the deconvolution problem is the subject of the second paper in this series (Betcke and
NASA Astrophysics Data System (ADS)
Yin, George; Wang, Le Yi; Zhang, Hongwei
2014-12-01
Stochastic approximation methods have found extensive and diversified applications. Recent emergence of networked systems and cyber-physical systems has generated renewed interest in advancing stochastic approximation into a general framework to support algorithm development for information processing and decisions in such systems. This paper presents a survey on some recent developments in stochastic approximation methods and their applications. Using connected vehicles in platoon formation and coordination as a platform, we highlight some traditional and new methodologies of stochastic approximation algorithms and explain how they can be used to capture essential features in networked systems. Distinct features of networked systems with randomly switching topologies, dynamically evolving parameters, and unknown delays are presented, and control strategies are provided.
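The classical root of the methods surveyed above is the Robbins-Monro iteration, which seeks a zero of a function observable only through noisy evaluations. A minimal sketch (not any of the networked-system algorithms of the survey), using the standard decreasing step sizes a_n = gain/(n+1):

```python
import random

def robbins_monro(noisy_h, theta, steps=20000, gain=1.0):
    """Robbins-Monro stochastic approximation: seek theta* with h(theta*) = 0
    using only noisy evaluations noisy_h(theta) = h(theta) + noise,
    with step sizes a_n = gain / (n + 1) satisfying sum a_n = inf, sum a_n^2 < inf."""
    for n in range(steps):
        theta -= gain / (n + 1) * noisy_h(theta)
    return theta
```

The decreasing step sizes average out the noise while still allowing the iterate to travel arbitrarily far; the survey's networked-system variants layer switching topologies, delays, and time-varying parameters on top of this same recursion.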
NASA Astrophysics Data System (ADS)
Sarwar, S.; Rashidi, M. M.
2016-07-01
This paper investigates analytical approximate solutions for two-term fractional-order diffusion, wave-diffusion, and telegraph equations. The fractional derivatives are defined in the Caputo sense, with orders belonging to the intervals [0,1], (1,2), and [1,2], respectively. We extend the optimal homotopy asymptotic method (OHAM) to two-term fractional-order wave-diffusion equations. A highly accurate approximate solution is obtained in series form using this extended method and is compared with the exact solution. It is observed that OHAM is a powerful and convergent method for the solution of nonlinear fractional-order time-dependent partial differential problems. The numerical results show that the applied method is explicit, effective, and easy to use for handling more general fractional-order wave-diffusion, diffusion, and telegraph problems.
Rogers, J.; Porter, K.
2012-03-01
This paper updates previous work that describes time period-based and other approximation methods for estimating the capacity value of wind power and extends it to include solar power. The paper summarizes various methods presented in utility integrated resource plans, regional transmission organization methodologies, regional stakeholder initiatives, regulatory proceedings, and academic and industry studies. Time period-based approximation methods typically measure the contribution of a wind or solar plant at the time of system peak - sometimes over a period of months or the average of multiple years.
NASA Astrophysics Data System (ADS)
Espinoza-Ojeda, O. M.; Santoyo, E.; Andaverde, J.
2011-06-01
Approximate and rigorous solutions of seven heat transfer models were statistically examined, for the first time, to estimate stabilized formation temperatures (SFT) of geothermal and petroleum boreholes. Constant linear and cylindrical heat source models were used to describe the heat flow (either conductive or conductive/convective) involved during a borehole drilling. A comprehensive statistical assessment of the major error sources associated with the use of these models was carried out. The mathematical methods (based on approximate and rigorous solutions of heat transfer models) were thoroughly examined by using four statistical analyses: (i) the use of linear and quadratic regression models to infer the SFT; (ii) the application of statistical tests of linearity to evaluate the actual relationship between bottom-hole temperatures and time function data for each selected method; (iii) the comparative analysis of SFT estimates between the approximate and rigorous predictions of each analytical method using a β ratio parameter to evaluate the similarity of both solutions, and (iv) the evaluation of accuracy in each method using statistical tests of significance, and deviation percentages between 'true' formation temperatures and SFT estimates (predicted from approximate and rigorous solutions). The present study also enabled us to determine the sensitivity parameters that should be considered for a reliable calculation of SFT, as well as to define the main physical and mathematical constraints where the approximate and rigorous methods could provide consistent SFT estimates.
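The simplest of the time-function regressions referred to above is a Horner-type plot: bottom-hole temperature is regressed linearly against a logarithmic time function, and the intercept gives the SFT estimate. A generic sketch with synthetic data follows (the function names and numbers are illustrative, not the authors' exact models):

```python
import math

def horner_sft(circulation_time, shut_in_times, temps):
    """Estimate stabilized formation temperature (SFT) by linear regression of
    bottom-hole temperature against the Horner time function
    x = ln((t_c + dt) / dt); the intercept (dt -> infinity) is the SFT."""
    xs = [math.log((circulation_time + dt) / dt) for dt in shut_in_times]
    n = len(xs)
    mx, my = sum(xs) / n, sum(temps) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, temps)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - slope * mx  # intercept = SFT estimate, same units as temps

# synthetic data: true SFT of 120 C, 10 h circulation, shut-in times 5..40 h
tc = 10.0
dts = [5.0, 10.0, 20.0, 40.0]
obs = [120.0 - 15.0 * math.log((tc + dt) / dt) for dt in dts]
sft = horner_sft(tc, dts, obs)
```

With noiseless synthetic data the intercept recovers the assumed formation temperature exactly; the statistical assessment in the paper concerns how such estimates degrade with real, noisy temperature logs.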
NASA Technical Reports Server (NTRS)
Barnwell, R. W.; Davis, R. M.
1975-01-01
A user's manual is presented for a computer program which calculates inviscid flow about lifting configurations in the free-stream Mach-number range from zero to low supersonic. Angles of attack of the order of the configuration thickness-length ratio and less can be calculated. An approximate formulation was used which accounts for shock waves, leading-edge separation and wind-tunnel wall effects.
NASA Astrophysics Data System (ADS)
Shvartsman, Ilya A.
2007-02-01
Traditional proofs of the Pontryagin Maximum Principle (PMP) require the continuous differentiability of the dynamics with respect to the state variable on a neighborhood of the minimizing state trajectory, when arbitrary values of control variable are inserted into the dynamic equations. Sussmann has drawn attention to the fact that the PMP remains valid when the dynamics are differentiable with respect to the state variable, merely when the minimizing control is inserted into the dynamic equations. This weakening of earlier hypotheses has been referred to as the Lojasiewicz refinement. Arutyunov and Vinter showed that these extensions of early versions of the PMP can be simply proved by finite-dimensional approximations, application of a Lagrange multiplier rule in finite dimensions and passage to the limit. This paper generalizes the finite-dimensional approximation technique to a problem with state constraints, where the use of needle variations of the optimal control had not been successful. Moreover, the cost function and endpoint constraints are not assumed to be differentiable, but merely locally Lipschitz continuous. The Maximum Principle is expressed in terms of Michel-Penot subdifferential.
NASA Astrophysics Data System (ADS)
Hamal, Dipendra Bahadur; Higuchi, Masahiko; Higuchi, Katsuhiko
2015-06-01
The magnetic-field-containing relativistic tight-binding approximation (MFRTB) method [Phys. Rev. B 91, 075122 (2015), 10.1103/PhysRevB.91.075122] is the first-principles calculation method for electronic structures of materials immersed in the magnetic field. In this paper, the MFRTB method is applied to the simple cubic lattice immersed in the magnetic field. The total energy and magnetization oscillate with the inverse of the magnitude of the magnetic field, which means that the de Haas-van Alphen oscillation is revisited directly through the MFRTB method. It is shown that the conventional Lifshitz-Kosevich (LK) formula is a good approximation to the results of the MFRTB method in the experimentally available magnetic field. Furthermore, the additional oscillation peaks of the magnetization are found especially in the high magnetic field, which cannot be explained by the LK formula.
NASA Technical Reports Server (NTRS)
Jones, Alun R.
1940-01-01
This report has been prepared in response to a request for information from an aircraft company. A typical example was selected for the presentation of an approximate method of calculating the relative humidity required to prevent frosting on the inside of a plastic window in a pressure-type cabin on a high-speed airplane. The results of the study are reviewed.
Daly, Aidan C.; Holmes, Chris
2015-01-01
As cardiac cell models become increasingly complex, a correspondingly complex ‘genealogy’ of inherited parameter values has also emerged. The result has been the loss of a direct link between model parameters and experimental data, limiting both reproducibility and the ability to re-fit to new data. We examine the ability of approximate Bayesian computation (ABC) to infer parameter distributions in the seminal action potential model of Hodgkin and Huxley, for which an immediate and documented connection to experimental results exists. The ability of ABC to produce tight posteriors around the reported values for the gating rates of sodium and potassium ion channels validates the precision of this early work, while the highly variable posteriors around certain voltage dependency parameters suggests that voltage clamp experiments alone are insufficient to constrain the full model. Despite this, Hodgkin and Huxley's estimates are shown to be competitive with those produced by ABC, and the variable behaviour of posterior parametrized models under complex voltage protocols suggests that with additional data the model could be fully constrained. This work will provide the starting point for a full identifiability analysis of commonly used cardiac models, as well as a template for informative, data-driven parametrization of newly proposed models. PMID:27019736
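The rejection step at the heart of ABC can be sketched as follows (a toy one-parameter model rather than the Hodgkin-Huxley equations; the prior range, noise level, and tolerance are hypothetical):

```python
import random

def abc_rejection(observed, simulate, prior_sample, distance, eps, n_draws=20000):
    """Keep prior draws whose simulated data fall within eps of the observation."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if distance(simulate(theta), observed) < eps:
            accepted.append(theta)
    return accepted  # an approximate posterior sample

random.seed(1)
observed = 0.71  # a single noisy summary statistic of the data
posterior = abc_rejection(
    observed,
    simulate=lambda th: th + random.gauss(0.0, 0.05),  # toy forward model
    prior_sample=lambda: random.uniform(0.0, 2.0),     # flat prior
    distance=lambda a, b: abs(a - b),
    eps=0.05,
)
post_mean = sum(posterior) / len(posterior)
```

The accepted draws approximate the posterior without ever evaluating a likelihood, which is why ABC suits models like action potential simulators where the likelihood is intractable.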
NASA Astrophysics Data System (ADS)
Kopka, Piotr; Wawrzynczak, Anna; Borysiewicz, Mieczyslaw
2016-11-01
In this paper the Bayesian methodology, known as Approximate Bayesian Computation (ABC), is applied to the problem of the atmospheric contamination source identification. The algorithm input data are on-line arriving concentrations of the released substance registered by the distributed sensors network. This paper presents the Sequential ABC algorithm in detail and tests its efficiency in estimation of probabilistic distributions of atmospheric release parameters of a mobile contamination source. The developed algorithms are tested using the data from Over-Land Atmospheric Diffusion (OLAD) field tracer experiment. The paper demonstrates estimation of seven parameters characterizing the contamination source, i.e.: contamination source starting position (x,y), the direction of the motion of the source (d), its velocity (v), release rate (q), start time of release (ts) and its duration (td). The online-arriving new concentrations dynamically update the probability distributions of search parameters. The atmospheric dispersion Second-order Closure Integrated PUFF (SCIPUFF) Model is used as the forward model to predict the concentrations at the sensors locations.
de Stadler, M; Chand, K
2007-11-12
Gas centrifuges exhibit very complex flows. Within the centrifuge there is a rarefied region, a transition region, and a region with an extreme density gradient. The flow moves at hypersonic speeds and shock waves are present. However, the flow is subsonic in the axisymmetric plane. The analysis may be simplified by treating the flow as a perturbation of wheel flow. Wheel flow implies that the fluid is moving as a solid body. With the very large pressure gradient, the majority of the fluid is located very close to the rotor wall and moves at an azimuthal velocity proportional to its distance from the rotor wall; there is no slipping in the azimuthal plane. The fluid can be modeled as incompressible and subsonic in the axisymmetric plane. By treating the centrifuge as long, end effects can be appropriately modeled without performing a detailed boundary layer analysis. Onsager's pancake approximation is used to construct a simulation to model fluid flow in a gas centrifuge. The governing 6th order partial differential equation is broken down into an equivalent coupled system of three equations and then solved numerically. In addition to a discussion on the baseline solution, known problems and future work possibilities are presented.
CranSLIK v1.0: stochastic prediction of oil spill transport and fate using approximation methods
NASA Astrophysics Data System (ADS)
Snow, B. J.; Moulitsas, I.; Kolios, A. J.; De Dominicis, M.
2013-12-01
This paper investigates the development of a model, called CranSLIK, to predict the transport and transformations of a point mass oil spill via a stochastic approach. Initially the various effects that determine the spill's destination are considered and key parameters are chosen which are expected to dominate the displacement. The variables considered are: wind velocity, surface water velocity, spill size, and spill age. For a point mass oil spill, it is found that the centre of mass can be determined by the wind and current data only, and the spill size and age can then be used to reconstruct the surface of the spill. These variables are sampled and simulations are performed using an open-source Lagrangian approach-based code, MEDSLIK II. Regression modelling is applied to create two sets of polynomials: one for the centre of mass, and one for the spill size. A minimum of approximately 80% of the oil is captured for the Algeria scenario. Finally, Monte Carlo simulation is implemented to allow for consideration of the most likely destination for the oil spill, when the distributions for the oceanographic conditions are known.
CranSLIK v1.0: stochastic prediction of oil spill transport and fate using approximation methods
NASA Astrophysics Data System (ADS)
Snow, B. J.; Moulitsas, I.; Kolios, A. J.; De Dominicis, M.
2014-07-01
This paper investigates the development of a model, called CranSLIK, to predict the transport and transformations of a point mass oil spill via a stochastic approach. Initially the various effects on destination are considered and key parameters are chosen which are expected to dominate the displacement. The variables considered are: wind velocity, surface water velocity, spill size, and spill age. For a point mass oil spill, it is found that the centre of mass can be determined by the wind and current data only, and the spill size and age can then be used to reconstruct the surface of the spill. These variables are sampled and simulations are performed using an open-source Lagrangian approach-based code, MEDSLIK II. Regression modelling is applied to create two sets of polynomials: one for the centre of mass, and one for the spill size. Simulations performed for a real oil spill case show that a minimum of approximately 80% of the oil is captured by CranSLIK. Finally, Monte Carlo simulation is implemented to allow for consideration of the most likely destination for the oil spill, when the distributions for the oceanographic conditions are known.
Daly, Aidan C; Gavaghan, David J; Holmes, Chris; Cooper, Jonathan
2015-12-01
As cardiac cell models become increasingly complex, a correspondingly complex 'genealogy' of inherited parameter values has also emerged. The result has been the loss of a direct link between model parameters and experimental data, limiting both reproducibility and the ability to re-fit to new data. We examine the ability of approximate Bayesian computation (ABC) to infer parameter distributions in the seminal action potential model of Hodgkin and Huxley, for which an immediate and documented connection to experimental results exists. The ability of ABC to produce tight posteriors around the reported values for the gating rates of sodium and potassium ion channels validates the precision of this early work, while the highly variable posteriors around certain voltage dependency parameters suggests that voltage clamp experiments alone are insufficient to constrain the full model. Despite this, Hodgkin and Huxley's estimates are shown to be competitive with those produced by ABC, and the variable behaviour of posterior parametrized models under complex voltage protocols suggests that with additional data the model could be fully constrained. This work will provide the starting point for a full identifiability analysis of commonly used cardiac models, as well as a template for informative, data-driven parametrization of newly proposed models. PMID:27019736
NASA Astrophysics Data System (ADS)
Shargatov, V. A.; Gubin, S. A.; Okunev, D. Yu
2016-09-01
We develop a method for calculating the changes in composition of explosion products in the case where complete chemical equilibrium is absent but the bimolecular reactions are in quasi-equilibrium, with the exception of bimolecular reactions involving one of the components of the mixture. We investigate the possibility of using this "quasi-equilibrium" method for mixtures of hydrocarbons and oxygen. The method is based on the assumption that partial chemical equilibrium exists in the explosion products. Without significant loss of accuracy, the solution of stiff differential equations for the detailed kinetic mechanism can be replaced by one or two differential equations and a system of algebraic equations. This method is always consistent with the detailed mechanism and can be used separately or in conjunction with the solution of a stiff system for chemically non-equilibrium mixtures, replacing it when the bimolecular reactions are near equilibrium.
Approximating Integrals Using Probability
ERIC Educational Resources Information Center
Maruszewski, Richard F., Jr.; Caudle, Kyle A.
2005-01-01
This paper is part of a discussion on Monte Carlo methods, which outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
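The technique described can be sketched in a few lines (Python rather than the paper's Visual Basic; the integrand is an arbitrary example):

```python
import random

def mc_integral(f, a, b, n=100000):
    """Approximate the definite integral of f over [a, b] as
    (b - a) * E[f(U)], with U uniform on [a, b]."""
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

random.seed(0)
estimate = mc_integral(lambda x: x * x, 0.0, 1.0)  # exact value is 1/3
```

The error shrinks like 1/sqrt(n) regardless of dimension, which is the probabilistic connection the paper exploits.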
NASA Astrophysics Data System (ADS)
Ahlkrona, Josefin; Lötstedt, Per; Kirchner, Nina; Zwinger, Thomas
2016-03-01
We propose and implement a new method, called the Ice Sheet Coupled Approximation Levels (ISCAL) method, for simulation of ice sheet flow in large domains during long time-intervals. The method couples the full Stokes (FS) equations with the Shallow Ice Approximation (SIA). The part of the domain where SIA is applied is determined automatically and dynamically based on estimates of the modeling error. For a three dimensional model problem, ISCAL computes the solution substantially faster with a low reduction in accuracy compared to a monolithic FS. Furthermore, ISCAL is shown to be able to detect rapid dynamic changes in the flow. Three different error estimations are applied and compared. Finally, ISCAL is applied to the Greenland Ice Sheet on a quasi-uniform grid, proving ISCAL to be a potential valuable tool for the ice sheet modeling community.
NASA Technical Reports Server (NTRS)
Jordon, D. E.; Patterson, W.; Sandlin, D. R.
1985-01-01
The XV-15 Tilt Rotor Research Aircraft download phenomenon was analyzed. This phenomenon is a direct result of the two rotor wakes impinging on the wing upper surface when the aircraft is in the hover configuration. For this study the analysis proceeded along two lines. First was a method whereby results from actual hover tests of the XV-15 aircraft were combined with drag coefficient results from wind tunnel tests of a wing that was representative of the aircraft wing. Second, an analytical method was used that modeled the airflow caused by the two rotors. Formulas were developed in such a way that a computer program could be used to calculate the axial velocities. These velocities were then used in conjunction with the aforementioned wind tunnel drag coefficient results to produce download values. An attempt was made to validate the analytical results by modeling a model rotor system for which direct download values were determined.
Karagiannis, Georgios; Lin, Guang
2014-02-15
Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs if the evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) the Bayesian model average or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; while the latter, additionally, provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method and make comparisons with others on elliptic stochastic partial differential equations with 1, 14 and 40 random dimensions.
Karagiannis, Georgios; Lin, Guang
2014-02-15
Generalized polynomial chaos (gPC) expansions allow us to represent the solution of a stochastic system using a series of polynomial chaos basis functions. The number of gPC terms increases dramatically as the dimension of the random input variables increases. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs when the corresponding deterministic solver is computationally expensive, evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solutions, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points, via (1) the Bayesian model average (BMA) or (2) the median probability model, and their construction as spatial functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; while the latter provides a sparse representation of the stochastic solutions by evaluating the expansion on a subset of dominating gPC bases. Moreover, the proposed methods quantify the importance of the gPC bases in the probabilistic sense through inclusion probabilities. We design a Markov chain Monte Carlo (MCMC) sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed methods are suitable for, but not restricted to, problems whose stochastic solutions are sparse in the stochastic space with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the accuracy and performance of the proposed methods and make comparisons with other approaches on solving elliptic SPDEs with 1-, 14- and 40-random dimensions.
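At its simplest, the fitting problem addressed above, recovering gPC coefficients from limited samples, can be sketched as a least-squares fit on Hermite bases (a 1-random-dimension toy without the Bayesian model-uncertainty machinery the papers propose; the degree and sample count are arbitrary):

```python
import numpy as np

def hermite_design(xi, degree):
    """Design matrix of probabilists' Hermite polynomials He_0..He_degree
    at the sample points xi, built from He_{k+1} = x*He_k - k*He_{k-1}."""
    cols = [np.ones_like(xi), xi.copy()]
    for k in range(1, degree):
        cols.append(xi * cols[-1] - k * cols[-2])
    return np.column_stack(cols[: degree + 1])

rng = np.random.default_rng(0)
xi = rng.standard_normal(50)   # samples of the random input
u = xi ** 2                    # "stochastic solution": xi^2 = He_2 + He_0
A = hermite_design(xi, degree=4)
coeffs, *_ = np.linalg.lstsq(A, u, rcond=None)
```

Here the true expansion is sparse (only He_0 and He_2 carry weight), which mirrors the sparsity assumption that makes the Bayesian model-selection machinery effective when samples are scarce.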
NASA Astrophysics Data System (ADS)
Lorin, E.; Yang, X.; Antoine, X.
2016-06-01
The paper is devoted to develop efficient domain decomposition methods for the linear Schrödinger equation beyond the semiclassical regime, which does not carry a small enough rescaled Planck constant for asymptotic methods (e.g. geometric optics) to produce a good accuracy, but which is too computationally expensive if direct methods (e.g. finite difference) are applied. This belongs to the category of computing middle-frequency wave propagation, where neither asymptotic nor direct methods can be directly used with both efficiency and accuracy. Motivated by recent works of the authors on absorbing boundary conditions (Antoine et al. (2014) [13] and Yang and Zhang (2014) [43]), we introduce Semiclassical Schwarz Waveform Relaxation methods (SSWR), which are seamless integrations of semiclassical approximation to Schwarz Waveform Relaxation methods. Two versions are proposed respectively based on Herman-Kluk propagation and geometric optics, and we prove the convergence and provide numerical evidence of efficiency and accuracy of these methods.
NASA Astrophysics Data System (ADS)
Fernández-Ramos, Antonio; Martínez-Núñez, Emilio; Smedarchina, Zorka; Vázquez, Saulo A.
2001-06-01
Rate constants and kinetic isotope effects are calculated for the CH3+ H2 → CH4+ H reaction by two theoretical methods: variational transition state theory with semiclassical corrections for tunneling and an approximate (linearized) semiclassical initial-value representation method, recently proposed by H. Wang, X. Sun, W.H. Miller [J. Chem. Phys. 108 (1998) 9726]. The theoretical results agree well with each other and with the experimental data in the temperature range 500-1500 K. For high temperatures, the differences between the two theoretical rate constants arise from the more accurate treatment of dividing surface recrossings by Miller's method.
NASA Astrophysics Data System (ADS)
Hashemi, M. S.; Baleanu, D.
2016-07-01
We propose a simple and accurate numerical scheme for solving the time fractional telegraph (TFT) equation with a Caputo-type fractional derivative. A fictitious coordinate ϑ is imposed onto the problem in order to transform the dependent variable u(x, t) into a new variable with an extra dimension. In the new space with the added fictitious dimension, a combination of the method of lines and a group preserving scheme (GPS) is proposed to find the approximate solutions. This method preserves the geometric structure of the problem. The power and accuracy of this method have been illustrated through some examples of the TFT equation.
Split-step non-paraxial beam propagation method
NASA Astrophysics Data System (ADS)
Sharma, Anurag; Agrawal, Arti
2004-06-01
A new method for solving the wave equation is presented, which, being non-paraxial, is applicable to wide-angle beam propagation. It shows very good stability characteristics in the sense that relatively larger step-sizes can be used. It is both faster and easier to implement. The method is based on symmetrized splitting of operators, one representing the propagation through a uniform medium and the other, the effect of the refractive index variation of the guiding structure. The method can be implemented in the FD-BPM, FFT-BPM and collocation schemes. The method is stable for a step size of 1 micron in a graded index waveguide with accuracy better than 0.001 in the field overlap integral for 1000-micron propagation. At a tilt angle of 50°, the method shows an error less than 0.001 with 0.25-micron step. In the benchmark test, the present method shows a relative power of ~0.96 in a 100 micron long waveguide with 1000 propagation steps and 800 sample points, while FD-BPM with Pade(2,2) approximation gives a relative power of 0.95 with 1000 sample points and 2048 propagation steps. Thus, the method requires fewer points, is easier to implement, faster, more accurate and highly stable.
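The symmetrized splitting described above (half-step propagation through a uniform medium in Fourier space, full-step index-variation phase, half-step again) can be sketched in an FFT-BPM style. For brevity this sketch uses the paraxial uniform-medium propagator, not the paper's non-paraxial operator, and all geometry parameters are illustrative:

```python
import numpy as np

def split_step(field, n_profile, n0, k0, dz, steps):
    """Symmetrized split-step propagation: exp(A dz/2) exp(B dz) exp(A dz/2),
    with A = uniform-medium diffraction and B = index-variation phase."""
    N = field.size
    kx = 2.0 * np.pi * np.fft.fftfreq(N, d=1.0)
    half = np.exp(-1j * kx ** 2 / (2.0 * k0 * n0) * dz / 2.0)  # paraxial half step
    lens = np.exp(1j * k0 * (n_profile - n0) * dz)             # index phase step
    for _ in range(steps):
        field = np.fft.ifft(half * np.fft.fft(field))
        field = lens * field
        field = np.fft.ifft(half * np.fft.fft(field))
    return field

N = 256
x = np.arange(N) - N / 2
inp = np.exp(-(x / 10.0) ** 2)                    # Gaussian input beam
n_prof = 1.45 + 0.01 * np.exp(-(x / 20.0) ** 2)   # graded-index guide
out = split_step(inp, n_prof, n0=1.45, k0=2 * np.pi / 1.55, dz=1.0, steps=100)
power_ratio = np.sum(np.abs(out) ** 2) / np.sum(np.abs(inp) ** 2)
```

Both sub-steps are pure phase factors, so the scheme conserves power exactly, one reason split-step methods exhibit the good stability the abstract reports.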
NASA Astrophysics Data System (ADS)
Lisienko, V. G.; Malikov, G. K.; Titaev, A. A.
2014-12-01
The paper presents a new simple-to-use expression to calculate the total emissivity of a mixture of gases CO2 and H2O used for modeling heat transfer by radiation in industrial furnaces. The accuracy of this expression is evaluated using the exponential wide band model. It is found that the time taken to calculate the total emissivity in this expression is 1.5 times less than in other approximation methods.
NASA Astrophysics Data System (ADS)
Krause, Katharina; Klopper, Wim
2015-03-01
A generalization of the approximated coupled-cluster singles and doubles method and the algebraic diagrammatic construction scheme up to second order to two-component spinors obtained from a relativistic Hartree-Fock calculation is reported. Computational results for zero-field splittings of atoms and monoatomic cations, triplet lifetimes of two organic molecules, and the spin-forbidden part of the UV/Vis absorption spectrum of tris(ethylenediamine)cobalt(III) are presented.
Krause, Katharina; Klopper, Wim
2015-03-14
A generalization of the approximated coupled-cluster singles and doubles method and the algebraic diagrammatic construction scheme up to second order to two-component spinors obtained from a relativistic Hartree–Fock calculation is reported. Computational results for zero-field splittings of atoms and monoatomic cations, triplet lifetimes of two organic molecules, and the spin-forbidden part of the UV/Vis absorption spectrum of tris(ethylenediamine)cobalt(III) are presented.
Alcock, J. (Dept. of Environmental Science); Wagner, M.E. (Geology); Srogi, L.A. (Dept. of Geology and Astronomy)
1993-03-01
Post-Taconian transcurrent faulting in the Appalachian Piedmont presents a significant problem to workers attempting to reconstruct the Early Paleozoic tectonic history. One solution to the problem is to identify blocks that lie between zones of transcurrent faulting and that retain the Early Paleozoic arrangement of litho-tectonic units. The authors propose that a comparison of metamorphic histories of different units can be used to recognize blocks of this type. The Wilmington Complex (WC) arc terrane, the pre-Taconian Laurentian margin rocks (LM) exposed in basement-cored massifs, and the Wissahickon Group metapelites (WS) that lie between them are three litho-tectonic units in the PA-DE Piedmont that comprise a block assembled in the Early Paleozoic. Evidence supporting this interpretation includes: (1) Metamorphic and lithologic differences across the WC-WS contact and detailed geologic mapping of the contact that suggest thrusting of the WC onto the WS; (2) A metamorphic gradient in the WS with highest grade, including spinel-cordierite migmatites, adjacent to the WC indicating that peak metamorphism of the WS resulted from heating by the WC; (3) A metamorphic discontinuity at the WS-LM contact, evidence for emplacement of the WS onto the LM after WS peak metamorphism; (4) A correlation of mineral assemblage in the Cockeysville Marble of the LM with distance from the WS indicating that peak metamorphism of the LM occurred after emplacement of the WS; and (5) Early Paleozoic lower intercept zircon ages for the LM that are interpreted to date Taconian regional metamorphism. Analysis of metamorphism and its timing relative to thrusting suggest that the WS was associated with the WC before the WS was emplaced onto the LM during the Taconian. It follows that these units form a block that has not been significantly disrupted by later transcurrent shear.
Zhan, Choujun; Situ, Wuchao; Yeung, Lam Fat; Tsang, Peter Wai-Ming; Yang, Genke
2014-01-01
The inverse problem of identifying unknown parameters of dynamical biological systems of known structure, modelled by ordinary differential equations or delay differential equations, from experimental data is treated in this paper. A two-stage approach is adopted: first, combining spline theory and Nonlinear Programming (NLP), the parameter estimation problem is formulated as an optimization problem with only algebraic constraints; then, a new differential evolution (DE) algorithm is proposed to find a feasible solution. The approach is designed to handle problems of realistic size with noisy observation data. Three cases are studied to evaluate the performance of the proposed algorithm: two are based on benchmark models with a priori determined structure and parameters; the other is a particular biological system with unknown model structure. In the last case, only a set of observation data is available, and a nominal model is adopted for the identification. All the test systems were successfully identified using a reasonable amount of experimental data within an acceptable computation time. Experimental evaluation reveals that the proposed method is capable of fast estimation of the unknown parameters with good precision.
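The second stage, a stochastic search for parameters minimizing the fitting error, can be sketched with a basic DE/rand/1/bin loop on a toy least-squares objective (a textbook scheme, not the authors' modified algorithm; the two-parameter exponential model is hypothetical):

```python
import math
import random

def diff_evolution(obj, bounds, pop_size=20, F=0.7, CR=0.9, gens=200, seed=0):
    """Minimize obj over box bounds with a basic DE/rand/1/bin scheme."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [obj(p) for p in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # at least one mutated component
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))
                else:
                    trial.append(pop[i][j])
            c_trial = obj(trial)
            if c_trial <= cost[i]:       # greedy selection
                pop[i], cost[i] = trial, c_trial
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]

# toy inverse problem: recover (k1, k2) from observations of k1 * exp(-k2 * t)
ts = [0.0, 0.5, 1.0, 2.0, 4.0]
data = [2.0 * math.exp(-0.8 * t) for t in ts]
def sse(p):
    return sum((p[0] * math.exp(-p[1] * t) - d) ** 2 for t, d in zip(ts, data))
best, best_cost = diff_evolution(sse, [(0.0, 5.0), (0.0, 5.0)])
```

DE needs only objective evaluations, no gradients, which is why it pairs well with the algebraically constrained NLP formulation of the first stage.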
Roze, Denis; Rousset, François
2003-12-01
Population structure affects the relative influence of selection and drift on the change in allele frequencies. Several models have been proposed recently, using diffusion approximations to calculate fixation probabilities, fixation times, and equilibrium properties of subdivided populations. We propose here a simple method to construct diffusion approximations in structured populations; it relies on general expressions for the expectation and variance in allele frequency change over one generation, in terms of partial derivatives of a "fitness function" and probabilities of genetic identity evaluated in a neutral model. In the limit of a very large number of demes, these probabilities can be expressed as functions of average allele frequencies in the metapopulation, provided that coalescence occurs on two different timescales, which is the case in the island model. We then use the method to derive expressions for the probability of fixation of new mutations, as a function of their dominance coefficient, the rate of partial selfing, and the rate of deme extinction. We obtain more precise approximations than those derived by recent work, in particular (but not only) when deme sizes are small. Comparisons with simulations show that the method gives good results as long as migration is stronger than selection.
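As a point of reference, the classical single-population diffusion result to which such expressions reduce is Kimura's fixation probability (a standard textbook formula, not the structured-population expressions derived in the paper; the numbers are illustrative):

```python
import math

def fixation_probability(p, s, Ne):
    """Kimura's diffusion approximation for the fixation probability of an
    additive mutation at frequency p with selection coefficient s and
    effective population size Ne."""
    if s == 0.0:
        return p  # neutral case: fixation probability equals initial frequency
    return (1.0 - math.exp(-4.0 * Ne * s * p)) / (1.0 - math.exp(-4.0 * Ne * s))

Ne = 1000
p0 = 1.0 / (2 * Ne)  # a single new mutant in a diploid population
u_neutral = fixation_probability(p0, 0.0, Ne)
u_selected = fixation_probability(p0, 0.01, Ne)  # roughly 2s for small s
```

The paper's contribution is to replace the simple drift and selection terms of this diffusion with expectations and variances that account for deme structure, partial selfing, and extinction.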
Chatterjee, Koushik; Pastorczak, Ewa; Jawulski, Konrad; Pernal, Katarzyna
2016-06-28
A perfect-pairing generalized valence bond (GVB) approximation is known to be one of the simplest approximations that allow one to capture the essence of static correlation in molecular systems. In spite of its attractive feature of being relatively computationally efficient, this approximation misses a large portion of dynamic correlation and does not offer sufficient accuracy to be generally useful for studying the electronic structure of molecules. We propose to correct the GVB model and alleviate some of its deficiencies by amending it with the correlation energy correction derived from the recently formulated extended random phase approximation (ERPA). Using examples of systems with diverse electronic structure, we show that the resulting ERPA-GVB method greatly improves upon the GVB model. ERPA-GVB recovers most of the electron correlation, and it yields energy barrier heights of excellent accuracy. Thanks to a balanced treatment of static and dynamic correlation, ERPA-GVB stays reliable when one moves from systems dominated by dynamic electron correlation to those for which static correlation comes into play. PMID:27369501
NASA Astrophysics Data System (ADS)
Moraes Rêgo, Patrícia Helena; Viana da Fonseca Neto, João; Ferreira, Ernesto M.
2015-08-01
The main focus of this article is a proposal to solve, via UDUT factorisation, the convergence and numerical stability problems related to covariance-matrix ill-conditioning in the recursive least squares (RLS) approach for online approximation of the algebraic Riccati equation (ARE) solution. The ARE is associated with the discrete linear quadratic regulator (DLQR) problem formulated in the actor-critic reinforcement learning and approximate dynamic programming context. The parameterisations of the Bellman equation, utility function and dynamic system, together with the algebra of the Kronecker product, assemble a framework for the solution of the DLQR problem. The condition number and the positivity parameter of the covariance matrix are associated with statistical metrics for evaluating the approximation performance of the ARE solution via RLS-based estimators. The performance of RLS approximators is also evaluated in terms of consistency and polarisation when associated with reinforcement learning methods. The methodology contemplates realisations of online designs for DLQR controllers that are evaluated in a multivariable dynamic system model.
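As a point of reference for what the RLS-based estimators approximate online, the ARE solution and DLQR gain can be computed offline by fixed-point iteration of the discrete Riccati equation. This is a standard textbook sketch, unrelated to the paper's UDUT-factorised implementation:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Iterate P <- Q + A'P(A - BK) with K = (R + B'PB)^{-1} B'PA
    until (approximate) convergence; returns the gain K and ARE solution P."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K, P

# Scalar example A = B = Q = R = 1: the ARE reduces to P^2 - P - 1 = 0,
# so P is the golden ratio and K = P - 1.
one = np.array([[1.0]])
K, P = dlqr(one, one, one, one)
```

An online RLS scheme such as the one in the paper estimates the same quantities from data; the closed-form iteration above is a convenient ground truth for testing it.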
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches.
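The subspace-via-sampling ingredient can be illustrated with a Nyström-style approximate kernel feature map followed by online winner-take-all prototype updates. This is a generic sketch of the idea, not the authors' AKCL algorithm; all parameter names and settings are illustrative.

```python
import numpy as np

def nystrom_features(X, m, gamma, rng):
    """Approximate RBF-kernel feature map built from m sampled landmarks."""
    idx = rng.choice(len(X), size=m, replace=False)
    L = X[idx]
    d2 = lambda A, B: ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    Kmm = np.exp(-gamma * d2(L, L))   # landmark-landmark kernel block
    Knm = np.exp(-gamma * d2(X, L))   # data-landmark kernel block
    w, V = np.linalg.eigh(Kmm)
    w = np.clip(w, 1e-10, None)       # guard tiny/negative eigenvalues
    return Knm @ V / np.sqrt(w)       # rows: approximate kernel features

def competitive_learning(F, k, lr=0.1, epochs=20, rng=None):
    """Online competitive learning (winner-take-all) in the feature space."""
    rng = rng or np.random.default_rng(0)
    proto = F[rng.choice(len(F), size=k, replace=False)].copy()
    for _ in range(epochs):
        for f in F[rng.permutation(len(F))]:
            j = int(np.argmin(((proto - f) ** 2).sum(1)))  # winner
            proto[j] += lr * (f - proto[j])                # move toward sample
    return np.argmin(((F[:, None, :] - proto[None, :, :]) ** 2).sum(-1), axis=1)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
F = nystrom_features(X, m=10, gamma=1.0, rng=rng)
labels = competitive_learning(F, k=2)
```

With m equal to the full data size the Nyström map reproduces the kernel matrix exactly (F F^T = K); sampling fewer landmarks trades accuracy for the memory and time savings the paper targets.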
NASA Astrophysics Data System (ADS)
Mohammadpour, Mozhdeh; Jamshidi, Zahra
2016-05-01
The prospect of challenges in reproducing and interpreting resonance Raman properties of molecules interacting with metal clusters has prompted the present research initiative. Resonance Raman spectra based on the time-dependent gradient approximation are examined in the framework of density functional theory using different methods for representing the exchange-correlation functional. In this work the performance of different XC functionals in the prediction of ground state properties, excited state energies, and gradients are compared and discussed. Resonance Raman properties based on the time-dependent gradient approximation for the strongly low-lying charge transfer states are calculated and compared for different methods. We draw the following conclusions: (1) for calculating the binding energy and ground state geometry, dispersion-corrected functionals give the best performance in comparison to ab initio calculations, (2) GGA and meta-GGA functionals give good accuracy in calculating vibrational frequencies, (3) excited state energies determined by hybrid and range-separated hybrid functionals are in good agreement with EOM-CCSD calculations, and (4) in calculating resonance Raman properties GGA functionals give good and reasonable performance in comparison to the experiment; however, calculating the excited state gradient by using the hybrid functional on the Hessian of GGA improves the results of the hybrid functional significantly. Finally, we conclude that the agreement of charge-transfer surface enhanced resonance Raman spectra with experiment is improved significantly by using the excited state gradient approximation.
NASA Technical Reports Server (NTRS)
Anderson, O. L.; Briley, W. R.; Mcdonald, H.
1978-01-01
An approximate analysis is presented for calculating three-dimensional, low Mach number, laminar viscous flows in curved passages with large secondary flows and corner boundary layers. The analysis is based on the decomposition of the overall velocity field into inviscid and viscous components with the overall velocity being determined from superposition. An incompressible vorticity transport equation is used to estimate inviscid secondary flow velocities to be used as corrections to the potential flow velocity field. A parabolized streamwise momentum equation coupled to an adiabatic energy equation and global continuity equation is used to obtain an approximate viscous correction to the pressure and longitudinal velocity fields. A collateral flow assumption is invoked to estimate the viscous correction to the transverse velocity fields. The approximate analysis is solved numerically using an implicit ADI solution for the viscous pressure and velocity fields. An iterative ADI procedure is used to solve for the inviscid secondary vorticity and velocity fields. This method was applied to computing the flow within a turbine vane passage with inlet flow conditions of M = 0.1 and M = 0.25, Re = 1000 and adiabatic walls, and for a constant radius curved rectangular duct with R/D = 12 and 14 and with inlet flow conditions of M = 0.1, Re = 1000, and adiabatic walls.
Thorn, Graeme J; King, John R
2016-01-01
The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we looked to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches, namely the approximate Bayesian computation via an existing sequential Monte Carlo (ABC-SMC) method (to compute credible intervals for the parameters), and the profile likelihood estimation (PLE) (to improve the calculation of confidence intervals for the same parameters), the parameters in both cases being derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) have the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well-determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the numbers of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. PMID:26561777
NASA Technical Reports Server (NTRS)
Hunter, Craig A.
1995-01-01
An analytical/numerical method has been developed to predict the static thrust performance of non-axisymmetric, two-dimensional convergent-divergent exhaust nozzles. Thermodynamic nozzle performance effects due to over- and underexpansion are modeled using one-dimensional compressible flow theory. Boundary layer development and skin friction losses are calculated using an approximate integral momentum method based on the classic Kármán-Pohlhausen solution. Angularity effects are included with these two models in a computational Nozzle Performance Analysis Code, NPAC. In four different case studies, results from NPAC are compared to experimental data obtained from subscale nozzle testing to demonstrate the capabilities and limitations of the NPAC method. In several cases, the NPAC prediction matched experimental gross thrust efficiency data to within 0.1 percent at the design NPR, and to within 0.5 percent at off-design conditions.
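The one-dimensional compressible-flow ingredients are the standard isentropic relations for a perfect gas; a sketch with γ = 1.4 by default (these are textbook relations, not the NPAC code itself):

```python
import math

def exit_mach(npr, gamma=1.4):
    """Fully expanded exit Mach number from the nozzle pressure ratio p0/p,
    inverting p0/p = (1 + (g-1)/2 * M^2)^(g/(g-1))."""
    return math.sqrt(2.0 / (gamma - 1) * (npr ** ((gamma - 1) / gamma) - 1))

def area_ratio(M, gamma=1.4):
    """A/A* from the area-Mach relation for quasi-one-dimensional isentropic flow."""
    t = (2 + (gamma - 1) * M * M) / (gamma + 1)
    return t ** ((gamma + 1) / (2 * (gamma - 1))) / M
```

At the critical pressure ratio (about 1.893 for air) the exit flow is exactly sonic, and the area-Mach relation reproduces the familiar A/A* = 1.6875 at M = 2; off-design over- or underexpansion corresponds to an NPR that does not match the geometric area ratio.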
Ribeiro, Apoena A; Purger, Flávia; Rodrigues, Jonas A; Oliveira, Patrícia R A; Lussi, Adrian; Monteiro, Antonio Henrique; Alves, Haimon D L; Assis, Joaquim T; Vasconcellos, Adalberto B
2015-01-01
This in vivo study aimed to evaluate the influence of contact points on the approximal caries detection in primary molars, by comparing the performance of the DIAGNOdent pen and visual-tactile examination after tooth separation to bitewing radiography (BW). A total of 112 children were examined and 33 children were selected. In three periods (a, b, and c), 209 approximal surfaces were examined: (a) examiner 1 performed visual-tactile examination using the Nyvad criteria (EX1); examiner 2 used DIAGNOdent pen (LF1) and took BW; (b) 1 week later, after tooth separation, examiner 1 performed the second visual-tactile examination (EX2) and examiner 2 used DIAGNOdent again (LF2); (c) after tooth exfoliation, surfaces were directly examined using DIAGNOdent (LF3). Teeth were examined by computed microtomography as a reference standard. Analyses were based on diagnostic thresholds: D1: D0 = health, D1–D4 = disease; D2: D0, D1 = health, D2–D4 = disease; D3: D0–D2 = health, D3, D4 = disease. At D1, the highest sensitivity/specificity were observed for EX1 (1.00)/LF3 (0.68), respectively. At D2, the highest sensitivity/specificity were observed for LF3 (0.69)/BW (1.00), respectively. At D3, the highest sensitivity/specificity were observed for LF3 (0.78)/EX1, EX2 and BW (1.00). EX1 showed higher accuracy values than LF1, and EX2 showed similar values to LF2. We concluded that the visual-tactile examination showed better results in detecting sound surfaces and approximal caries lesions without tooth separation. However, the effectiveness of approximal caries lesion detection of both methods was increased by the absence of contact points. Therefore, regardless of the method of detection, orthodontic separating elastics should be used as a complementary tool for the diagnosis of approximal noncavitated lesions in primary molars.
NASA Technical Reports Server (NTRS)
Buglia, James J.; Young, George R.; Timmons, Jesse D.; Brinkworth, Helen S.
1961-01-01
An analytical method has been developed which approximates the dispersion of a spinning symmetrical body in a vacuum, with time-varying mass and inertia characteristics, under the action of several external disturbances: initial pitching rate, thrust misalignment, and dynamic unbalance. The ratio of the roll inertia to the pitch or yaw inertia is assumed constant. Spin was found to be very effective in reducing the dispersion due to an initial pitch rate or thrust misalignment, but was completely ineffective in reducing the dispersion of a dynamically unbalanced body.
NASA Astrophysics Data System (ADS)
Izsák, Róbert; Neese, Frank
2013-07-01
The 'chain of spheres' approximation, developed earlier for the efficient evaluation of the self-consistent field exchange term, is introduced here into the evaluation of the external exchange term of higher order correlation methods. Its performance is studied in the specific case of spin-component-scaled third-order Møller-Plesset perturbation theory (SCS-MP3). The results indicate that the approximation performs excellently in terms of both computer time and achievable accuracy. Significant speedups over a conventional method are obtained for larger systems and basis sets. Owing to this development, SCS-MP3 calculations on molecules of the size of penicillin (42 atoms) with a polarised triple-zeta basis set can be performed in ∼3 hours using 16 cores of an Intel Xeon E7-8837 processor with a 2.67 GHz clock speed, which represents a speedup by a factor of 8-9 compared to the previously most efficient algorithm. Thus, the increased accuracy offered by SCS-MP3 can now be explored for at least medium-sized molecules.
NASA Astrophysics Data System (ADS)
Punkevich, B. S.; Stal, N. L.; Stepanov, B. M.; Khokhlov, V. D.
The possibility of using the multigroup method to determine the physical properties of a beam plasma is substantiated, and the effectiveness of the application of this method is analyzed. The results obtained are compared with solutions of rigorous steady-state kinetic equations and approximate equations corresponding to a model of continuous slowdown and its variants. It is shown that, in the case of the complete slowdown of a fast electron and all the secondary electrons produced by it in He, 51 percent of the primary-electron energy is expended on the ionization of helium atoms, 16 percent is converted into atom thermal energy, and 33 percent is expended on atom excitation. Of this latter 33 percent, 21 percent is expended on the excitation of energy levels corresponding to optically allowed transitions.
Yoshikawa, Takeshi; Nakai, Hiromi
2015-01-30
Graphical processing units (GPUs) are emerging in computational chemistry to include Hartree-Fock (HF) methods and electron-correlation theories. However, ab initio calculations of large molecules face technical difficulties such as slow memory access between central processing unit and GPU and other shortfalls of GPU memory. The divide-and-conquer (DC) method, which is a linear-scaling scheme that divides a total system into several fragments, could avoid these bottlenecks by separately solving local equations in individual fragments. In addition, the resolution-of-the-identity (RI) approximation enables an effective reduction in computational cost with respect to the GPU memory. The present study implemented the DC-RI-HF code on GPUs using math libraries, which guarantee compatibility with future development of the GPU architecture. Numerical applications confirmed that the present code using GPUs significantly accelerated the HF calculations while maintaining accuracy.
NASA Astrophysics Data System (ADS)
Mozharovskiy, A. V.; Artemenko, A. A.; Mal'tsev, A. A.; Maslennikov, R. O.; Sevast'yanov, A. G.; Ssorin, V. N.
2015-11-01
We develop a combined method for calculating the characteristics of the integrated lens antennas for millimeter-wave wireless local radio-communication systems on the basis of the geometrical and physical optics approximations. The method is based on the concepts of geometrical optics for calculating the electromagnetic-field distribution on the lens surface (with allowance for multiple internal re-reflections) and physical optics for determining the antenna-radiated fields in the Fraunhofer zone. Using the developed combined method, we study various integrated lens antennas on the basis of data on the shape and material of the lens used and the primary-feed radiation model, which is specified analytically or by computer simulation. Optimal values of the cylindrical-extension length, which ensure the maximum antenna directivity equal to 19.1 and 23.8 dBi for the greater and smaller lenses, respectively, are obtained for the hemispherical quartz-glass lenses having the cylindrical extensions with radii of 7.5 and 12.5 mm. In this case, the scanning-angle range of the considered antennas is greater than ±20° for an admissible 2-dB decrease in the directivity of the deflected beam. The calculation results obtained using the developed method are confirmed by the experimental studies performed for the prototypes of the integrated quartz-glass lens antennas within the framework of this research.
NASA Astrophysics Data System (ADS)
ANDRE, Frédéric; HOU, Longfeng; SOLOVJOV, Vladimir P.
2016-01-01
The main restriction of k-distribution approaches for applications in radiative heat transfer in gaseous media arises from the use of a scaling or correlation assumption to treat non-uniform situations. It is shown that those cases can be handled exactly by using a multidimensional k-distribution that addresses the problem of spectral correlations without using any simplifying assumptions. Nevertheless, the approach cannot be suggested for engineering applications due to its computational cost. Accordingly, a more efficient method, based on the so-called Multi-Spectral Framework, is proposed to approximate the previous exact formulation. The model is assessed against reference LBL calculations and shown to outperform usual k-distribution approaches for radiative heat transfer in non-uniform media.
Liu, Ran; Wang, Chuan-Kui; Li, Zong-Liang
2016-01-01
Based on ab initio calculation, a method of one-dimension transmission combined with three-dimension correction approximation (OTCTCA) is developed to investigate electron-transport properties of molecular junctions. The method considers that the functional molecule provides a spatial distribution of effective potential field for the electronic transport. The electrons are injected from one electrode by the bias voltage, transmit through the potential field around the functional molecule, and finally enter the other electrode with a transmission probability calculated from the one-dimension Schrödinger equation combined with three-dimension correction. The electron-transport properties of alkane diamines and 4,4′-bipyridine molecular junctions are studied by applying the OTCTCA method. The numerical results show that the conductance obviously exponentially decays with increasing molecular length. When stretching molecular junctions, steps with a certain width are presented in conductance traces. In particular, in the stretching process of a 4,4′-bipyridine molecular junction, if the terminal N atom breaks away from the flat part of the electrode tip and there happens to be a surface Au atom on the tip near the N atom, the molecule generally shifts to absorb on that surface Au atom, which further results in another lower conductance step in the traces, as observed in experimental probing. PMID:26911451
NASA Technical Reports Server (NTRS)
Shirts, R. B.; Reinhardt, W. P.
1982-01-01
Substantial short time regularity, even in the chaotic regions of phase space, is found for what is seen as a large class of systems. This regularity manifests itself through the behavior of approximate constants of motion calculated by Pade summation of the Birkhoff-Gustavson normal form expansion; it is attributed to remnants of destroyed invariant tori in phase space. The remnant torus-like manifold structures are used to justify Einstein-Brillouin-Keller semiclassical quantization procedures for obtaining quantum energy levels, even in the absence of complete tori. They also provide a theoretical basis for the calculation of rate constants for intramolecular mode-mode energy transfer. These results are illustrated by means of a thorough analysis of the Henon-Heiles oscillator problem. Possible generality of the analysis is demonstrated by brief consideration of classical dynamics for the Barbanis Hamiltonian, Zeeman effect in hydrogen and recent results of Wolf and Hase (1980) for the H-C-C fragment.
Ball, J.R.
1986-04-01
This document is a supplement to a "Handbook for Cost Estimating" (NUREG/CR-3971) and provides specific guidance for developing "quick" approximate estimates of the cost of implementing generic regulatory requirements for nuclear power plants. A method is presented for relating the known construction costs for new nuclear power plants (as contained in the Energy Economic Data Base) to the cost of performing similar work, on a back-fit basis, at existing plants. Cost factors are presented to account for variations in such important cost areas as construction labor productivity, engineering and quality assurance, replacement energy, reworking of existing features, and regional variations in the cost of materials and labor. Other cost categories addressed in this handbook include those for changes in plant operating personnel and plant documents, licensee costs, NRC costs, and costs for other government agencies. Data sheets, worksheets, and appropriate cost algorithms are included to guide the user through preparation of rough estimates. A sample estimate is prepared using the method and the estimating tools provided.
Chalasani, P.; Saias, I.; Jha, S.
1996-04-08
As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximate algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
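A minimal Cox-Ross-Rubinstein binomial valuation (European and American puts) illustrates the model the paper analyses; this is the plain backward-induction algorithm, not the authors' approximation schemes for path-dependent (e.g. Asian) options.

```python
import math

def binomial_put(S0, K, r, sigma, T, n, american=False):
    """CRR binomial put value with n periods; set american=True to allow
    early exercise at every node."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    disc = math.exp(-r * dt)
    q = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    # Payoffs at maturity, indexed by the number of up-moves j
    vals = [max(K - S0 * u ** j * d ** (n - j), 0.0) for j in range(n + 1)]
    for step in range(n - 1, -1, -1):
        vals = [disc * (q * vals[j + 1] + (1 - q) * vals[j])
                for j in range(step + 1)]
        if american:  # compare continuation value with immediate exercise
            vals = [max(v, K - S0 * u ** j * d ** (step - j))
                    for j, v in enumerate(vals)]
    return vals[0]

eu = binomial_put(100, 100, 0.05, 0.2, 1.0, 400)
am = binomial_put(100, 100, 0.05, 0.2, 1.0, 400, american=True)
```

For a path-dependent option the state at each node would have to carry path information (e.g. the running average for an Asian option), which is exactly what makes the tree blow up and motivates the paper's complexity results and approximation algorithms.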
Phenomenological applications of rational approximants
NASA Astrophysics Data System (ADS)
Gonzàlez-Solís, Sergi; Masjuan, Pere
2016-08-01
We illustrate the power of Padé approximants (PAs) as a summation method and explore one of their extensions, the so-called quadratic approximants (QAs), to access both the space-like and (low-energy) time-like (TL) regions. As an introductory and pedagogical exercise, the function (1/z)ln(1 + z) is approximated by both kinds of approximants. Then, PAs are applied to predict pseudoscalar meson Dalitz decays and to extract Vub from the semileptonic B → πℓνℓ decays. Finally, the π vector form factor in the TL region is explored using QAs.
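The pedagogical example can be made concrete: the [2/2] Padé approximant is built directly from the Taylor coefficients of f(z) = (1/z)ln(1 + z) by a small linear solve for the denominator, and at z = 1 it already beats the truncated series by orders of magnitude. This is the generic textbook construction; the paper's quadratic approximants are not shown here.

```python
import math
import numpy as np

def pade(c, m, n):
    """Numerator a and denominator b (with b[0] = 1) of the [m/n] Pade
    approximant matching the Taylor coefficients c[0..m+n]."""
    cc = lambda i: c[i] if i >= 0 else 0.0
    A = np.array([[cc(m + k - j) for j in range(1, n + 1)]
                  for k in range(1, n + 1)])
    rhs = -np.array([c[m + k] for k in range(1, n + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    a = [sum(b[j] * cc(i - j) for j in range(min(i, n) + 1))
         for i in range(m + 1)]
    return np.array(a), b

# Taylor coefficients of (1/z) ln(1+z): 1 - z/2 + z^2/3 - z^3/4 + z^4/5 - ...
c = [(-1) ** k / (k + 1) for k in range(5)]
a, b = pade(c, 2, 2)
z = 1.0
approx = np.polyval(a[::-1], z) / np.polyval(b[::-1], z)  # approximates ln 2
```

At z = 1 the five-term Taylor truncation errs by about 0.09, whereas the [2/2] approximant built from the same five coefficients errs by about 2e-4, which is the summation-method advantage the abstract refers to.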
Edison, John R.; Monson, Peter A.
2014-07-14
Recently we have developed a dynamic mean field theory (DMFT) for lattice gas models of fluids in porous materials [P. A. Monson, J. Chem. Phys. 128(8), 084701 (2008)]. The theory can be used to describe the relaxation processes in the approach to equilibrium or metastable states for fluids in pores and is especially useful for studying systems exhibiting adsorption/desorption hysteresis. In this paper we discuss the extension of the theory to higher order by means of the path probability method (PPM) of Kikuchi and co-workers. We show that this leads to a treatment of the dynamics that is consistent with thermodynamics coming from the Bethe-Peierls or Quasi-Chemical approximation for the equilibrium or metastable equilibrium states of the lattice model. We compare the results from the PPM with those from DMFT and from dynamic Monte Carlo simulations. We find that the predictions from PPM are qualitatively similar to those from DMFT but give somewhat improved quantitative accuracy, in part due to the superior treatment of the underlying thermodynamics. This comes at the cost of greater computational expense associated with the larger number of equations that must be solved.
NASA Astrophysics Data System (ADS)
Zhao, ShuanFeng; Liang, Lin; Xu, GuangHua; Wang, Jing; Zhang, WenMing
2013-10-01
Spalling or pitting is the main manifestation of fault development in a bearing during the earlier stages. Previous studies indicated that the vibration signal of a bearing with a spall-like defect may be composed of two parts; the first part originates from the entry of the rolling element into the spall-like area, and the second part refers to the exit from the fault region. The quantitative diagnosis of a spall-like fault of the rolling element bearing can be realised if the entry-exit event times can be accurately calculated. However, the vibration signal of a faulty bearing is usually non-stationary and non-linear with strong background noise interference. Meanwhile, the signal energy from the early spall region is too low to accurately register the features of the entry-exit event in the time domain. In this work, the approximate entropy (ApEn) method and empirical mode decomposition (EMD) are applied to clearly separate the entry-exit events, and thus the size of the spall-like fault is estimated. First, the original acceleration vibration signal is decomposed by EMD, and some useful intrinsic mode function (IMF) components are obtained. Second, the concept of IMF-ApEn is introduced, which can directly reflect the complexity of the IMFs using the actual vibration signal. The IMF-ApEn distributions of different noise signals illustrate that the process of complexity changes when a full spectrum process is split into its IMFs. Third, a unit white noise IMF-ApEn distribution template serves as a sieve to extract the effective intrinsic mode function (EIMF) components, and thus the entry and exit events in the response signal are distinguished. The IMF-ApEn method is further compared with a previous method (N. Sawalhi's method) to test its superiority. The dynamic effects are investigated when the ball element enters a spall-like region by computer simulation. The simulation and the experimental results show that the approach to the quantitative diagnosis of a
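Approximate entropy itself is straightforward to compute from its definition (Pincus); a compact sketch, independent of the EMD machinery and the IMF-ApEn template described above:

```python
import math, random

def approximate_entropy(x, m=2, r=0.2):
    """ApEn(m, r) of a 1-D sequence: a regularity statistic, lower for more
    regular signals. r is the similarity tolerance (often 0.2 * std)."""
    n = len(x)
    def phi(mm):
        pats = [x[i:i + mm] for i in range(n - mm + 1)]
        total = 0.0
        for p in pats:
            # fraction of length-mm patterns within Chebyshev distance r of p
            c = sum(1 for q in pats
                    if max(abs(a - b) for a, b in zip(p, q)) <= r)
            total += math.log(c / len(pats))
        return total / len(pats)
    return phi(m) - phi(m + 1)

rng = random.Random(0)
regular = [math.sin(0.4 * i) for i in range(120)]   # highly regular signal
noisy = [rng.uniform(-1, 1) for _ in range(120)]    # irregular signal
```

The discriminating property the paper exploits is visible even in this toy: a periodic signal yields a much smaller ApEn than broadband noise, so ApEn computed per IMF can flag which components carry the structured entry-exit transients.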
Rasin, A.
1994-04-01
We discuss the idea of approximate flavor symmetries. Relations between approximate flavor symmetries and natural flavor conservation and democracy models are explored. Implications for neutrino physics are also discussed.
Adaptive approximation models in optimization
Voronin, A.N.
1995-05-01
The paper proposes a method for optimization of functions of several variables that substantially reduces the number of objective function evaluations compared to traditional methods. The method is based on the property of iterative refinement of approximation models of the optimand function in approximation domains that contract to the extremum point. It does not require subjective specification of the starting point, step length, or other parameters of the search procedure. The method is designed for efficient optimization of unimodal functions of several (not more than 10-15) variables and can be applied to find the global extremum of polymodal functions and also for optimization of scalarized forms of vector objective functions.
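The iterative-contraction idea can be sketched as a grid search whose approximation domain shrinks around the incumbent best point each round. This is an illustrative toy, not the paper's approximation-model machinery; all settings are assumptions.

```python
import itertools

def contracting_grid_search(f, bounds, points=5, rounds=25, shrink=0.6):
    """Evaluate f on a coarse grid, recentre a contracted domain on the best
    point found so far, and repeat: the approximation domain contracts
    toward the extremum, requiring no starting point or step length."""
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    best = None
    for _ in range(rounds):
        axes = [[l + (h - l) * i / (points - 1) for i in range(points)]
                for l, h in zip(lo, hi)]
        cand = min(itertools.product(*axes), key=f)
        if best is None or f(cand) < f(best):
            best = cand
        # contract the approximation domain around the current best point
        half = [(h - l) * shrink / 2 for l, h in zip(lo, hi)]
        lo = [c - hw for c, hw in zip(best, half)]
        hi = [c + hw for c, hw in zip(best, half)]
    return best

best = contracting_grid_search(
    lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
    [(-5.0, 5.0), (-5.0, 5.0)])
```

For a unimodal function the contracting domains keep the extremum inside while the grid resolution improves geometrically, so only points**dim evaluations per round are spent, which is the kind of evaluation economy the paper claims.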
NASA Astrophysics Data System (ADS)
Niiniluoto, Ilkka
2014-03-01
Approximation of laws is an important theme in the philosophy of science. If we can make sense of the idea that two scientific laws are "close" to each other, then we can also analyze such methodological notions as approximate explanation of laws, approximate reduction of theories, approximate empirical success of theories, and approximate truth of laws. Proposals for measuring the distance between quantitative scientific laws were given in Niiniluoto (1982, 1987). In this paper, these definitions are reconsidered as a response to the interesting critical remarks by Liu (1999).
Hayes, S; Taylor, R; Paterson, A
2005-12-01
Forensic facial approximation involves building a likeness of the head and face on the skull of an unidentified individual, with the aim that public broadcast of the likeness will trigger recognition in those who knew the person in life. This paper presents an overview of the collaborative practice between Ronn Taylor (Forensic Sculptor to the Victorian Institute of Forensic Medicine) and Detective Sergeant Adrian Paterson (Victoria Police Criminal Identification Squad). This collaboration involves clay modelling to determine an approximation of the person's head shape and feature location, with surface texture and more speculative elements being rendered digitally onto an image of the model. The advantages of this approach are that through clay modelling anatomical contouring is present, digital enhancement resolves some of the problems of visual perception of a representation, such as edge and shape determination, and the approximation can be easily modified as and when new information is received. PMID:16353755
Approximations for photoelectron scattering
NASA Astrophysics Data System (ADS)
Fritzsche, V.
1989-04-01
The errors of several approximations in the theoretical approach of photoelectron scattering are systematically studied, in tungsten, for electron energies ranging from 10 to 1000 eV. The large inaccuracies of the plane-wave approximation (PWA) are substantially reduced by means of effective scattering amplitudes in the modified small-scattering-centre approximation (MSSCA). The reduced angular momentum expansion (RAME) is so accurate that it allows reliable calculations of multiple-scattering contributions for all the energies considered.
Approximate line shapes for hydrogen
NASA Technical Reports Server (NTRS)
Sutton, K.
1978-01-01
Two independent methods are presented for calculating radiative transport within hydrogen lines. In Method 1, a simple equation is proposed for calculating the line shape. In Method 2, the line shape is assumed to be a dispersion profile and an equation is presented for calculating the half half-width. The results obtained for the line shapes and curves of growth by the two approximate methods are compared with similar results using the detailed line shapes by Vidal et al.
NASA Technical Reports Server (NTRS)
Dutta, Soumitra
1988-01-01
A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.
Rebolini, Elisa; Izsák, Róbert; Reine, Simen Sommerfelt; Helgaker, Trygve; Pedersen, Thomas Bondo
2016-08-01
We compare the performance of three approximate methods for speeding up evaluation of the exchange contribution in Hartree-Fock and hybrid Kohn-Sham calculations: the chain-of-spheres algorithm (COSX; Neese, F. Chem. Phys. 2008, 356, 98-109), the pair-atomic resolution-of-identity method (PARI-K; Merlot, P. J. Comput. Chem. 2013, 34, 1486-1496), and the auxiliary density matrix method (ADMM; Guidon, M. J. Chem. Theory Comput. 2010, 6, 2348-2364). Both the efficiency relative to that of a conventional linear-scaling algorithm and the accuracy of total, atomization, and orbital energies are compared for a subset containing 25 of the 200 molecules in the Rx200 set using double-, triple-, and quadruple-ζ basis sets. The accuracy of relative energies is further compared for small alkane conformers (ACONF test set) and Diels-Alder reactions (DARC test set). Overall, we find that the COSX method provides good accuracy for orbital energies as well as total and relative energies, and the method delivers a satisfactory speedup. The PARI-K and in particular ADMM algorithms require further development and optimization to fully exploit their indisputable potential.
Approximation by hinge functions
Faber, V.
1997-05-01
Breiman has defined "hinge functions" for use as basis functions in least squares approximations to data. A hinge function is the max (or min) of two linear functions. In this paper, the author assumes the existence of a smooth function f(x) and a set of samples of the form (x, f(x)) drawn from a probability distribution ρ(x), and seeks the best-fitting hinge function h(x) in the least squares sense. There are two problems with this plan. First, the algorithm Breiman suggested to perform this fit is not robust; the author shows how to create examples on which it diverges. Second, if the data are used to minimize the fit in the usual discrete least squares sense, the functional that must be minimized is continuous in the variables but has a derivative which jumps at the data. This paper takes a different approach, an example of a method the author has developed called "Monte Carlo Regression". (A paper on the general theory is in preparation.) The author shows that since the function f is continuous, the analytic form of the least squares equation is continuously differentiable. A local minimum is found by Newton's method, where the entries of the Hessian are estimated directly from the data by Monte Carlo. The algorithm has the desirable properties that it is quadratically convergent from any starting guess sufficiently close to a solution and that each iteration requires only a linear system solve.
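The style of alternating fit discussed here can be sketched for a 1D hinge h(x) = max(a*x + b, c*x + d): assign each sample to the currently active plane, refit each plane by ordinary least squares, and repeat. This is an illustration of the basic idea, not the paper's Monte Carlo Regression method, and (as the paper argues for algorithms of this kind) it is not robust in general.

```python
import numpy as np

def fit_hinge(x, y, iters=50):
    # Alternating fit of h(x) = max over two affine pieces:
    # assign each sample to the plane that is currently active there,
    # then refit each plane by ordinary least squares.
    A = np.column_stack([x, np.ones_like(x)])
    active = x > np.median(x)          # initial split of the samples
    p_hi = p_lo = None
    for _ in range(iters):
        if active.sum() < 2 or (~active).sum() < 2:
            break                      # degenerate partition, give up
        p_hi, *_ = np.linalg.lstsq(A[active], y[active], rcond=None)
        p_lo, *_ = np.linalg.lstsq(A[~active], y[~active], rcond=None)
        new_active = A @ p_hi >= A @ p_lo
        if np.array_equal(new_active, active):
            break                      # partition stabilized
        active = new_active
    return p_hi, p_lo

def hinge(x, p_hi, p_lo):
    A = np.column_stack([x, np.ones_like(x)])
    return np.maximum(A @ p_hi, A @ p_lo)
```

On well-separated data such as y = |x| the iteration recovers both planes exactly; on adversarial data it can oscillate or diverge, which is the paper's point.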
An approximation technique for jet impingement flow
Najafi, Mahmoud; Fincher, Donald; Rahni, Taeibi; Javadi, KH.; Massah, H.
2015-03-10
The analytical approximate solution of a non-linear jet impingement flow model will be demonstrated. We will show that this is an improvement over the series approximation obtained via the Adomian decomposition method, which is itself a powerful method for analysing non-linear differential equations. The results of these approximations will be compared with the Runge-Kutta approximation in order to demonstrate their validity.
ERIC Educational Resources Information Center
Dong, Nianbo; Lipsey, Mark
2014-01-01
When randomized controlled trials (RCT) are not feasible, researchers seek other methods to make causal inferences, e.g., propensity score methods. One of the underlying assumptions for the propensity score methods to obtain unbiased treatment effect estimates is the ignorability assumption, that is, conditional on the propensity score, treatment…
NASA Astrophysics Data System (ADS)
Zhang, Shen; Wang, Hongwei; Kang, Wei; Zhang, Ping; He, X. T.
2016-04-01
An extended first-principles molecular dynamics (FPMD) method based on the Kohn-Sham scheme is proposed to elevate the temperature limit of the FPMD method in the calculation of dense plasmas. The extended method treats the wave functions of high energy electrons as plane waves analytically and thus expands the application of the FPMD method to the region of hot dense plasmas without suffering from the formidable computational costs. In addition, the extended method inherits the high accuracy of the Kohn-Sham scheme and keeps the information of electronic structures. This gives an edge to the extended method in the calculation of mixtures of plasmas composed of heterogeneous ions, high-Z dense plasmas, lowering of ionization potentials, X-ray absorption/emission spectra, and opacities, which are of particular interest to astrophysics, inertial confinement fusion engineering, and laboratory astrophysics.
NASA Astrophysics Data System (ADS)
Muszyński, Z.
2013-12-01
Correct assessment of construction safety requires reliable information about the geometrical shape of the analyzed object. The least squares method is the most popular way to calculate the deviation between the theoretical geometry and the real shape of the object as measured by geodetic methods. This paper presents the possibility of using robust estimation methods, taking Hampel's method as an example. Deviation values obtained in this way are resistant to the influence of outliers and are more reliable. The problem is illustrated by a hyperbola approximated to survey points (measured by terrestrial laser scanning) located on the generating line of a cooling tower shell in one of its axial vertical cross-sections.
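The outlier resistance of Hampel-type estimation can be illustrated with a location estimate. The sketch below uses Hampel's three-part redescending weight function with the commonly quoted thresholds a = 1.7, b = 3.4, c = 8.5 in MAD-scaled units; these constants and the simple iteratively reweighted scheme are our assumptions for illustration, not the paper's survey-adjustment formulation.

```python
import numpy as np

def hampel_weights(r, a=1.7, b=3.4, c=8.5):
    # Hampel's three-part redescending weight function psi(r)/r:
    # full weight up to a, then a/|r|, then a linear descent to zero at c.
    ar = np.abs(r)
    w = np.ones_like(ar)
    mid = (ar > a) & (ar <= b)
    tail = (ar > b) & (ar <= c)
    w[mid] = a / ar[mid]
    w[tail] = a * (c - ar[tail]) / ((c - b) * ar[tail])
    w[ar > c] = 0.0
    return w

def robust_location(x, iters=20):
    # Iteratively reweighted location estimate; residuals are scaled by
    # the MAD so the a, b, c thresholds are in "sigma"-like units.
    mu = np.median(x)
    for _ in range(iters):
        scale = 1.4826 * np.median(np.abs(x - mu)) + 1e-12
        w = hampel_weights((x - mu) / scale)
        mu = np.sum(w * x) / np.sum(w)
    return mu
```

Gross outliers receive zero weight and so cannot drag the estimate, which is the behaviour the paper exploits for deviation estimation.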
NASA Astrophysics Data System (ADS)
Asgharzadeh, Hafez; Borazjani, Iman
2014-11-01
Time step-size restrictions and low convergence rates are major bottlenecks for implicit solution of the Navier-Stokes equations in simulations involving complex geometries with moving boundaries. The Newton-Krylov method (NKM) combines a Newton-type method for super-linearly convergent solution of nonlinear equations with Krylov subspace methods for solving the Newton correction equations, which can theoretically address both bottlenecks. The efficiency of this method depends heavily on the Jacobian-forming scheme; e.g., automatic differentiation is very expensive, and Jacobian-free methods slow down as the mesh is refined. A novel, computationally efficient analytical Jacobian for NKM was developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered curvilinear grids with immersed boundaries. The NKM was validated and verified against Taylor-Green vortex and pulsatile flow in a 90 degree bend and efficiently handles complex geometries such as an intracranial aneurysm with multiple overset grids, pulsatile inlet flow and immersed boundaries. The NKM is shown to be more efficient than semi-implicit Runge-Kutta methods and Jacobian-free Newton-Krylov methods. We believe NKM can be applied to many CFD techniques to decrease the computational cost. This work was supported partly by the NIH Grant R03EB014860, and the computational resources were partly provided by the Center for Computational Research (CCR) at University at Buffalo.
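As a small sketch of what a Newton-Krylov solve looks like, the snippet below uses SciPy's `newton_krylov` (a Jacobian-free variant, i.e. the slower alternative the abstract contrasts with its analytical Jacobian) on a toy 1D boundary-value problem u'' = exp(u) with Dirichlet boundaries; the problem and discretization are our choices for illustration, not the paper's Navier-Stokes setup.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Solve u'' = exp(u) on [0, 1] with u(0) = 0, u(1) = 1,
# discretized by central differences on n grid points.
n = 101
dx = 1.0 / (n - 1)

def residual(u):
    r = np.empty_like(u)
    r[0] = u[0]                # left boundary: u(0) = 0
    r[-1] = u[-1] - 1.0        # right boundary: u(1) = 1
    r[1:-1] = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2 - np.exp(u[1:-1])
    return r

# Jacobian-free Newton-Krylov: the Jacobian action is approximated by
# finite differences of the residual, and each Newton correction is
# solved by a Krylov method.
u = newton_krylov(residual, np.linspace(0.0, 1.0, n), f_tol=1e-8)
```

Each Newton step only needs residual evaluations, which is what makes the method attractive for complex geometries, at the cost noted in the abstract.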
Wavelet Sparse Approximate Inverse Preconditioners
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Tang, W.-P.; Wan, W. L.
1996-01-01
There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies by Grote and Huckle and by Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. the Harwell-Boeing collections. Nonetheless, a drawback is that they require rapid decay of the inverse entries so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, in which a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We justify theoretically and numerically that our approach is effective for matrices with smooth inverse. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential but have not yet developed a highly refined and efficient algorithm.
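The key observation, that a smooth inverse is sparse in a wavelet basis even when it is dense in the standard basis, can be demonstrated with a NumPy-only Haar transform (our minimal stand-in for the wavelet machinery of the paper; the matrix, size and threshold are illustrative choices):

```python
import numpy as np

def haar_matrix(n):
    # Orthonormal multilevel Haar transform matrix; n must be a power of 2.
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                 # averages (coarse scales)
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])   # differences (finest details)
    return np.vstack([top, bot]) / np.sqrt(2.0)

n = 64
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Laplacian (elliptic, SPD)
Ainv = np.linalg.inv(A)       # dense and smooth: entries do NOT decay rapidly

W = haar_matrix(n)
B = W @ Ainv @ W.T            # the same inverse expressed in the Haar basis

tol = 1e-2 * np.abs(Ainv).max()
nnz_standard = int((np.abs(Ainv) > tol).sum())
nnz_wavelet = int((np.abs(B) > tol).sum())
# Far fewer entries survive thresholding in the wavelet basis.
```

Since W is orthogonal, thresholding in either basis commits the same Frobenius-norm error budget, so the entry counts are directly comparable.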
ERIC Educational Resources Information Center
Wolff, Hans
This paper deals with a stochastic process for the approximation of the root of a regression equation. This process was first suggested by Robbins and Monro. The main result here is a necessary and sufficient condition on the iteration coefficients for convergence of the process (convergence with probability one and convergence in the quadratic…
NASA Astrophysics Data System (ADS)
Huang, Siendong
2009-11-01
The nonlocality of quantum states on a bipartite system 𝒜+ℬ is tested by comparing probabilistic outcomes of two local observables of different subsystems. For a fixed observable A of the subsystem 𝒜, its optimal approximate double A' of the other system ℬ is defined such that the probabilistic outcomes of A' are almost similar to those of the fixed observable A. The case of σ-finite standard von Neumann algebras is considered and the optimal approximate double A' of an observable A is explicitly determined. The connection between optimal approximate doubles and quantum correlations is explained. Inspired by quantum states with perfect correlation, like Einstein-Podolsky-Rosen states and Bohm states, the nonlocality power of an observable A for general quantum states is defined as the similarity that the outcomes of A look like the properties of the subsystem ℬ corresponding to A'. As an application of optimal approximate doubles, the maximal Bell correlation of a pure entangled state on ℬ(ℂ²)⊗ℬ(ℂ²) is found explicitly.
Computer Experiments for Function Approximations
Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C
2007-10-15
This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models, such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LPτ, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes. The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum and functions with a bounded first derivative.
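The sample-then-surrogate workflow can be sketched with SciPy's Latin hypercube sampler and a simple least-squares quadratic surrogate. The test function and the polynomial surrogate are our stand-ins for the expensive simulation and the MARS/SVM models studied in the paper.

```python
import numpy as np
from scipy.stats import qmc

def toy_simulation(X):
    # Hypothetical cheap stand-in for an expensive simulation code.
    return np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2

# Space-filling Latin hypercube design in the unit square.
sampler = qmc.LatinHypercube(d=2, seed=1)
X = sampler.random(60)
y = toy_simulation(X)

def features(X):
    # Full quadratic polynomial basis in two variables.
    x0, x1 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x0, x1, x0 * x1, x0**2, x1**2])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# The surrogate is now cheap to evaluate anywhere in the parameter space.
X_test = sampler.random(200)
err = np.abs(features(X_test) @ coef - toy_simulation(X_test)).max()
```

Sixty simulation runs buy a surrogate whose evaluation cost is a single matrix-vector product, which is the economics the abstract describes.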
NASA Astrophysics Data System (ADS)
de Lorenzo, Salvatore; Bianco, Francesca; Del Pezzo, Edoardo
2013-06-01
The coda normalization method is one of the most widely used methods for inferring the attenuation parameters Qα and Qβ. Since the geometrical spreading exponent γ is an unknown model parameter in this method, most studies assume a fixed γ, generally equal to 1. However, γ and Q could also be jointly inferred from the non-linear inversion of coda-normalized logarithms of amplitudes, but the trade-off between γ and Q could give rise to unreasonable values of these parameters. To minimize this trade-off, an inversion method based on a parabolic expression of the coda-normalization equation has been developed. The method has been applied to the waveforms recorded during the 1997 Umbria-Marche seismic crisis. The Akaike criterion has been used to compare results of the parabolic model with those of the linear model, corresponding to γ = 1. A small deviation from spherical geometrical spreading has been inferred, but this is accompanied by a significant variation of the Qα and Qβ values. For almost all the considered stations, Qα was inferred to be smaller than Qβ, confirming that seismic attenuation in the Umbria-Marche region is controlled by crustal pore fluids.
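The joint inversion for γ and Q (and its trade-off) can be illustrated on synthetic data with the standard log-amplitude model ln A = const − γ ln r − (π f / (Q v)) r, solved by linear least squares. This is our simplified illustration, not the paper's parabolic formulation, and all parameter values are invented.

```python
import numpy as np

# Synthetic joint inversion of the spreading exponent gamma and quality
# factor Q from coda-normalized amplitudes (illustrative values only):
#   ln A = const - gamma * ln(r) - (pi * f / (Q * v)) * r
f, v = 6.0, 3.5                    # frequency (Hz), shear velocity (km/s)
gamma_true, Q_true = 1.1, 300.0
rng = np.random.default_rng(0)
r = rng.uniform(20.0, 150.0, 200)  # hypocentral distances (km)
lnA = 2.0 - gamma_true * np.log(r) - (np.pi * f / (Q_true * v)) * r
lnA += rng.normal(0.0, 0.02, r.size)   # observational noise

# Linear in the unknowns [gamma, pi*f/(Q*v), const]:
G = np.column_stack([-np.log(r), -r, np.ones_like(r)])
m, *_ = np.linalg.lstsq(G, lnA, rcond=None)
gamma_est = m[0]
Q_est = np.pi * f / (v * m[1])
```

Because ln r and r are strongly correlated over a limited distance range, the two columns of G are nearly collinear; that collinearity is the γ-Q trade-off the paper sets out to minimize.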
Optimizing the Zeldovich approximation
NASA Technical Reports Server (NTRS)
Melott, Adrian L.; Pellman, Todd F.; Shandarin, Sergei F.
1994-01-01
We have recently learned that the Zeldovich approximation can be successfully used for a far wider range of gravitational instability scenarios than formerly proposed; we study here how to extend this range. In previous work (Coles, Melott and Shandarin 1993, hereafter CMS) we studied the accuracy of several analytic approximations to gravitational clustering in the mildly nonlinear regime. We found that what we called the 'truncated Zeldovich approximation' (TZA) was better than any other (except in one case the ordinary Zeldovich approximation) over a wide range from linear to mildly nonlinear (σ approximately 3) regimes. TZA was specified by setting Fourier amplitudes equal to zero for all wavenumbers greater than k_nl, where k_nl marks the transition to the nonlinear regime. Here, we study the cross-correlation of generalized TZA with a group of n-body simulations for three shapes of window function: sharp k-truncation (as in CMS), a top-hat in coordinate space, or a Gaussian. We also study the variation in the cross-correlation as a function of initial truncation scale within each type. We find that k-truncation, which was so much better than other things tried in CMS, is the worst of these three window shapes. We find that a Gaussian window exp(-k²/2k_G²) applied to the initial Fourier amplitudes is the best choice. It produces a greatly improved cross-correlation in those cases which most needed improvement, e.g. those with more small-scale power in the initial conditions. The optimum choice of k_G for the Gaussian window is (a somewhat spectrum-dependent) 1 to 1.5 times k_nl. Although all three windows produce similar power spectra and density distribution functions after application of the Zeldovich approximation, the agreement of the phases of the Fourier components with the n-body simulation is better for the Gaussian window. We therefore ascribe the success of the best-choice Gaussian window to its superior treatment
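Applying the Gaussian window exp(-k²/2k_G²) to the initial Fourier amplitudes is a one-line filter in Fourier space; a minimal 2D sketch (grid size, units and k_G value are our illustrative assumptions):

```python
import numpy as np

def gaussian_truncate(delta, k_G):
    # Multiply the Fourier amplitudes of the initial density field by
    # exp(-k^2 / (2 k_G^2)), the best-choice window of the paper.
    n = delta.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n)      # wavenumbers, grid spacing = 1
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    dk = np.fft.fft2(delta) * np.exp(-k2 / (2.0 * k_G**2))
    return np.real(np.fft.ifft2(dk))
```

The window leaves the k = 0 (mean) mode untouched and smoothly suppresses power above roughly k_G, in contrast to the sharp k-truncation of CMS.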
NASA Astrophysics Data System (ADS)
Cassam-Chenaï, Patrick; Suo, Bingbing; Liu, Wenjian
2015-07-01
We introduce the electron-nucleus mean-field configuration-interaction (EN-MFCI) approach. It consists in building an effective Hamiltonian for the electrons taking into account a mean field due to the nuclear motion and, conversely, in building an effective Hamiltonian for the nuclear motion taking into account a mean field due to the electrons. The eigenvalue problems of these Hamiltonians are solved in basis sets giving partial eigensolutions for the active degrees of freedom (DOF's), that is to say, either for the electrons or for nuclear motion. The process can be iterated or electron and nuclear motion DOF's can be contracted in a CI calculation. In the EN-MFCI reduction of the molecular Schrödinger equation to an electronic and a nuclear problem, the electronic wave functions do not depend parametrically upon nuclear coordinates. So, it is different from traditional adiabatic methods. Furthermore, when contracting electronic and nuclear functions, a direct product basis set is built in contrast with methods which treat electrons and nuclei on the same footing, but where electron-nucleus explicitly correlated coordinates are used. Also, the EN-MFCI approach can make use of the partition of molecular DOF's into translational, rotational, and internal DOF's. As a result, there is no need to eliminate translations and rotations from the calculation, and the convergence of vibrational levels is facilitated by the use of appropriate internal coordinates. The method is illustrated on diatomic molecules.
Pilati, D A
1976-11-01
Biological data on the temperature preferences of fish indicate that, in general, they will be attracted to thermal discharges in the winter. This attraction to warmer temperatures increases their vulnerability to cold shock if the discharge heat source is discontinued. A scheme is proposed to predict the near-field thermal plume environmental temperatures during a power transient. This method can be applied to any jet discharge for which a steady-state model exists. The proposed transient model has been applied to an operating reactor. The predicted results illustrate how very rapidly the maximum temperatures decrease after an abrupt shutdown. This model can be employed to help assess the impact where cold shock may be a problem. Such predictions could also be the basis for restrictions on scheduled midwinter plant shutdowns.
Quantum tunneling beyond semiclassical approximation
NASA Astrophysics Data System (ADS)
Banerjee, Rabin; Ranjan Majhi, Bibhas
2008-06-01
Hawking radiation as tunneling by Hamilton-Jacobi method beyond semiclassical approximation is analysed. We compute all quantum corrections in the single particle action revealing that these are proportional to the usual semiclassical contribution. We show that a simple choice of the proportionality constants reproduces the one loop back reaction effect in the spacetime, found by conformal field theory methods, which modifies the Hawking temperature of the black hole. Using the law of black hole mechanics we give the corrections to the Bekenstein-Hawking area law following from the modified Hawking temperature. Some examples are explicitly worked out.
NASA Technical Reports Server (NTRS)
Schwenke, David W.
1993-01-01
We report the results of a series of calculations of state-to-state integral cross sections for collisions between O and nonvibrating H2O in the gas phase on a model nonreactive potential energy surface. The dynamical methods used include converged quantum mechanical scattering calculations, the j(z) conserving centrifugal sudden (j(z)-CCS) approximation, and quasi-classical trajectory (QCT) calculations. We consider three total energies 0.001, 0.002, and 0.005 E(h) and the nine initial states with rotational angular momentum less than or equal to 2 (h/2 pi). The j(z)-CCS approximation gives good results, while the QCT method can be quite unreliable for transitions to specific rotational sublevels. However, the QCT cross sections summed over final sublevels and averaged over initial sublevels are in better agreement with the quantum results.
Ultrafast approximation for phylogenetic bootstrap.
Minh, Bui Quang; Nguyen, Minh Anh Thi; von Haeseler, Arndt
2013-05-01
Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and the Shimodaira-Hasegawa-like approximate likelihood ratio test have been introduced to speed up the bootstrap. Here, we suggest an ultrafast bootstrap approximation approach (UFBoot) to compute the support of phylogenetic groups in maximum likelihood (ML) based trees. To achieve this, we combine the resampling estimated log-likelihood method with a simple but effective collection scheme of candidate trees. We also propose a stopping rule that assesses the convergence of branch support values to automatically determine when to stop collecting candidate trees. UFBoot achieves a median speedup of 3.1 (range: 0.66-33.3) to 10.2 (range: 1.32-41.4) compared with RAxML RBS for real DNA and amino acid alignments, respectively. Moreover, our extensive simulations show that UFBoot is robust against moderate model violations and the support values obtained appear to be relatively unbiased compared with the conservative standard bootstrap. This provides a more direct interpretation of the bootstrap support. We offer an efficient and easy-to-use software (available at http://www.cibiv.at/software/iqtree) to perform the UFBoot analysis with ML tree inference.
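The underlying principle, resample the data with replacement and report how often a feature of the estimate recurs, can be sketched generically. This is the plain nonparametric bootstrap on a toy statistic, not UFBoot itself; the data and hypothesis are invented for illustration.

```python
import numpy as np

def bootstrap_support(data, statistic, hypothesis, n_boot=2000, seed=0):
    # Nonparametric bootstrap: resample with replacement and report the
    # fraction of replicates in which `hypothesis` holds for the
    # recomputed statistic -- the same principle the phylogenetic
    # bootstrap applies to alignment columns and clades.
    rng = np.random.default_rng(seed)
    n = len(data)
    hits = 0
    for _ in range(n_boot):
        sample = data[rng.integers(0, n, n)]
        if hypothesis(statistic(sample)):
            hits += 1
    return hits / n_boot

x = np.array([0.8, 1.3, 0.2, 1.9, 0.7, 1.1, 0.4, 1.6])
support = bootstrap_support(x, np.mean, lambda m: m > 0.5)
```

Each replicate requires re-evaluating the statistic on the resampled data, which is exactly why the full bootstrap is a bottleneck for likelihood-based tree inference and why approximations like RBS and UFBoot exist.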
Approximate Counting of Graphical Realizations
2015-01-01
In 1999 Kannan, Tetali and Vempala proposed an MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmatively for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics for counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with a provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence, a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible and therefore provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting all realizations. PMID:26161994
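The basic move of such chains is the degree-preserving double edge swap: replace edges (a, b) and (c, d) by (a, d) and (c, b) whenever the graph stays simple. The sketch below shows only the raw swap chain; the lazy steps and acceptance corrections needed for the uniformity and mixing guarantees discussed above are omitted.

```python
import random

def double_edge_swap(edges, n_steps=1000, seed=0):
    # MCMC over simple graphs with a fixed degree sequence: pick two
    # edges (a, b), (c, d) and rewire them to (a, d), (c, b) when the
    # move creates neither a self-loop nor a multi-edge.
    rng = random.Random(seed)
    edge_set = {frozenset(e) for e in edges}
    edge_list = list(edge_set)
    for _ in range(n_steps):
        i, j = rng.sample(range(len(edge_list)), 2)
        a, b = tuple(edge_list[i])
        c, d = tuple(edge_list[j])
        new1, new2 = frozenset((a, d)), frozenset((c, b))
        if len(new1) < 2 or len(new2) < 2:        # would create a self-loop
            continue
        if new1 in edge_set or new2 in edge_set:  # would create a multi-edge
            continue
        edge_set -= {edge_list[i], edge_list[j]}
        edge_set |= {new1, new2}
        edge_list[i], edge_list[j] = new1, new2
    return edge_set
```

Each accepted swap removes one endpoint incidence and adds another at every affected vertex, so all vertex degrees are invariants of the chain.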
Tuna, Deniz; Lefrancois, Daniel; Wolański, Łukasz; Gozem, Samer; Schapiro, Igor; Andruniów, Tadeusz; Dreuw, Andreas; Olivucci, Massimo
2015-12-01
As a minimal model of the chromophore of rhodopsin proteins, the penta-2,4-dieniminium cation (PSB3) poses a challenging test system for the assessment of electronic-structure methods for the exploration of ground- and excited-state potential-energy surfaces, the topography of conical intersections, and the dimensionality (topology) of the branching space. Herein, we report on the performance of the approximate linear-response coupled-cluster method of second order (CC2) and the algebraic-diagrammatic-construction scheme of the polarization propagator of second and third orders (ADC(2) and ADC(3)). For the ADC(2) method, we considered both the strict and extended variants (ADC(2)-s and ADC(2)-x). For both CC2 and ADC methods, we also tested the spin-component-scaled (SCS) and spin-opposite-scaled (SOS) variants. We have explored several ground- and excited-state reaction paths, a circular path centered around the S1/S0 surface crossing, and a 2D scan of the potential-energy surfaces along the branching space. We find that the CC2 and ADC methods yield a different dimensionality of the intersection space. While the ADC methods yield a linear intersection topology, we find a conical intersection topology for the CC2 method. We present computational evidence showing that the linear-response CC2 method yields a surface crossing between the reference state and the first response state featuring characteristics that are expected for a true conical intersection. Finally, we test the performance of these methods for the approximate geometry optimization of the S1/S0 minimum-energy conical intersection and compare the geometries with available data from multireference methods. The present study provides new insight into the performance of linear-response CC2 and polarization-propagator ADC methods for molecular electronic spectroscopy and applications in computational photochemistry. PMID:26642989
Roy, Swapnoneel; Thakur, Ashok Kumar
2008-01-01
Genome rearrangements have been modelled by a variety of primitives such as reversals, transpositions, block moves and block interchanges. We consider one such genome rearrangement primitive, strip exchanges. A strip exchanging move interchanges the positions of two chosen strips so that they merge with other strips; the strip exchange problem is to sort a permutation using the minimum number of strip exchanges. We present the first non-trivial 2-approximation algorithm for this problem. We also observe that sorting by strip exchanges is fixed-parameter tractable. Lastly we discuss the application of strip exchanges in a different area, Optical Character Recognition (OCR), with an example.
Hierarchical Approximate Bayesian Computation
Turner, Brandon M.; Van Zandt, Trisha
2013-01-01
Approximate Bayesian computation (ABC) is a powerful technique for estimating the posterior distribution of a model’s parameters. It is especially important when the model to be fit has no explicit likelihood function, which happens for computational (or simulation-based) models such as those that are popular in cognitive neuroscience and other areas in psychology. However, ABC is usually applied only to models with few parameters. Extending ABC to hierarchical models has been difficult because high-dimensional hierarchical models add computational complexity that conventional ABC cannot accommodate. In this paper we summarize some current approaches for performing hierarchical ABC and introduce a new algorithm called Gibbs ABC. This new algorithm incorporates well-known Bayesian techniques to improve the accuracy and efficiency of the ABC approach for estimation of hierarchical models. We then use the Gibbs ABC algorithm to estimate the parameters of two models of signal detection, one with and one without a tractable likelihood function. PMID:24297436
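The core ABC idea that Gibbs ABC builds on is simplest in its rejection form: draw parameters from the prior, simulate data, and keep draws whose simulated summary lands close to the observed one. The sketch below uses a toy Gaussian-mean model of our own choosing, not the signal-detection models or the hierarchical Gibbs ABC algorithm of the paper.

```python
import numpy as np

def rejection_abc(observed, simulate, prior_draw, distance, eps,
                  n_draws=20000, seed=0):
    # Basic rejection ABC: keep prior draws whose simulated summary
    # statistic lands within eps of the observed one.  No likelihood
    # evaluation is ever needed, only forward simulation.
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw(rng)
        if distance(simulate(theta, rng), observed) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy model: data are Gaussian with unknown mean, summarized by the sample mean.
obs_mean = 3.0
post = rejection_abc(
    observed=obs_mean,
    simulate=lambda th, rng: rng.normal(th, 1.0, 100).mean(),
    prior_draw=lambda rng: rng.uniform(-10.0, 10.0),
    distance=lambda a, b: abs(a - b),
    eps=0.2,
)
```

The accepted draws approximate the posterior. The acceptance rate collapses as the parameter dimension grows, which is exactly the difficulty with hierarchical models that motivates the Gibbs ABC algorithm.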
Approximate Bayesian multibody tracking.
Lanz, Oswald
2006-09-01
Visual tracking of multiple targets is a challenging problem, especially when efficiency is an issue. Occlusions, if not properly handled, are a major source of failure. Solutions supporting principled occlusion reasoning have been proposed but are as yet impractical for online applications. This paper presents a new solution which effectively manages the trade-off between reliable modeling and computational efficiency. The Hybrid Joint-Separable (HJS) filter is derived from a joint Bayesian formulation of the problem, and shown to be efficient while optimal in terms of compact belief representation. Computational efficiency is achieved by employing a Markov random field approximation to joint dynamics and an incremental algorithm for posterior update with an appearance likelihood that implements a physically-based model of the occlusion process. A particle filter implementation is proposed which achieves accurate tracking during partial occlusions, while in cases of complete occlusion, tracking hypotheses are bound to estimated occlusion volumes. Experiments show that the proposed algorithm is efficient, robust, and able to resolve long-term occlusions between targets with identical appearance. PMID:16929730
Approximate Analysis of Semiconductor Laser Arrays
NASA Technical Reports Server (NTRS)
Marshall, William K.; Katz, Joseph
1987-01-01
Simplified equation yields useful information on gains and output patterns. Theoretical method based on approximate waveguide equation enables prediction of lateral modes of gain-guided planar array of parallel semiconductor lasers. Equation for entire array solved directly using piecewise approximation of index of refraction by simple functions without the customary approximation based on coupled waveguide modes of individual lasers. Improved results yield better understanding of laser-array modes and help in development of well-behaved high-power semiconductor laser arrays.
APPROXIMATE MULTIPHASE FLOW MODELING BY CHARACTERISTIC METHODS
The flow of petroleum hydrocarbons, organic solvents and other liquids that are immiscible with water presents the nation with some of the most difficult subsurface remediation problems. One aspect of contaminant transport associated with releases of such liquids is the transport as a...
Fermion tunneling beyond semiclassical approximation
NASA Astrophysics Data System (ADS)
Majhi, Bibhas Ranjan
2009-02-01
Applying the Hamilton-Jacobi method beyond the semiclassical approximation prescribed in R. Banerjee and B. R. Majhi, J. High Energy Phys. 06 (2008) 095 for the scalar particle, Hawking radiation as tunneling of the Dirac particle through an event horizon is analyzed. We show that, as before, all quantum corrections in the single particle action are proportional to the usual semiclassical contribution. We also compute the modifications to the Hawking temperature and Bekenstein-Hawking entropy for the Schwarzschild black hole. Finally, the coefficient of the logarithmic correction to entropy is shown to be related to the trace anomaly.
NASA Astrophysics Data System (ADS)
Lubkin, Elihu
2002-04-01
In 1993 (E. & T. Lubkin, Int. J. Theor. Phys. 32, 993 (1993)) we gave exact mean trace
IONIS: Approximate atomic photoionization intensities
NASA Astrophysics Data System (ADS)
Heinäsmäki, Sami
2012-02-01
A program to compute relative atomic photoionization cross sections is presented. The code applies the output of the multiconfiguration Dirac-Fock method for atoms in the single active electron scheme, by computing the overlap of the bound electron states in the initial and final states. The contribution from the single-particle ionization matrix elements is assumed to be the same for each final state. This method gives rather accurate relative ionization probabilities provided the single-electron ionization matrix elements do not depend strongly on energy in the region considered. The method is especially suited for open shell atoms where electronic correlation in the ionic states is large. Program summary. Program title: IONIS Catalogue identifier: AEKK_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKK_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1149 No. of bytes in distributed program, including test data, etc.: 12 877 Distribution format: tar.gz Programming language: Fortran 95 Computer: Workstations Operating system: GNU/Linux, Unix Classification: 2.2, 2.5 Nature of problem: Photoionization intensities for atoms. Solution method: The code applies the output of the multiconfiguration Dirac-Fock codes Grasp92 [1] or Grasp2K [2], to compute approximate photoionization intensities. The intensity is computed within the one-electron transition approximation and by assuming that the sum of the single-particle ionization probabilities is the same for all final ionic states. Restrictions: The program gives nonzero intensities for those transitions where only one electron is removed from the initial configuration(s). Shake-type many-electron transitions are not computed. The ionized shell must be closed in the initial state. Running time: Few seconds for a
Approximate Shortest Path Queries Using Voronoi Duals
NASA Astrophysics Data System (ADS)
Honiden, Shinichi; Houle, Michael E.; Sommer, Christian; Wolff, Martin
We propose an approximation method to answer point-to-point shortest path queries in undirected edge-weighted graphs, based on random sampling and Voronoi duals. We compute a simplification of the graph by selecting nodes independently at random with probability p. Edges are generated as the Voronoi dual of the original graph, using the selected nodes as Voronoi sites. This overlay graph allows for fast computation of approximate shortest paths for general, undirected graphs. The time-quality tradeoff decision can be made at query time. We provide bounds on the approximation ratio of the path lengths as well as experimental results. The theoretical worst-case approximation ratio is bounded by a logarithmic factor. Experiments show that our approximation method based on Voronoi duals has extremely fast preprocessing time and efficiently computes reasonably short paths.
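The construction described above can be sketched directly: sample sites, assign every node to its nearest site with a multi-source Dijkstra, and connect sites whose Voronoi cells touch. This is an illustrative reimplementation under our own simplifications (graphs as adjacency dicts, a single entry/exit site per query endpoint), not the authors' code:

```python
import heapq
import random

def dijkstra(adj, sources):
    """Multi-source Dijkstra: distance to the nearest source and which one it is."""
    dist, site = {}, {}
    pq = [(0.0, s, s) for s in sources]
    heapq.heapify(pq)
    while pq:
        d, u, s = heapq.heappop(pq)
        if u in dist:
            continue
        dist[u], site[u] = d, s
        for v, w in adj[u].items():
            if v not in dist:
                heapq.heappush(pq, (d + w, v, s))
    return dist, site

def build_overlay(adj, p, rng):
    """Sample Voronoi sites with probability p and build the dual overlay graph."""
    sites = [u for u in adj if rng.random() < p] or [next(iter(adj))]
    dist, site = dijkstra(adj, sites)
    dual = {s: {} for s in sites}
    for u in adj:
        for v, w in adj[u].items():
            su, sv = site[u], site[v]
            if su != sv:  # this edge crosses a Voronoi cell boundary
                cost = dist[u] + w + dist[v]  # real walk su -> u -> v -> sv
                if cost < dual[su].get(sv, float("inf")):
                    dual[su][sv] = dual[sv][su] = cost
    return dual, dist, site

def approx_distance(dual, dist, site, s, t):
    """Upper bound on d(s, t): walk s -> site(s), travel the dual, site(t) -> t."""
    d_dual, _ = dijkstra(dual, [site[s]])
    return dist[s] + d_dual.get(site[t], float("inf")) + dist[t]
```

Because every overlay edge corresponds to a real walk in the original graph, the returned value never undershoots the true shortest-path distance.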
Approximate Solutions Of Equations Of Steady Diffusion
NASA Technical Reports Server (NTRS)
Edmonds, Larry D.
1992-01-01
Rigorous analysis yields reliable criteria for "best-fit" functions. Improved "curve-fitting" method yields approximate solutions to differential equations of steady-state diffusion. Method applies to problems in which rates of diffusion depend linearly or nonlinearly on concentrations of diffusants, approximate solutions analytic or numerical, and boundary conditions of Dirichlet type, of Neumann type, or mixture of both types. Applied to equations for diffusion of charge carriers in semiconductors in which mobilities and lifetimes of charge carriers depend on concentrations.
Alternative approximation concepts for space frame synthesis
NASA Technical Reports Server (NTRS)
Lust, R. V.; Schmit, L. A.
1985-01-01
A method for space frame synthesis based on the application of a full gamut of approximation concepts is presented. It is found that with the thoughtful selection of design space, objective function approximation, constraint approximation and mathematical programming problem formulation options it is possible to obtain near minimum mass designs for a significant class of space frame structural systems while requiring fewer than 10 structural analyses. Example problems are presented which demonstrate the effectiveness of the method for frame structures subjected to multiple static loading conditions with limits on structural stiffness and strength.
NASA Astrophysics Data System (ADS)
Sahal-Bréchot, Sylvie; Dimitrijević, Milan; Nessib, Nabil
2014-06-01
"Stark broadening" theory and calculations have been extensively developed for about 50 years. The theory can now be considered as mature for many applications, especially for accurate spectroscopic diagnostics and modeling, in astrophysics, laboratory plasma physics and technological plasmas, as well. This requires the knowledge of numerous collisional line profiles. In order to meet these needs, the "SCP" (semiclassical perturbation) method and numerical code were created and developed. The SCP code is now extensively used for the needs of spectroscopic diagnostics and modeling, and the results of the published calculations are displayed in the STARK-B database. The aim of the present paper is to introduce the main approximations leading to the impact of semiclassical perturbation method and to give formulae entering the numerical SCP code, in order to understand the validity conditions of the method and of the results; and also to understand some regularities and systematic trends. This would also allow one to compare the method and its results to those of other methods and codes.
Novel bivariate moment-closure approximations.
Krishnarajah, Isthrinayagy; Marion, Glenn; Gibson, Gavin
2007-08-01
Nonlinear stochastic models are typically intractable to analytic solutions and hence, moment-closure schemes are used to provide approximations to these models. Existing closure approximations are often unable to describe transient aspects caused by extinction behaviour in a stochastic process. Recent work has tackled this problem in the univariate case. In this study, we address this problem by introducing novel bivariate moment-closure methods based on mixture distributions. Novel closure approximations are developed, based on the beta-binomial, zero-modified distributions and the log-Normal, designed to capture the behaviour of the stochastic SIS model with varying population size, around the threshold between persistence and extinction of disease. The idea of conditional dependence between variables of interest underlies these mixture approximations. In the first approximation, we assume that the distribution of infectives (I) conditional on population size (N) is governed by the beta-binomial; for the second form, we assume that I is governed by a zero-modified beta-binomial distribution, where in either case N follows a log-Normal distribution. We analyse the impact of coupling and inter-dependency between population variables on the behaviour of the approximations developed. Thus, the approximations are applied in two situations in the case of the SIS model where: (1) the death rate is independent of disease status; and (2) the death rate is disease-dependent. Comparison with simulation shows that these mixture approximations are able to predict disease extinction behaviour and describe transient aspects of the process.
Approximation concepts for efficient structural synthesis
NASA Technical Reports Server (NTRS)
Schmit, L. A., Jr.; Miura, H.
1976-01-01
It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.
Approximate Brueckner orbitals in electron propagator calculations
Ortiz, J.V.
1999-12-01
Orbitals and ground-state correlation amplitudes from the so-called Brueckner doubles approximation of coupled-cluster theory provide a useful reference state for electron propagator calculations. An operator manifold with hole, particle, two-hole-one-particle and two-particle-one-hole components is chosen. The resulting approximation is compared with the two-particle-one-hole Tamm-Dancoff approximation (2ph-TDA), third-order algebraic diagrammatic construction [ADC(3)] and 3+ methods. The enhanced versatility of this approximation is demonstrated through calculations on valence ionization energies, core ionization energies, electron detachment energies of anions, and on a molecule with partial biradical character, ozone.
Multidimensional stochastic approximation Monte Carlo.
Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang
2016-06-01
Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present g(E_{1},E_{2}). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E_{1}+E_{2}) from g(E_{1},E_{2}). PMID:27415383
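A minimal one-dimensional SAMC sketch illustrates the flat-histogram idea on a toy model whose density of states is known exactly (N independent two-state spins, so g(E) = C(N, E) for E up-spins). The gain schedule gamma_t = t0/max(t0, t) is the standard SAMC choice; none of this reproduces the paper's multidimensional polymer application:

```python
import math
import random

def samc(n_spins=10, steps=300_000, t0=10_000, seed=1):
    """Estimate log g(E), E = number of 'up' spins, by flat-histogram SAMC."""
    rng = random.Random(seed)
    theta = [0.0] * (n_spins + 1)     # running estimate of log g(E)
    state = [0] * n_spins
    E = 0
    for t in range(1, steps + 1):
        i = rng.randrange(n_spins)    # propose a single spin flip
        E_new = E + (1 if state[i] == 0 else -1)
        # accept with min(1, g_est(E)/g_est(E_new)): pushes toward a flat histogram
        if math.log(rng.random()) < theta[E] - theta[E_new]:
            state[i] ^= 1
            E = E_new
        theta[E] += t0 / max(t0, t)   # decaying gain (stochastic approximation)
    base = theta[0]
    return [th - base for th in theta]  # normalize so theta[0] = 0 = log g(0)
```

Up to an additive constant, theta[E] should approach log g(E); for 10 spins, theta[5] should approach log C(10, 5) = log 252.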
Semiclassics beyond the diagonal approximation
NASA Astrophysics Data System (ADS)
Turek, Marko
2004-05-01
The statistical properties of the energy spectrum of classically chaotic closed quantum systems are the central subject of this thesis. It has been conjectured by O. Bohigas, M.-J. Giannoni and C. Schmit that the spectral statistics of chaotic systems is universal and can be described by random-matrix theory. This conjecture has been confirmed in many experiments and numerical studies but a formal proof is still lacking. In this thesis we present a semiclassical evaluation of the spectral form factor which goes beyond M. V. Berry's diagonal approximation. To this end we extend a method developed by M. Sieber and K. Richter for a specific system: the motion of a particle on a two-dimensional surface of constant negative curvature. In particular we prove that these semiclassical methods reproduce the random-matrix theory predictions for the next-to-leading-order correction also for a much wider class of systems, namely non-uniformly hyperbolic systems with f > 2 degrees of freedom. We achieve this result by extending the configuration-space approach of M. Sieber and K. Richter to a canonically invariant phase-space approach.
Approximating Functions with Exponential Functions
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2005-01-01
The possibility of approximating a function with a linear combination of exponential functions of the form e^x, e^(2x), ... is considered as a parallel development to the notion of Taylor polynomials which approximate a function with a linear combination of power function terms. The sinusoidal functions sin x and cos x…
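The idea can be tried numerically with a least-squares fit. The basis below (extended with a constant term for convenience) and the target function are our own illustrative choices, not the article's worked example:

```python
import numpy as np

def exp_fit(f, degrees, xs):
    """Least-squares coefficients c_k so that sum_k c_k * e^(k*x) matches f on xs."""
    A = np.column_stack([np.exp(k * xs) for k in degrees])
    c, *_ = np.linalg.lstsq(A, f(xs), rcond=None)
    return c

xs = np.linspace(0.0, 1.0, 200)
degrees = [0, 1, 2, 3]                 # basis 1, e^x, e^(2x), e^(3x)
c = exp_fit(np.sin, degrees, xs)
approx = sum(ck * np.exp(k * xs) for ck, k in zip(c, degrees))
max_err = float(np.max(np.abs(approx - np.sin(xs))))
```

Four exponential terms already track sin x closely on [0, 1], in the same spirit as a low-degree Taylor polynomial.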
Radioactivity computation of steady-state and pulsed fusion reactors operation
Attaya, H.
1994-06-01
Different mathematical methods are used to calculate the nuclear transmutation in steady-state and pulsed neutron irradiation. These methods are the Schur decomposition, the eigenvector decomposition, and the Pade approximation of the matrix exponential function. In the case of the linear decay chain approximation, a simple algorithm is used to evaluate the transition matrices.
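A textbook diagonal Pade scheme for the matrix exponential, with scaling and squaring, shows the third of the listed methods in miniature; this is a generic sketch, not the code used in the report:

```python
import numpy as np

def expm_pade(A, m=6):
    """exp(A) via the diagonal [m/m] Pade approximant with scaling and squaring."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    # scale A by 2**-s so the Pade approximant is evaluated at a small argument
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(A, np.inf), 1e-16)))))
    As = A / 2**s
    # Pade coefficients c_k = (2m-k)! m! / ((2m)! k! (m-k)!), built by recurrence
    c = [1.0]
    for k in range(1, m + 1):
        c.append(c[-1] * (m - k + 1) / (k * (2 * m - k + 1)))
    P = np.zeros((n, n)); Q = np.zeros((n, n)); X = np.eye(n)
    for k in range(m + 1):
        P += c[k] * X                 # numerator N(As)
        Q += (-1)**k * c[k] * X       # denominator N(-As)
        X = X @ As
    E = np.linalg.solve(Q, P)         # exp(As) ~ N(-As)^{-1} N(As)
    for _ in range(s):                # undo the scaling: exp(A) = exp(As)^(2^s)
        E = E @ E
    return E
```

For the rotation generator [[0, 1], [-1, 0]] this reproduces [[cos 1, sin 1], [-sin 1, cos 1]] to near machine precision.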
Approximate circuits for increased reliability
Hamlet, Jason R.; Mayo, Jackson R.
2015-12-22
Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
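The voting scheme can be sketched in a few lines. The parity "reference circuit" and the per-replica error sets are hypothetical stand-ins, chosen so that no two replicas fail on the same input:

```python
def make_approx(reference, error_inputs):
    """An 'approximate circuit': agrees with the reference except on error_inputs."""
    def circuit(x):
        y = reference(x)
        return y ^ 1 if x in error_inputs else y
    return circuit

def voter(circuits, x):
    """Majority vote over an odd number of single-bit circuit outputs."""
    votes = sum(c(x) for c in circuits)
    return 1 if 2 * votes > len(circuits) else 0

def reference(x):
    return bin(x).count("1") % 2      # parity of a 4-bit input

# each replica is wrong on a different input, so the majority is always right
circuits = [make_approx(reference, errs) for errs in ({3}, {7}, {12})]
```

As long as fewer than half the replicas err on any given input, the voter output matches the reference circuit everywhere.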
Approximate circuits for increased reliability
Hamlet, Jason R.; Mayo, Jackson R.
2015-08-18
Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
State space approximation for general fractional order dynamic systems
NASA Astrophysics Data System (ADS)
Liang, Shu; Peng, Cheng; Liao, Zeng; Wang, Yong
2014-10-01
Approximations for general fractional order dynamic systems are of much theoretical and practical interest. In this paper, a new approximate method for the fractional order integrator is proposed. The poles of the approximate model are unrelated to the order of the integrator. This feature is beneficial when extending the algorithm to systems containing various fractional orders. Then a unified approximate method is derived for general fractional order linear or nonlinear dynamic systems by combining the proposed method with the distributed frequency model approach. Numerical examples are given to show the wide applicability of our method and to illustrate the acceptable accuracy of the approximations.
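The distributed frequency model mentioned above admits a compact sketch: for 0 < alpha < 1 one has s^(-alpha) = sin(alpha*pi)/pi * the integral over mu in (0, inf) of mu^(-alpha)/(s + mu), so a log-spaced quadrature yields a bank of first-order modes whose poles are indeed unrelated to alpha. The grid limits and mode count below are illustrative choices, not the paper's construction:

```python
import math

def frac_integrator(alpha, n_modes=400, u_min=-15.0, u_max=15.0):
    """First-order modes (a_k, mu_k) approximating s**(-alpha), 0 < alpha < 1.
    Discretizes the integral representation on a log grid mu = exp(u)
    with the trapezoidal rule."""
    du = (u_max - u_min) / (n_modes - 1)
    modes = []
    for k in range(n_modes):
        u = u_min + k * du
        wt = du * (0.5 if k in (0, n_modes - 1) else 1.0)
        mu = math.exp(u)
        # integrand mu**(-alpha)/(s+mu) d(mu) becomes mu**(1-alpha)/(s+mu) du
        a = math.sin(alpha * math.pi) / math.pi * mu ** (1.0 - alpha) * wt
        modes.append((a, mu))
    return modes

def evaluate(modes, s):
    """Transfer function of the mode bank at (real) s."""
    return sum(a / (s + mu) for a, mu in modes)
```

At s = 1 and alpha = 0.5 the bank should return s^(-alpha) = 1, and at s = 4 it should return 0.5.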
Mathematical algorithms for approximate reasoning
NASA Technical Reports Server (NTRS)
Murphy, John H.; Chay, Seung C.; Downs, Mary M.
1988-01-01
Most state-of-the-art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlay within the state space, reasoning with assertions that exhibit maximum overlay within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
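In the two-assertion case, the distinct combination rules listed above reduce to one-line formulas. The function names are ours, and the no-knowledge case is the classical Frechet bounds, whose endpoints correspond to the pessimistic and optimistic reasoning modes:

```python
def and_independent(p, q):
    """Conjunction of statistically independent assertions."""
    return p * q

def or_exclusive(p, q):
    """Disjunction of mutually exclusive assertions."""
    return min(1.0, p + q)

def and_fuzzy(p, q):
    """Conjunction under maximum overlay of the state space (fuzzy logic)."""
    return min(p, q)

def or_fuzzy(p, q):
    """Disjunction under maximum overlay (fuzzy logic)."""
    return max(p, q)

def and_bounds(p, q):
    """No dependency knowledge: Frechet bounds on the conjunction.
    The lower endpoint is the pessimistic (worst-case) value,
    the upper endpoint the optimistic (best-case) value."""
    return (max(0.0, p + q - 1.0), min(p, q))
```

Any consistent joint distribution must place P(A and B) inside the interval returned by `and_bounds`; the independent and fuzzy rules pick particular points in it.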
Approximating random quantum optimization problems
NASA Astrophysics Data System (ADS)
Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.
2013-06-01
We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of the quantum satisfiability problem k-body quantum satisfiability (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT. Simulated annealing exhibits metastability in similar “hard” regions of parameter space; and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions, as well as insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.
Parameter Choices for Approximation by Harmonic Splines
NASA Astrophysics Data System (ADS)
Gutting, Martin
2016-04-01
The approximation by harmonic trial functions allows the construction of the solution of boundary value problems in geoscience, e.g., in terms of harmonic splines. Due to their localizing properties regional modeling or the improvement of a global model in a part of the Earth's surface is possible with splines. Fast multipole methods have been developed for some cases of the occurring kernels to obtain a fast matrix-vector multiplication. The main idea of the fast multipole algorithm consists of a hierarchical decomposition of the computational domain into cubes and a kernel approximation for the more distant points. This reduces the numerical effort of the matrix-vector multiplication from quadratic to linear in reference to the number of points for a prescribed accuracy of the kernel approximation. The application of the fast multipole method to spline approximation which also allows the treatment of noisy data requires the choice of a smoothing parameter. We investigate different methods to (ideally automatically) choose this parameter with and without prior knowledge of the noise level. Thereby, the performance of these methods is considered for different types of noise in a large simulation study. Applications to gravitational field modeling are presented as well as the extension to boundary value problems where the boundary is the known surface of the Earth itself.
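One standard way to choose a smoothing parameter without knowing the noise level is generalized cross-validation (GCV), sketched here for a generic ridge-type smoother diagonalized by the SVD. This illustrates only the parameter-choice idea, not the harmonic-spline kernels or multipole machinery of the paper:

```python
import numpy as np

def gcv_score(y, U, s, lam):
    """GCV score n*||(I - A)y||^2 / tr(I - A)^2 for the hat matrix
    A = U diag(s_i^2/(s_i^2 + lam)) U^T; U, s come from the SVD of the design."""
    f = s**2 / (s**2 + lam)          # shrinkage factors of the hat matrix
    resid = y - U @ (f * (U.T @ y))
    n = len(y)
    return n * float(resid @ resid) / (n - f.sum())**2

def choose_lambda(y, X, grid):
    """Pick the grid value minimizing GCV."""
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    return min(grid, key=lambda lam: gcv_score(y, U, s, lam))
```

As lam grows the smoother shrinks everything to zero, and the GCV score tends to ||y||^2 / n, which gives a quick sanity check on an implementation.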
Approximation and compression with sparse orthonormal transforms.
Sezer, Osman Gokhan; Guleryuz, Onur G; Altunbasak, Yucel
2015-08-01
We propose a new transform design method that targets the generation of compression-optimized transforms for next-generation multimedia applications. The fundamental idea behind transform compression is to exploit regularity within signals such that redundancy is minimized subject to a fidelity cost. Multimedia signals, in particular images and video, are well known to contain a diverse set of localized structures, leading to many different types of regularity and to nonstationary signal statistics. The proposed method designs sparse orthonormal transforms (SOTs) that automatically exploit regularity over different signal structures and provides an adaptation method that determines the best representation over localized regions. Unlike earlier work that is motivated by linear approximation constructs and model-based designs that are limited to specific types of signal regularity, our work uses general nonlinear approximation ideas and a data-driven setup to significantly broaden its reach. We show that our SOT designs provide a safe and principled extension of the Karhunen-Loeve transform (KLT) by reducing to the KLT on Gaussian processes and by automatically exploiting non-Gaussian statistics to significantly improve over the KLT on more general processes. We provide an algebraic optimization framework that generates optimized designs for any desired transform structure (multiresolution, block, lapped, and so on) with significantly better n-term approximation performance. For each structure, we propose a new prototype codec and test over a database of images. Simulation results show consistent increase in compression and approximation performance compared with conventional methods. PMID:25823033
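The KLT baseline the paper builds on is easy to reproduce: on Gaussian data, the eigenvectors of the covariance give the best linear n-term approximation. The synthetic smooth-process "patches" below are our own stand-in data, not the paper's image database:

```python
import numpy as np

rng = np.random.default_rng(0)
# correlated 8-dimensional "patches": samples of a smooth Gaussian process
t = np.arange(8)
cov = np.exp(-0.5 * (t[:, None] - t[None, :])**2 / 4.0)
X = rng.multivariate_normal(np.zeros(8), cov, size=2000)

# KLT basis = eigenvectors of the sample covariance, sorted by variance
C = X.T @ X / len(X)
w, V = np.linalg.eigh(C)
V = V[:, np.argsort(w)[::-1]]

def nterm_error(X, V, k):
    """Mean squared error of keeping the k largest-variance KLT coefficients."""
    Y = X @ V[:, :k]            # analysis: project onto the first k basis vectors
    Xhat = Y @ V[:, :k].T       # synthesis: reconstruct from k coefficients
    return float(np.mean((X - Xhat)**2))

errors = [nterm_error(X, V, k) for k in range(1, 9)]
```

Because the discarded energy is the sum of the smallest eigenvalues, the n-term error is nonincreasing in k and vanishes at k = 8.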
Quickly Approximating the Distance Between Two Objects
NASA Technical Reports Server (NTRS)
Hammen, David
2009-01-01
A method of quickly approximating the distance between two objects (one smaller, regarded as a point; the other larger and complexly shaped) has been devised for use in computationally simulating motions of the objects for the purpose of planning the motions to prevent collisions.
Private Medical Record Linkage with Approximate Matching
Durham, Elizabeth; Xue, Yuan; Kantarcioglu, Murat; Malin, Bradley
2010-01-01
Federal regulations require patient data to be shared for reuse in a de-identified manner. However, disparate providers often share data on overlapping populations, such that a patient’s record may be duplicated or fragmented in the de-identified repository. To perform unbiased statistical analysis in a de-identified setting, it is crucial to integrate records that correspond to the same patient. Private record linkage techniques have been developed, but most methods are based on encryption and preclude the ability to determine similarity, decreasing the accuracy of record linkage. The goal of this research is to integrate a private string comparison method that uses Bloom filters to provide an approximate match, with a medical record linkage algorithm. We evaluate the approach with 100,000 patients’ identifiers and demographics from the Vanderbilt University Medical Center. We demonstrate that the private approximation method achieves sensitivity that is, on average, 3% higher than previous methods. PMID:21346965
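The Bloom-filter string comparison can be sketched as follows, using a common construction: hash each padded bigram k times into an m-bit filter, then compare filters with the Dice coefficient. The parameters m and k and the hashing scheme are illustrative, not those of the paper:

```python
import hashlib

def bigrams(name):
    """Character bigrams of a name, padded so end characters count."""
    s = f"_{name.lower()}_"
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bloom(name, m=1000, k=5):
    """Encode a name's bigram set into an m-bit Bloom filter with k hashes,
    represented here as the set of set bit positions."""
    bits = set()
    for g in bigrams(name):
        for i in range(k):
            h = hashlib.sha1(f"{i}:{g}".encode()).hexdigest()
            bits.add(int(h, 16) % m)
    return bits

def dice(a, b):
    """Dice coefficient between two filters: an approximate string similarity."""
    return 2 * len(a & b) / (len(a) + len(b))
```

Similar spellings share most bigrams and therefore most filter bits, so the Dice score degrades gracefully with typographical variation instead of collapsing to an exact-match test.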
Congruence Approximations for Entrophy Endowed Hyperbolic Systems
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Saini, Subhash (Technical Monitor)
1998-01-01
Building upon the standard symmetrization theory for hyperbolic systems of conservation laws, congruence properties of the symmetrized system are explored. These congruence properties suggest variants of several stabilized numerical discretization procedures for hyperbolic equations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) that benefit computationally from congruence approximation. Specifically, it becomes straightforward to construct the spatial discretization and Jacobian linearization for these schemes (given a small amount of derivative information) for possible use in Newton's method, discrete optimization, homotopy algorithms, etc. Some examples will be given for the compressible Euler equations and the nonrelativistic MHD equations using linear and quadratic spatial approximation.
NASA Astrophysics Data System (ADS)
Jong, Un-Gi; Yu, Chol-Jun; Ri, Jin-Song; Kim, Nam-Hyok; Ri, Guk-Chol
2016-09-01
Extensive studies have demonstrated the promising capability of the organic-inorganic hybrid halide perovskite CH3NH3PbI3 in solar cells with a high power conversion efficiency exceeding 20%. However, the intrinsic as well as extrinsic instabilities of this material remain the major challenge to the commercialization of perovskite-based solar cells. Mixing halides is expected to resolve this problem. Here, we investigate the effect of chemical substitution in the position of the halogen atom on the structural, electronic, and optical properties of mixed halide perovskites CH3NH3Pb(I1-xBrx)3 with a pseudocubic phase using the virtual crystal approximation method within density functional theory. With an increase of Br content x from 0.0 to 1.0, the lattice constant decreases in proportion to x as a(x) = 6.420 - 0.333x (Å), while the band gap and the exciton binding energy increase as the quadratic function Eg(x) = 1.542 + 0.374x + 0.185x^2 (eV) and the linear function Eb(x) = 0.045 + 0.057x (eV), respectively. The photoabsorption coefficients are also calculated, showing a blueshift of the absorption onsets for higher Br contents. We calculate the phase decomposition energy of these materials and analyze the electronic charge density difference to estimate the material stability. Based on the calculated results, we suggest that the best match between efficiency and stability can be achieved at x ≈ 0.2 in CH3NH3Pb(I1-xBrx)3 perovskites.
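The reported fits are directly usable for quick composition estimates; the helper names are ours:

```python
def lattice_constant(x):
    """a(x) in angstroms, from the reported linear fit a(x) = 6.420 - 0.333x."""
    return 6.420 - 0.333 * x

def band_gap(x):
    """Eg(x) in eV, from the reported fit Eg(x) = 1.542 + 0.374x + 0.185x^2."""
    return 1.542 + 0.374 * x + 0.185 * x**2

def exciton_binding(x):
    """Eb(x) in eV, from the reported linear fit Eb(x) = 0.045 + 0.057x."""
    return 0.045 + 0.057 * x
```

At the suggested composition x ≈ 0.2 these give Eg ≈ 1.62 eV and Eb ≈ 0.056 eV.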
Efficient computational methods for electromagnetic imaging with applications to 3D magnetotellurics
NASA Astrophysics Data System (ADS)
Kordy, Michal Adam
The motivation for this work is the forward and inverse problem for magnetotellurics, a frequency domain electromagnetic remote-sensing geophysical method used in mineral, geothermal, and groundwater exploration. The dissertation consists of four papers. In the first paper, we prove the existence and uniqueness of a representation of any vector field in H(curl) by a vector lying in H(curl) and H(div). It allows us to represent electric or magnetic fields by another vector field, for which nodal finite element approximation may be used in the case of non-constant electromagnetic properties. With this approach, the system matrix does not become ill-posed for low frequency. In the second paper, we consider hexahedral finite element approximation of an electric field for the magnetotelluric forward problem. The near-null space of the system matrix for low frequencies makes the numerical solution unstable in the air. We show that the proper solution may be obtained by applying a correction on the null space of the curl. It is done by solving a Poisson equation using discrete Helmholtz decomposition. We parallelize the forward code on a multicore workstation with large RAM. In the next paper, we use the forward code in the inversion. Regularization of the inversion is done by using the second norm of the logarithm of conductivity. The data space Gauss-Newton approach allows for significant savings in memory and computational time. We show the efficiency of the method by considering a number of synthetic inversions and we apply it to real data collected in the Cascade Mountains. The last paper considers a cross-frequency interpolation of the forward response as well as the Jacobian. We consider Pade approximation through model order reduction and rational Krylov subspace. The interpolating frequencies are chosen adaptively in order to minimize the maximum error of interpolation. Two error indicator functions are compared. We prove a theorem of almost always lucky failure in the
The Cell Cycle Switch Computes Approximate Majority
NASA Astrophysics Data System (ADS)
Cardelli, Luca; Csikász-Nagy, Attila
2012-09-01
Both computational and biological systems have to make decisions about switching from one state to another. The `Approximate Majority' computational algorithm provides the asymptotically fastest way to reach a common decision by all members of a population between two possible outcomes, where the decision approximately matches the initial relative majority. The network that regulates the mitotic entry of the cell-cycle in eukaryotes also makes a decision before it induces early mitotic processes. Here we show that the switch from inactive to active forms of the mitosis promoting Cyclin Dependent Kinases is driven by a system that is related to both the structure and the dynamics of the Approximate Majority computation. We investigate the behavior of these two switches by deterministic, stochastic and probabilistic methods and show that the steady states and temporal dynamics of the two systems are similar and they are exchangeable as components of oscillatory networks.
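The Approximate Majority protocol itself is short enough to simulate. The rule set below is the standard three-state version with an undecided state U (a decided agent knocks an opposing responder into U, and decided agents recruit undecided ones); it is an illustrative sketch, not the paper's cell-cycle model:

```python
import random

def approximate_majority(n_a, n_b, seed=0, max_steps=1_000_000):
    """Simulate pairwise random interactions until consensus (or step limit)."""
    rng = random.Random(seed)
    pop = ["A"] * n_a + ["B"] * n_b
    counts = {"A": n_a, "B": n_b, "U": 0}
    for _ in range(max_steps):
        i, j = rng.sample(range(len(pop)), 2)   # i initiates, j responds
        x, y = pop[i], pop[j]
        if {x, y} == {"A", "B"}:
            counts[y] -= 1; counts["U"] += 1    # initiator wins the clash
            pop[j] = "U"
        elif x != "U" and y == "U":
            counts["U"] -= 1; counts[x] += 1    # recruitment of an undecided agent
            pop[j] = x
        if counts["U"] == 0 and (counts["A"] == 0 or counts["B"] == 0):
            break                               # consensus reached
    return counts
```

With a clear initial majority (say 60 A versus 40 B) the consensus almost always matches it; near a tie, either outcome can win, which is the "approximate" in Approximate Majority.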
Laplace approximation in measurement error models.
Battauz, Michela
2011-05-01
Likelihood analysis for regression models with measurement errors in explanatory variables typically involves integrals that do not have a closed-form solution. In this case, numerical methods such as Gaussian quadrature are generally employed. However, when the dimension of the integral is large, these methods become computationally demanding or even unfeasible. This paper proposes the use of the Laplace approximation to deal with measurement error problems when the likelihood function involves high-dimensional integrals. The cases considered are generalized linear models with multiple covariates measured with error and generalized linear mixed models with measurement error in the covariates. The asymptotic order of the approximation and the asymptotic properties of the Laplace-based estimator for these models are derived. The method is illustrated using simulations and real-data analysis.
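The one-dimensional version of the approximation can be sketched in a few lines: locate the maximizer of the log-integrand by Newton's method, then replace the integrand with its matching Gaussian. The sanity check below against Stirling's formula (Γ(n+1) with log-integrand g(t) = n ln t − t) is illustrative, not one of the paper's measurement-error models:

```python
import math

def laplace_integral(g, dg, d2g, t0, tol=1e-12):
    """Laplace approximation to ∫ exp(g(t)) dt with an interior maximum:
    find t̂ with dg(t̂)=0 by Newton's method from t0, then return
    exp(g(t̂)) * sqrt(2π / |g''(t̂)|)."""
    t = t0
    for _ in range(100):
        step = dg(t) / d2g(t)
        t -= step
        if abs(step) < tol:
            break
    return math.exp(g(t)) * math.sqrt(2.0 * math.pi / abs(d2g(t)))

# illustrative check: Γ(n+1) = ∫ exp(n ln t - t) dt, n = 10
n = 10
approx = laplace_integral(lambda t: n * math.log(t) - t,
                          lambda t: n / t - 1.0,
                          lambda t: -n / t ** 2,
                          t0=5.0)
# compare with the exact value 10! = 3628800 (Stirling's formula)
```

The relative error of the leading-order approximation is O(1/n), which is the asymptotic-order statement the paper makes precise for its estimators.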
Signal recovery by best feasible approximation.
Combettes, P L
1993-01-01
The objective of set theoretical signal recovery is to find a feasible signal in the form of a point in the intersection S of sets modeling the information available about the problem. For problems in which the true signal is known to lie near a reference signal r, the solution should not be just any feasible point, but one which best approximates r, i.e., a projection of r onto S. Such a solution cannot be obtained by the feasibility algorithms currently in use, e.g., the method of projections onto convex sets (POCS) and its offspring. Methods for projecting a point onto the intersection of closed and convex sets in a Hilbert space are introduced and applied to signal recovery by best feasible approximation of a reference signal. These algorithms are closely related to the above projection methods, to which they add little computational complexity.
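One classical scheme of this kind is Dykstra's algorithm: a POCS-like cyclic projection iteration augmented with a correction increment per set, which makes the iterates converge to the exact projection of r onto the intersection rather than to an arbitrary feasible point. A minimal 2-D sketch (the two sets, reference point, and iteration count are illustrative, not from the paper):

```python
import math

def project_disk(p):
    """Projection onto the closed unit disk."""
    x, y = p
    r = math.hypot(x, y)
    return (x, y) if r <= 1.0 else (x / r, y / r)

def project_halfplane(p):
    """Projection onto the halfplane {x >= 0.5} (illustrative constraint)."""
    x, y = p
    return (max(x, 0.5), y)

def dykstra(r, projections, iters=10000):
    """Dykstra's algorithm: cyclic projections with one stored increment
    per set; converges to the projection of r onto the intersection."""
    x = r
    inc = [(0.0, 0.0)] * len(projections)
    for _ in range(iters):
        for k, proj in enumerate(projections):
            y = (x[0] + inc[k][0], x[1] + inc[k][1])
            x = proj(y)
            inc[k] = (y[0] - x[0], y[1] - x[1])
    return x

# best feasible approximation of the reference point r = (-1, 2)
p = dykstra((-1.0, 2.0), [project_disk, project_halfplane])
```

For this configuration the projection onto the lens-shaped intersection is the corner point (0.5, √3/2); a plain feasibility iteration is only guaranteed to return some feasible point, which is exactly the distinction the abstract draws.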
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1984-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems. Previously announced in STAR as N83-33589
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1982-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems.
Pythagorean Approximations and Continued Fractions
ERIC Educational Resources Information Center
Peralta, Javier
2008-01-01
In this article, we will show that the Pythagorean approximations of √2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…
Error Bounds for Interpolative Approximations.
ERIC Educational Resources Information Center
Gal-Ezer, J.; Zwas, G.
1990-01-01
Elementary error estimation in the approximation of functions by polynomials as a computational assignment, error-bounding functions and error bounds, and the choice of interpolation points are discussed. Precalculus and computer instruction are used on some of the calculations. (KR)
Chemical Laws, Idealization and Approximation
NASA Astrophysics Data System (ADS)
Tobin, Emma
2013-07-01
This paper examines the notion of laws in chemistry. Vihalemm (Found Chem 5(1):7-22, 2003) argues that the laws of chemistry are fundamentally the same as the laws of physics: they are all ceteris paribus laws which are true "in ideal conditions". In contrast, Scerri (2000) contends that the laws of chemistry are fundamentally different from the laws of physics, because they involve approximations. Christie (Stud Hist Philos Sci 25:613-629, 1994) and Christie and Christie (Of minds and molecules. Oxford University Press, New York, pp. 34-50, 2000) agree that the laws of chemistry are operationally different from the laws of physics, but claim that the distinction between exact and approximate laws is too simplistic to taxonomise them. Approximations in chemistry involve diverse kinds of activity, and often what counts as a scientific law in chemistry is dictated by the context of its use in scientific practice. This paper addresses the question of what makes chemical laws distinctive independently of the separate question as to how they are related to the laws of physics. From an analysis of some candidate ceteris paribus laws in chemistry, this paper argues that there are two distinct kinds of ceteris paribus laws in chemistry: idealized and approximate chemical laws. Thus, while Christie (1994) and Christie and Christie (2000) are correct to point out that the candidate generalisations in chemistry are diverse and heterogeneous, a distinction between idealizations and approximations can nevertheless be used to successfully taxonomise them.
Testing the frozen flow approximation
NASA Technical Reports Server (NTRS)
Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro
1993-01-01
We investigate the accuracy of the frozen-flow approximation (FFA), recently proposed by Matarrese et al. (1992), for following the nonlinear evolution of cosmological density fluctuations under gravitational instability. We compare a number of statistics between results of the FFA and N-body simulations, including those used by Melott, Pellman & Shandarin (1993) to test the Zel'dovich approximation. The FFA performs reasonably well in a statistical sense, e.g. in reproducing the counts-in-cell distribution at small scales, but it does poorly in the cross-correlation with N-body results, which means it is generally not moving mass to the right place, especially in models with high small-scale power.
Exponential Approximations Using Fourier Series Partial Sums
NASA Technical Reports Server (NTRS)
Banerjee, Nana S.; Geer, James F.
1997-01-01
The problem of accurately reconstructing a piecewise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^(-M-2)), and the associated jump of the kth derivative of f is approximated to within O(N^(-M-1+k)), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
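The first step, fitting the asymptotic form of the coefficients, can be sketched for the simplest case: a single jump discontinuity (M = 0) in an otherwise smooth periodic function. For one interior jump of size [f] at x0, c_k ~ [f] e^(-ik x0)/(2πik), so the ratio of consecutive scaled coefficients recovers x0 and their magnitude recovers the jump. The sawtooth test function and tolerances below are illustrative, not the paper's examples:

```python
import cmath
import math

TWO_PI = 2.0 * math.pi

def coeff_piece(g, a, b, k, m=2000):
    """∫_a^b g(x) e^{-ikx} dx by composite Simpson's rule on a smooth piece."""
    h = (b - a) / m
    def p(x):
        return g(x) * cmath.exp(-1j * k * x)
    s = p(a) + p(b)
    for j in range(1, m):
        s += p(a + j * h) * (4 if j % 2 else 2)
    return s * h / 3.0

# sawtooth with a single interior jump of size -2π at x0 = 2.0
x0 = 2.0
pieces = [(0.0, x0, lambda x: x - x0 + TWO_PI),
          (x0, TWO_PI, lambda x: x - x0)]

def c(k):
    """Fourier coefficient c_k, integrating smooth pieces separately."""
    return sum(coeff_piece(g, a, b, k) for a, b, g in pieces) / TWO_PI

k = 6
ratio = c(k + 1) * (k + 1) / (c(k) * k)      # ~ e^{-i x0}
x0_est = (-cmath.phase(ratio)) % TWO_PI      # estimated jump location
jump_est = (TWO_PI * 1j * k * c(k) * cmath.exp(1j * k * x0_est)).real
```

The paper's method generalizes this idea: it fits the full asymptotic expansion for several jumps and M derivative orders by least squares, rather than reading one jump off a single coefficient ratio.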
Approximate reasoning using terminological models
NASA Technical Reports Server (NTRS)
Yen, John; Vaidya, Nitin
1992-01-01
Term Subsumption Systems (TSS) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSSs have very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. Finally, the issue of inconsistency in truth values due to inheritance is addressed using justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSS. Also, as definitional knowledge is explicit and separate from heuristic knowledge for plausible inferences, the maintainability of expert systems could be improved.
Spline Approximation of Thin Shell Dynamics
NASA Technical Reports Server (NTRS)
delRosario, R. C. H.; Smith, R. C.
1996-01-01
A spline-based method for approximating thin shell dynamics is presented here. While the method is developed in the context of the Donnell-Mushtari thin shell equations, it can be easily extended to the Byrne-Flugge-Lur'ye equations or other models for shells of revolution as warranted by applications. The primary requirements for the method include accuracy, flexibility and efficiency in smart material applications. To accomplish this, the method was designed to be flexible with regard to boundary conditions, material nonhomogeneities due to sensors and actuators, and inputs from smart material actuators such as piezoceramic patches. The accuracy of the method was also of primary concern, both to guarantee full resolution of structural dynamics and to facilitate the development of PDE-based controllers which ultimately require real-time implementation. Several numerical examples provide initial evidence demonstrating the efficacy of the method.
Microscopic justification of the equal filling approximation
Perez-Martin, Sara; Robledo, L. M.
2008-07-15
The equal filling approximation, a procedure widely used in mean-field calculations to treat the dynamics of odd nuclei in a time-reversal invariant way, is justified as the consequence of a variational principle over an average energy functional. The ideas of statistical quantum mechanics are employed in the justification. As an illustration of the method, the ground and lowest-lying states of some octupole deformed radium isotopes are computed.
A coastal ocean model with subgrid approximation
NASA Astrophysics Data System (ADS)
Walters, Roy A.
2016-06-01
A wide variety of coastal ocean models exist, each having attributes that reflect specific application areas. The model presented here is based on finite element methods with unstructured grids containing triangular and quadrilateral elements. The model optimizes robustness, accuracy, and efficiency by using semi-implicit methods in time in order to remove the most restrictive stability constraints, by using a semi-Lagrangian advection approximation to remove Courant number constraints, and by solving a wave equation at the discrete level for enhanced efficiency. An added feature is the approximation of the effects of subgrid objects. Here, the Reynolds-averaged Navier-Stokes equations and the incompressibility constraint are volume averaged over one or more computational cells. This procedure gives rise to new terms which must be approximated as a closure problem. A study of tidal power generation is presented as an example of this method. A problem that arises is specifying appropriate thrust and power coefficients for the volume averaged velocity when they are usually referenced to free stream velocity. A new contribution here is the evaluation of three approaches to this problem: an iteration procedure and two mapping formulations. All three sets of results for thrust (form drag) and power are in reasonable agreement.
Median Approximations for Genomes Modeled as Matrices.
Zanetti, Joao Paulo Pereira; Biller, Priscila; Meidanis, Joao
2016-04-01
The genome median problem is an important problem in phylogenetic reconstruction under rearrangement models. It can be stated as follows: Given three genomes, find a fourth that minimizes the sum of the pairwise rearrangement distances between it and the three input genomes. In this paper, we model genomes as matrices and study the matrix median problem using the rank distance. It is known that, for any metric distance, at least one of the corners is a [Formula: see text]-approximation of the median. Our results allow us to compute up to three additional matrix median candidates, all of them with approximation ratios at least as good as the best corner, when the input matrices come from genomes. We also show a class of instances where our candidates are optimal. From the application point of view, it is usually more interesting to locate medians farther from the corners, and therefore, these new candidates are potentially more useful. In addition to the approximation algorithm, we suggest a heuristic to get a genome from an arbitrary square matrix. This is useful to translate the results of our median approximation algorithm back to genomes, and it has good results in our tests. To assess the relevance of our approach in the biological context, we ran simulated evolution tests and compared our solutions to those of an exact DCJ median solver. The results show that our method is capable of producing very good candidates. PMID:27072561
Achievements and Problems in Diophantine Approximation Theory
NASA Astrophysics Data System (ADS)
Sprindzhuk, V. G.
1980-08-01
Contents:
Introduction
I. Metrical theory of approximation on manifolds: §1. The basic problem; §2. Brief survey of results; §3. The principal conjecture
II. Metrical theory of transcendental numbers: §1. Mahler's classification of numbers; §2. Metrical characterization of numbers with a given type of approximation; §3. Further problems
III. Approximation of algebraic numbers by rationals: §1. Simultaneous approximations; §2. The inclusion of p-adic metrics; §3. Effective improvements of Liouville's inequality
IV. Estimates of linear forms in logarithms of algebraic numbers: §1. The basic method; §2. Survey of results; §3. Estimates in the p-adic metric
V. Diophantine equations: §1. Ternary exponential equations; §2. The Thue and Thue-Mahler equations; §3. Equations of hyperelliptic type; §4. Algebraic-exponential equations
VI. The arithmetic structure of polynomials and the class number: §1. The greatest prime divisor of a polynomial in one variable; §2. The greatest prime divisor of a polynomial in two variables; §3. Square-free divisors of polynomials and the class number; §4. The general problem of the size of the class number
Conclusion
References
Beyond the Born approximation in one-dimensional profile reconstruction
NASA Astrophysics Data System (ADS)
Trantanella, Charles J.; Dudley, Donald G.; Nabulsi, Khalid A.
1995-07-01
A new method of one-dimensional profile reconstruction is presented. The method is based on an extension to the Born approximation and relates measurements of the scattered field to the Fourier transform of the slab profile. Since the Born and our new approximations are most valid at low frequency, we utilize superresolution to recover high-frequency information and then invert for the slab profile. Finally, we vary different parameters and examine the resulting reconstructions. Keywords: approximation, profile reconstruction, superresolution.
Discrete extrinsic curvatures and approximation of surfaces by polar polyhedra
NASA Astrophysics Data System (ADS)
Garanzha, V. A.
2010-01-01
The duality principle for approximation of geometrical objects (also known as the Eudoxus exhaustion method) was extended and perfected by Archimedes in his famous treatise “Measurement of a Circle”. The main idea of Archimedes' approximation method is to construct a sequence of pairs of inscribed and circumscribed polygons (polyhedra) which approximate a curvilinear convex body. This sequence allows one to approximate the length of a curve, as well as the areas and volumes of bodies, and to obtain error estimates for the approximation. In this work it is shown that a sequence of pairs of locally polar polyhedra allows one to construct a piecewise-affine approximation to the spherical Gauss map, convergent pointwise approximations to the mean and Gaussian curvatures, and natural discretizations of bending energies. The suggested approach can be applied to nonconvex surfaces and in higher dimensions.
Approximately Independent Features of Languages
NASA Astrophysics Data System (ADS)
Holman, Eric W.
To facilitate the testing of models for the evolution of languages, the present paper offers a set of linguistic features that are approximately independent of each other. To find these features, the adjusted Rand index (R′) is used to estimate the degree of pairwise relationship among 130 linguistic features in a large published database. Many of the R′ values prove to be near zero, as predicted for independent features, and a subset of 47 features is found with an average R′ of -0.0001. These 47 features are recommended for use in statistical tests that require independent units of analysis.
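The adjusted Rand index compares two categorical labelings of the same items and has expected value zero for independent labelings, which is why near-zero values signal approximately independent features. A standard-library sketch from the pair-counting contingency table (variable names illustrative):

```python
import math
from collections import Counter

def adjusted_rand_index(u, v):
    """Adjusted Rand index between two labelings of the same items:
    (index - expected index) / (max index - expected index)."""
    assert len(u) == len(v)
    n = len(u)
    sum_cells = sum(math.comb(c, 2) for c in Counter(zip(u, v)).values())
    sum_rows = sum(math.comb(c, 2) for c in Counter(u).values())
    sum_cols = sum(math.comb(c, 2) for c in Counter(v).values())
    expected = sum_rows * sum_cols / math.comb(n, 2)
    max_index = (sum_rows + sum_cols) / 2.0
    if max_index == expected:        # degenerate case, e.g. one cluster each
        return 1.0
    return (sum_cells - expected) / (max_index - expected)
```

Identical partitions score 1.0 regardless of how the categories are named; unrelated partitions score near 0.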
The structural physical approximation conjecture
NASA Astrophysics Data System (ADS)
Shultz, Fred
2016-01-01
It was conjectured that the structural physical approximation (SPA) of an optimal entanglement witness is separable (or equivalently, that the SPA of an optimal positive map is entanglement breaking). This conjecture was disproved, first for indecomposable maps and more recently for decomposable maps. The arguments in both cases are sketched along with important related results. This review includes background material on topics including entanglement witnesses, optimality, duality of cones, decomposability, and the statement and motivation for the SPA conjecture so that it should be accessible for a broad audience.
Generalized Gradient Approximation Made Simple
Perdew, J.P.; Burke, K.; Ernzerhof, M.
1996-10-01
Generalized gradient approximations (GGAs) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. © 1996 The American Physical Society.
Structural design utilizing updated, approximate sensitivity derivatives
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.
1993-01-01
A method to improve the computational efficiency of structural optimization algorithms is investigated. In this method, the calculations of 'exact' sensitivity derivatives of constraint functions are performed only at selected iterations during the optimization process. The sensitivity derivatives utilized within other iterations are approximate derivatives which are calculated using an inexpensive derivative update formula. Optimization results are presented for an analytic optimization problem (i.e., one having simple polynomial expressions for the objective and constraint functions) and for two structural optimization problems. The structural optimization results indicate that up to a factor of three improvement in computation time is possible when using the updated sensitivity derivatives.
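The abstract does not reproduce its update formula; one standard inexpensive choice for this role is Broyden's rank-one secant update, which revises the stored sensitivity matrix so it reproduces the most recently observed change in the constraint functions. A sketch with illustrative data (an assumption, not necessarily the paper's formula):

```python
def broyden_update(J, dx, df):
    """Rank-one Broyden update of a Jacobian estimate J (list of rows):
    the returned matrix satisfies the secant condition J_new @ dx = df
    while changing J as little as possible in the Frobenius norm."""
    n_rows, n_cols = len(J), len(dx)
    J_dx = [sum(J[i][j] * dx[j] for j in range(n_cols)) for i in range(n_rows)]
    denom = sum(d * d for d in dx)
    return [[J[i][j] + (df[i] - J_dx[i]) * dx[j] / denom
             for j in range(n_cols)] for i in range(n_rows)]

# after a design step dx with observed constraint change df, refresh the
# previous sensitivity estimate instead of recomputing exact derivatives
J_old = [[1.0, 0.0], [0.0, 1.0]]
J_new = broyden_update(J_old, dx=[1.0, 1.0], df=[2.0, 0.0])
```

The update costs one matrix-vector product per iteration, versus a full finite-difference or adjoint sensitivity analysis for the exact derivatives, which is the source of the reported speedup.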
The blind leading the blind: Mutual refinement of approximate theories
NASA Technical Reports Server (NTRS)
Kedar, Smadar T.; Bresina, John L.; Dent, C. Lisa
1991-01-01
The mutual refinement theory, a method for refining world models in a reactive system, is described. The method detects failures, explains their causes, and repairs the approximate models which cause the failures. The approach focuses on using one approximate model to refine another.
Plasma Physics Approximations in Ares
Managan, R. A.
2015-01-08
Lee & More derived analytic forms for the transport properties of a plasma. Many hydro-codes use their formulae for electrical and thermal conductivity. The coefficients are complex functions of Fermi-Dirac integrals, F_n(μ/θ), the chemical potential, μ or ζ = ln(1+e^{μ/θ}), and the temperature, θ = kT. Since these formulae are expensive to compute, rational function approximations were fit to them. Approximations are also used to find the chemical potential, either μ or ζ. The fits use ζ as the independent variable instead of μ/θ. New fits are provided for A^α(ζ), A^β(ζ), ζ, f(ζ) = (1 + e^{-μ/θ})F_{1/2}(μ/θ), F_{1/2}'/F_{1/2}, F_c^α, and F_c^β. In each case the relative error of the fit is minimized, since the functions can vary by many orders of magnitude. The new fits are designed to exactly preserve the limiting values in the non-degenerate and highly degenerate limits, i.e. as ζ → 0 or ∞. The original fits due to Lee & More and George Zimmerman are presented for comparison.
Wavelet Approximation in Data Assimilation
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Atlas, Robert (Technical Monitor)
2002-01-01
Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
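The compression step, representing a field in a wavelet basis and keeping only the largest-magnitude coefficients, can be sketched with the simplest orthonormal wavelet, the Haar basis. The signal, basis, and 5% retention threshold below are illustrative, not the ones used in the assimilation:

```python
import math

def haar_forward(x):
    """Multilevel orthonormal Haar transform; len(x) must be a power of two.
    Output layout: [coarsest average, then details, finest scale last]."""
    out = list(x)
    n = len(out)
    while n > 1:
        half = n // 2
        tmp = out[:n]
        for i in range(half):
            a, b = tmp[2 * i], tmp[2 * i + 1]
            out[i] = (a + b) / math.sqrt(2)
            out[half + i] = (a - b) / math.sqrt(2)
        n = half
    return out

def haar_inverse(c):
    """Inverse of haar_forward (exact, since the transform is orthonormal)."""
    c = list(c)
    n = 1
    while n < len(c):
        s, d = c[:n], c[n:2 * n]
        for i in range(n):
            c[2 * i] = (s[i] + d[i]) / math.sqrt(2)
            c[2 * i + 1] = (s[i] - d[i]) / math.sqrt(2)
        n *= 2
    return c

# illustrative 'correlation-like' signal: smooth background + local feature
N = 256
signal = [math.exp(-((i / N - 0.3) / 0.05) ** 2)
          + 0.2 * math.cos(2 * math.pi * i / N) for i in range(N)]
coeffs = haar_forward(signal)
# keep only the 5% largest-magnitude coefficients, zero the rest
kept = set(sorted(range(N), key=lambda i: -abs(coeffs[i]))[:N // 20])
recon = haar_inverse([coeffs[i] if i in kept else 0.0 for i in range(N)])
```

Because the basis functions are localized, the few retained coefficients capture both the smooth background and the narrow feature, which is the property the paper exploits for covariance fields.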
Approximate solutions for certain bidomain problems in electrocardiography
NASA Astrophysics Data System (ADS)
Johnston, Peter R.
2008-10-01
The simulation of problems in electrocardiography using the bidomain model for cardiac tissue often creates issues with satisfaction of the boundary conditions required to obtain a solution. Recent studies have proposed approximate methods for solving such problems by satisfying the boundary conditions only approximately. This paper presents an analysis of their approximations using a similar method, but one which ensures that the boundary conditions are satisfied during the whole solution process. Also considered are additional functional forms, used in the approximate solutions, which are more appropriate to specific boundary conditions. The analysis shows that the approximations introduced by Patel and Roth [Phys. Rev. E 72, 051931 (2005)] generally give accurate results. However, there are certain situations where functional forms based on the geometry of the problem under consideration can give improved approximations. It is also demonstrated that the recent methods are equivalent to different approaches to solving the same problems introduced 20 years earlier.
Analytic approximation to randomly oriented spheroid extinction
NASA Astrophysics Data System (ADS)
Evans, B. T. N.; Fournier, G. R.
1993-12-01
The estimation of electromagnetic extinction through dust or other nonspherical atmospheric aerosols and hydrosols is an essential first step in the evaluation of the performance of all electro-optic systems. Investigations were conducted to reduce the computational burden in calculating the extinction from nonspherical particles. An analytic semi-empirical approximation to the extinction efficiency Q_ext for randomly oriented spheroids, based on an extension of the anomalous diffraction formula, is given and compared with the extended boundary condition or T-matrix method. This will allow for better and more general modeling of obscurants. Using this formula, Q_ext can be evaluated over 10,000 times faster than with previous methods. This approximation has been verified for complex refractive indices m = n - ik, where n ranges from one to infinity and k from zero to infinity, and aspect ratios of 0.2 to 5. It is believed that the approximation is uniformly valid over all size parameters and aspect ratios. It has the correct Rayleigh, refractive index, and large particle asymptotic behaviors. The accuracy and limitations of this formula are extensively discussed.
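The anomalous diffraction baseline being extended is van de Hulst's closed form for a nonabsorbing sphere, where ρ = 2x(n−1) is the phase lag of the central ray and x is the size parameter. A sketch for orientation (the classical sphere formula, not the paper's spheroid extension; parameter values illustrative):

```python
import math

def q_ext_adt_sphere(x, n):
    """van de Hulst anomalous diffraction extinction efficiency for a
    nonabsorbing sphere of size parameter x and real refractive index n."""
    rho = 2.0 * x * (n - 1.0)   # phase shift of the central ray
    if rho == 0.0:
        return 0.0
    return (2.0 - (4.0 / rho) * math.sin(rho)
                + (4.0 / rho ** 2) * (1.0 - math.cos(rho)))
```

Q_ext oscillates around its large-particle limit of 2 (the extinction paradox), with the first interference maximum near ρ ≈ 4.1; evaluating such a closed form is what makes the approach orders of magnitude faster than a T-matrix computation.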
Hamilton's Principle and Approximate Solutions to Problems in Classical Mechanics
ERIC Educational Resources Information Center
Schlitt, D. W.
1977-01-01
Shows how to use the Ritz method for obtaining approximate solutions to problems expressed in variational form directly from the variational equation. Application of this method to classical mechanics is given. (MLH)
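As a concrete instance of the Ritz procedure, take the functional J[u] = ∫₀¹ (u′²/2 − u) dx, whose stationarity condition is −u″ = 1 with u(0) = u(1) = 0, and minimize over a one-parameter trial family; the trial family and search grid below are illustrative, not from the article:

```python
def ritz_energy(c, m=1000):
    """J[u] = ∫_0^1 (u'^2 / 2 - u) dx for the trial u = c*x*(1-x),
    evaluated with the midpoint rule."""
    h = 1.0 / m
    total = 0.0
    for i in range(m):
        x = (i + 0.5) * h
        u = c * x * (1.0 - x)
        du = c * (1.0 - 2.0 * x)
        total += (0.5 * du * du - u) * h
    return total

# one-dimensional minimization over the Ritz coefficient by a grid scan
c_star = min((i / 1000.0 for i in range(1001)), key=ritz_energy)
```

Here the exact solution u = x(1−x)/2 happens to lie in the trial family, so the Ritz minimizer recovers c = 1/2; in general the method returns the best approximation available within the chosen trial space.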
Approximate inverse preconditioners for general sparse matrices
Chow, E.; Saad, Y.
1994-12-31
Preconditioned Krylov subspace methods are often very efficient in solving sparse linear systems that arise from the discretization of elliptic partial differential equations. However, for general sparse indefinite matrices, the usual ILU preconditioners often fail because the resulting factors L and U give rise to unstable forward and backward sweeps. In such cases, alternative preconditioners based on approximate inverses may be attractive. We are currently developing a number of such preconditioners based on iterating on each column to get the approximate inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., we must use sparse-matrix by sparse-vector type operations. We will discuss a few options and compare their performance on standard problems from the Harwell-Boeing collection.
Approximated solutions to Born-Infeld dynamics
NASA Astrophysics Data System (ADS)
Ferraro, Rafael; Nigro, Mauro
2016-02-01
The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.
Planetary ephemerides approximation for radar astronomy
NASA Technical Reports Server (NTRS)
Sadr, R.; Shahshahani, M.
1991-01-01
The planetary ephemerides approximation for radar astronomy is discussed, and, in particular, the effect of this approximation on the performance of the programmable local oscillator (PLO) used in Goldstone Solar System Radar is presented. Four different approaches are considered and it is shown that the Gram polynomials outperform the commonly used technique based on Chebyshev polynomials. These methods are used to analyze the mean square, the phase error, and the frequency tracking error in the presence of the worst case Doppler shift that one may encounter within the solar system. It is shown that in the worst case the phase error is under one degree and the frequency tracking error less than one hertz when the frequency to the PLO is updated every millisecond.
Some approximation concepts for structural synthesis
NASA Technical Reports Server (NTRS)
Schmit, L. A., Jr.; Farshi, B.
1974-01-01
An efficient automated minimum weight design procedure is presented which is applicable to sizing structural systems that can be idealized by truss, shear panel, and constant strain triangles. Static stress and displacement constraints under alternative loading conditions are considered. The optimization algorithm is an adaptation of the method of inscribed hyperspheres, and high efficiency is achieved by using several approximation concepts, including temporary deletion of noncritical constraints, design variable linking, and Taylor series expansions for response variables in terms of design variables. Optimum designs for several planar and space truss example problems are presented. The results reported support the contention that the innovative use of approximation concepts in structural synthesis can produce significant improvements in efficiency.
Some approximation concepts for structural synthesis.
NASA Technical Reports Server (NTRS)
Schmit, L. A., Jr.; Farshi, B.
1973-01-01
An efficient automated minimum weight design procedure is presented which is applicable to sizing structural systems that can be idealized by truss, shear panel, and constant strain triangles. Static stress and displacement constraints under alternative loading conditions are considered. The optimization algorithm is an adaptation of the method of inscribed hyperspheres and high efficiency is achieved by using several approximation concepts including temporary deletion of noncritical constraints, design variable linking, and Taylor series expansions for response variables in terms of design variables. Optimum designs for several planar and space truss example problems are presented. The results reported support the contention that the innovative use of approximation concepts in structural synthesis can produce significant improvements in efficiency.
Approximating metal-insulator transitions
NASA Astrophysics Data System (ADS)
Danieli, Carlo; Rayanov, Kristian; Pavlov, Boris; Martin, Gaven; Flach, Sergej
2015-12-01
We consider quantum wave propagation in one-dimensional quasiperiodic lattices. We propose an iterative construction of quasiperiodic potentials from sequences of potentials with increasing spatial period. At each finite iteration step, the eigenstates reflect the properties of the limiting quasiperiodic potential up to a controlled maximum system size. We then observe approximate metal-insulator transitions (MIT) at the finite iteration steps. We also report evidence of mobility edges, which are at variance with the celebrated Aubry-André model. The dynamics near the MIT shows a critical slowing down of the ballistic group velocity in the metallic phase, similar to the divergence of the localization length in the insulating phase.
New generalized gradient approximation functionals
NASA Astrophysics Data System (ADS)
Boese, A. Daniel; Doltsinis, Nikos L.; Handy, Nicholas C.; Sprik, Michiel
2000-01-01
New generalized gradient approximation (GGA) functionals are reported, using the expansion form of A. D. Becke, J. Chem. Phys. 107, 8554 (1997), with 15 linear parameters. Our original such GGA functional, called HCTH, was determined through a least squares refinement to data of 93 systems. Here, the data are extended to 120 systems and 147 systems, introducing electron and proton affinities, and weakly bound dimers to give the new functionals HCTH/120 and HCTH/147. HCTH/120 has already been shown to give high quality predictions for weakly bound systems. The functionals are applied in a comparative study of the addition reaction of water to formaldehyde and sulfur trioxide, respectively. Furthermore, the performance of the HCTH/120 functional in Car-Parrinello molecular dynamics simulations of liquid water is encouraging.
Interplay of approximate planning strategies.
Huys, Quentin J M; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J; Dayan, Peter; Roiser, Jonathan P
2015-03-10
Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or "options." PMID:25675480
Indexing the approximate number system.
Inglis, Matthew; Gilmore, Camilla
2014-01-01
Much recent research attention has focused on understanding individual differences in the approximate number system (ANS), a cognitive system believed to underlie human mathematical competence. To date, researchers have used four main indices of ANS acuity and have typically assumed that they measure similar properties. Here we report a study which questions this assumption. We demonstrate that the numerical ratio effect has poor test-retest reliability and that it does not relate to either Weber fractions or accuracy on nonsymbolic comparison tasks. Furthermore, we show that Weber fractions follow a strongly skewed distribution and that they have lower test-retest reliability than a simple accuracy measure. We conclude by arguing that in the future researchers interested in indexing individual differences in ANS acuity should use accuracy figures, not Weber fractions or numerical ratio effects. PMID:24361686
Monotonically improving approximate answers to relational algebra queries
NASA Technical Reports Server (NTRS)
Smith, Kenneth P.; Liu, J. W. S.
1989-01-01
We present here a query processing method that produces approximate answers to queries posed in standard relational algebra. This method is monotone in the sense that the accuracy of the approximate result improves with the amount of time spent producing the result. This strategy enables us to trade the time to produce the result for the accuracy of the result. An approximate relational model that characterizes approximate relations and a partial order for comparing them are developed. Relational operators which operate on and return approximate relations are defined.
Legendre-tau approximations for functional differential equations
NASA Technical Reports Server (NTRS)
Ito, K.; Teglas, R.
1986-01-01
The numerical approximation of solutions to linear retarded functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison with cubic spline approximation is made.
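The tau idea can be illustrated on a toy problem. The sketch below treats the scalar ODE y'(x) = -y(x), y(-1) = 1 on [-1, 1] (exact solution exp(-(x+1))), not the retarded equations of the paper: the residual is zeroed in the first N Legendre modes and the last row of the system is replaced by the boundary condition.

```python
import numpy as np
from numpy.polynomial import legendre as L

N = 12                                         # truncation order
A = np.zeros((N + 1, N + 1))
for j in range(N + 1):
    e = np.zeros(N + 1)
    e[j] = 1.0
    # Derivative of P_j in Legendre coefficients, padded/truncated to N rows.
    A[:N, j] = np.concatenate([L.legder(e), np.zeros(N + 1)])[:N]
A[:N, :N] += np.eye(N)                         # (y' + y)_k = 0 for k = 0..N-1
A[N, :] = [(-1.0) ** j for j in range(N + 1)]  # tau row: y(-1) = sum c_j P_j(-1) = 1
b = np.zeros(N + 1)
b[N] = 1.0
c = np.linalg.solve(A, b)

x = np.linspace(-1.0, 1.0, 101)
err = float(np.max(np.abs(L.legval(x, c) - np.exp(-(x + 1.0)))))
```

The truncated residual mode is exactly what the "tau" correction sacrifices in exchange for enforcing the boundary condition, and the error decays spectrally in N for smooth solutions.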
Simulation of Simple Controlled Processes with Dead-Time.
ERIC Educational Resources Information Center
Watson, Keith R.; And Others
1985-01-01
The determination of closed-loop response of processes containing dead-time is typically not covered in undergraduate process control, possibly because the solution by Laplace transforms requires the use of Pade approximation for dead-time, which makes the procedure lengthy and tedious. A computer-aided method is described which simplifies the…
Waveform feature extraction based on tauberian approximation.
De Figueiredo, R J; Hu, C L
1982-02-01
A technique is presented for feature extraction of a waveform y based on its Tauberian approximation, that is, on the approximation of y by a linear combination of appropriately delayed versions of a single basis function x, i.e., y(t) = Σ_{i=1}^{M} a_i x(t − τ_i), where the coefficients a_i and the delays τ_i are adjustable parameters. Considerations in the choice or design of the basis function x are given. The parameters a_i and τ_i, i = 1, ..., M, are retrieved by application of a suitably adapted version of Prony's method to the Fourier transform of the above approximation of y. A subset of the parameters a_i and τ_i, i = 1, ..., M, is used to construct the feature vector, the value of which can be used in a classification algorithm. Application of this technique to the classification of wide bandwidth radar return signatures is presented. Computer simulations proved successful and are also discussed.
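The retrieval step can be sketched on synthetic data. Assuming the frequency-domain identity H(ω) = Y(ω)/X(ω) = Σ a_i e^{-iωτ_i}, the toy example below (invented values, M = 2) applies Prony's method to uniform samples of H to recover the delays and coefficients.

```python
import numpy as np

# Invented ground truth: y(t) = sum_i a_i x(t - tau_i), so H(w) = Y(w)/X(w)
# is a sum of complex exponentials that Prony's method can invert.
a_true = np.array([1.0, 0.5])
tau_true = np.array([0.3, 1.1])
M, dw = 2, 1.0
k = np.arange(2 * M)
h = (a_true[None, :] * np.exp(-1j * dw * k[:, None] * tau_true[None, :])).sum(axis=1)

# Step 1: linear prediction for the characteristic polynomial:
# h[k+M] + p1*h[k+M-1] + ... + pM*h[k] = 0.
A = np.column_stack([h[M - 1 - j : 2 * M - 1 - j] for j in range(M)])
p = np.linalg.solve(A, -h[M:])
roots = np.roots(np.concatenate([[1.0], p]))   # roots z_i = exp(-1j*dw*tau_i)

# Step 2: delays from the root phases, amplitudes from a Vandermonde solve.
tau_est = np.sort((-np.angle(roots) / dw) % (2 * np.pi / dw))
V = np.vander(roots, N=2 * M, increasing=True).T
a_est = np.real(np.linalg.lstsq(V, h, rcond=None)[0])
```

With noisy data the same two steps are usually solved in a least-squares sense over more than 2M frequency samples.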
Approximation abilities of neuro-fuzzy networks
NASA Astrophysics Data System (ADS)
Mrówczyńska, Maria
2010-01-01
The paper presents the operation of two neuro-fuzzy systems of an adaptive type, intended for solving problems of the approximation of multi-variable functions in the domain of real numbers. Neuro-fuzzy systems, being a combination of the methodology of artificial neural networks and fuzzy sets, operate on the basis of a set of fuzzy "if-then" rules generated by means of the self-organization of data grouping and the estimation of relations between fuzzy experiment results. The article includes a description of the neuro-fuzzy systems of Takagi-Sugeno-Kang (TSK) and Wang-Mendel (WM) and, in order to complement the problem in question, a hierarchical structural self-organizing method of teaching a fuzzy network. The multi-layer structure of the systems is analogous to the structure of "classic" neural networks. In its final part the article presents selected areas of application of neuro-fuzzy systems in the field of geodesy and surveying engineering. Numerical examples showing how the systems work concern: the approximation of functions of several variables to be used as algorithms in Geographic Information Systems (the approximation of a terrain model), the transformation of coordinates, and the prediction of a time series. The accuracy characteristics of the results obtained have been taken into consideration.
Randomized approximate nearest neighbors algorithm.
Jones, Peter Wilcox; Osipov, Andrei; Rokhlin, Vladimir
2011-09-20
We present a randomized algorithm for the approximate nearest neighbor problem in d-dimensional Euclidean space. Given N points {x_j} in R^d, the algorithm attempts to find k nearest neighbors for each x_j, where k is a user-specified integer parameter. The algorithm is iterative, and its running time requirements are proportional to T·N·(d·(log d) + k·(d + log k)·(log N)) + N·k²·(d + log k), with T the number of iterations performed. The memory requirements of the procedure are of the order N·(d + k). A by-product of the scheme is a data structure, permitting a rapid search for the k nearest neighbors among {x_j} for an arbitrary point x ∈ R^d. The cost of each such query is proportional to T·(d·(log d) + log(N/k)·k·(d + log k)), and the memory requirements for the requisite data structure are of the order N·(d + k) + T·(d + N). The algorithm utilizes random rotations and a basic divide-and-conquer scheme, followed by a local graph search. We analyze the scheme's behavior for certain types of distributions of {x_j} and illustrate its performance via several numerical examples.
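The rotate-then-refine idea can be sketched in a simplified toy form (not the published algorithm, and with invented parameter values): candidates are gathered from windows in the sorted order of one rotated coordinate over T random rotations, then resolved by exact distances, and recall is measured against brute force.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, k, T, w = 200, 5, 3, 8, 10   # w = half-width of the candidate window
X = rng.standard_normal((N, d))

# Candidate sets from T random rotations: points close in space tend to be
# close in the sorted order of the first rotated coordinate.
cand = [set() for _ in range(N)]
for _ in range(T):
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # random rotation
    order = np.argsort(X @ Q[:, 0])
    pos = np.empty(N, dtype=int)
    pos[order] = np.arange(N)
    for i in range(N):
        lo, hi = max(0, pos[i] - w), min(N, pos[i] + w + 1)
        cand[i].update(order[lo:hi].tolist())
        cand[i].discard(i)

def approx_knn(i):
    """Exact k nearest among the candidates only."""
    c = np.array(sorted(cand[i]))
    dist = np.linalg.norm(X[c] - X[i], axis=1)
    return set(c[np.argsort(dist)[:k]].tolist())

hits = 0
for i in range(N):                  # recall against brute force
    dist = np.linalg.norm(X - X[i], axis=1)
    dist[i] = np.inf
    true = set(np.argsort(dist)[:k].tolist())
    hits += len(true & approx_knn(i))
recall = hits / (N * k)
```

A true neighbor is found whenever at least one rotation places it inside the window, so recall improves geometrically with T at linear extra cost.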
Approximate analytical calculations of photon geodesics in the Schwarzschild metric
NASA Astrophysics Data System (ADS)
De Falco, Vittorio; Falanga, Maurizio; Stella, Luigi
2016-10-01
We develop a method for deriving approximate analytical formulae to integrate photon geodesics in a Schwarzschild spacetime. Based on this, we derive the approximate equations for light bending and propagation delay that have been introduced empirically. We then derive for the first time an approximate analytical equation for the solid angle. We discuss the accuracy and range of applicability of the new equations and present a few simple applications of them to known astrophysical problems.
Generalized Lorentzian approximations for the Voigt line shape.
Martin, P; Puerta, J
1981-01-15
The object of the work reported in this paper was to find a simple, easy-to-calculate approximation to the Voigt function using the Padé method. To do this we calculated the multipole approximation to the complex function expressed as the error function or as the plasma dispersion function. This generalized Lorentzian approximation can be used instead of the exact function in experiments that do not require great accuracy. PMID:20309100
Investigating Material Approximations in Spacecraft Radiation Analysis
NASA Technical Reports Server (NTRS)
Walker, Steven A.; Slaba, Tony C.; Clowdsley, Martha S.; Blattnig, Steve R.
2011-01-01
During the design process, the configuration of space vehicles and habitats changes frequently, and the merits of design changes must be evaluated. Methods for rapidly assessing astronaut exposure are therefore required. Typically, approximations are made to simplify the geometry and speed up the evaluation of each design. In this work, the error associated with two common approximations used to simplify space radiation vehicle analyses, scaling into equivalent materials and material reordering, is investigated. Over thirty materials commonly found in spacesuits, vehicles, and human bodies are considered. Each material is placed in a material group (aluminum, polyethylene, or tissue), and the error associated with scaling and reordering is quantified for each material. Of the scaling methods investigated, range scaling is shown to be the superior method, especially for shields less than 30 g/cm² exposed to a solar particle event. More complicated, realistic slabs are examined to quantify the separate and combined effects of using equivalent materials and reordering. The error associated with material reordering is shown to be at least comparable to, if not greater than, the error associated with range scaling. In general, scaling and reordering errors were found to grow with the difference between the average nuclear charge of the actual material and the average nuclear charge of the equivalent material. Based on this result, a different set of equivalent materials (titanium, aluminum, and tissue) is substituted for the commonly used aluminum, polyethylene, and tissue. The realistic cases are scaled and reordered using the new equivalent materials, and the reduced error is shown.
Virial expansion coefficients in the harmonic approximation.
Armstrong, J R; Zinner, N T; Fedorov, D V; Jensen, A S
2012-08-01
The virial expansion method is applied within a harmonic approximation to an interacting N-body system of identical fermions. We compute the canonical partition functions for two and three particles to get the two lowest orders in the expansion. The energy spectrum is carefully interpolated to reproduce ground-state properties at low temperature and the noninteracting high-temperature limit of constant virial coefficients. This resembles the smearing of shell effects in finite systems with increasing temperature. Numerical results are discussed for the second and third virial coefficients as functions of dimension, temperature, interaction, and transition temperature between low- and high-energy limits. PMID:23005730
Shear viscosity in the postquasistatic approximation
Peralta, C.; Rosales, L.; Rodriguez-Mueller, B.; Barreto, W.
2010-05-15
We apply the postquasistatic approximation, an iterative method for the evolution of self-gravitating spheres of matter, to study the evolution of anisotropic nonadiabatic radiating and dissipative distributions in general relativity. Dissipation is described by viscosity and free-streaming radiation, assuming an equation of state to model anisotropy induced by the shear viscosity. We match the interior solution, in noncomoving coordinates, with the Vaidya exterior solution. Two simple models are presented, based on the Schwarzschild and Tolman VI solutions, in the nonadiabatic and adiabatic limit. In both cases, the eventual collapse or expansion of the distribution is mainly controlled by the anisotropy induced by the viscosity.
Approximation and modeling with ambient B-splines
NASA Astrophysics Data System (ADS)
Lehmann, N.; Maier, L.-B.; Odathuparambil, S.; Reif, U.
2016-06-01
We present a novel technique for solving approximation problems on manifolds in terms of standard tensor product B-splines. This method is easy to implement and provides optimal approximation order. Applications include the representation of smooth surfaces of arbitrary genus.
Perturbation approximation for orbits in axially symmetric funnels
NASA Astrophysics Data System (ADS)
Nauenberg, Michael
2014-11-01
A perturbation method that can be traced back to Isaac Newton is applied to obtain approximate analytic solutions for objects sliding in axially symmetric funnels in near circular orbits. Some experimental observations are presented for balls rolling in inverted cones with different opening angles, and in a funnel with a hyperbolic surface that approximately simulates the gravitational force.
How to Solve Schroedinger Problems by Approximating the Potential Function
Ledoux, Veerle; Van Daele, Marnix
2010-09-30
We give a survey of the efforts in the direction of solving the Schroedinger equation by using piecewise approximations of the potential function. Two types of approximating potentials have been considered in the literature, namely piecewise constant and piecewise linear functions. For polynomials of higher degree the approximating problem is not so easy to integrate analytically. This obstacle can be circumvented by using a perturbative approach to construct the solution of the approximating problem, leading to the so-called piecewise perturbation methods (PPM). We discuss the construction of a PPM in its most convenient form for applications and show that the different PPM versions (CPM, LPM) are in fact equivalent.
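The piecewise-constant case can be sketched with a shooting method. The example below is an illustration under simplifying assumptions, not a PPM implementation: it replaces V(x) = x² (harmonic well, exact ground-state eigenvalue 1 in units where -ψ'' + Vψ = Eψ) by its midpoint value on each slice, propagates (ψ, ψ') analytically through each constant piece, and bisects on E.

```python
import math

def psi_end(E, V, a=-6.0, b=6.0, n=600):
    """Propagate (psi, psi') for -psi'' + V(x) psi = E psi, treating V as
    constant (its midpoint value) on each of n slices."""
    h = (b - a) / n
    psi, dpsi = 0.0, 1.0
    for i in range(n):
        q = E - V(a + (i + 0.5) * h)
        if q > 1e-12:                        # oscillatory slice
            k = math.sqrt(q)
            c, s = math.cos(k * h), math.sin(k * h)
            psi, dpsi = psi * c + dpsi * s / k, -psi * k * s + dpsi * c
        elif q < -1e-12:                     # evanescent slice
            K = math.sqrt(-q)
            c, s = math.cosh(K * h), math.sinh(K * h)
            psi, dpsi = psi * c + dpsi * s / K, psi * K * s + dpsi * c
        else:                                # essentially free slice
            psi, dpsi = psi + dpsi * h, dpsi
    return psi

V = lambda x: x * x                          # harmonic well, exact E0 = 1
lo, hi = 0.5, 1.5                            # bracket containing only E0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if psi_end(lo, V) * psi_end(mid, V) <= 0.0:
        hi = mid
    else:
        lo = mid
E0 = 0.5 * (lo + hi)
```

The per-slice propagation is exact for the approximating (piecewise constant) potential; the only discretization error is in replacing V by its midpoint values, which is precisely the error the perturbative corrections of a PPM are designed to remove.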
Tangent plane approximation and some of its generalizations
NASA Astrophysics Data System (ADS)
Voronovich, A. G.
2007-05-01
A review of the tangent plane approximation proposed by L.M. Brekhovskikh is presented. The advantage of the tangent plane approximation over methods based on the analysis of integral equations for surface sources is emphasized. A general formula is given for the scattering amplitude of scalar plane waves under an arbitrary boundary condition. The direct generalization of the tangent plane approximation is shown to yield approximations that include a correct description of the Bragg scattering and allow one to avoid the use of a two-scale model.
Various approximations made in augmented-plane-wave calculations
NASA Astrophysics Data System (ADS)
Bacalis, N. C.; Blathras, K.; Thomaides, P.; Papaconstantopoulos, D. A.
1985-10-01
The effects of various approximations used in performing augmented-plane-wave calculations were studied for elements of the fifth and sixth columns of the Periodic Table, namely V, Nb, Ta, Cr, Mo, and W. Two kinds of approximations have been checked: (i) variation of the number of k points used to iterate to self-consistency, and (ii) approximations for the treatment of the core states. In addition a comparison between relativistic and nonrelativistic calculations is made, and an approximate method of calculating the spin-orbit splitting is given.
Analyticity of quantum states in one-dimensional tight-binding model
NASA Astrophysics Data System (ADS)
Yamada, Hiroaki S.; Ikeda, Kensuke S.
2014-09-01
The analytical complexity of a quantum wavefunction whose argument is extended into the complex plane provides important information about the potentiality of manifesting complex quantum dynamics such as time-irreversibility, dissipation, and so on. We examine Pade approximation and some complementary methods to investigate the complex-analytical properties of some quantum states such as impurity states, Anderson-localized states, and localized states of the Harper model. The impurity states can be characterized by simple poles of the Pade approximation, and the localized states of the Anderson model and the Harper model can be characterized by an accumulation of poles and zeros of the Pade-approximated function along a critical border, which implies a natural boundary (NB). A complementary method based on shifting the expansion center is used to confirm the existence of the NB numerically, and it is strongly suggested that both the Anderson-localized states and the localized states of the Harper model have NBs in the complex extension. Moreover, we discuss an interesting relationship between our research and the natural boundary problem of the potential function, whose close connection to the localization problem was discovered quite recently by some mathematicians. In addition, we examine the usefulness of the Pade approximation for numerically predicting the existence of an NB by means of two typical examples, lacunary power series and random power series.
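The pole-detection idea can be illustrated on a one-pole toy function. Assuming only the Taylor coefficients c_k = 2^k of f(z) = 1/(1 - 2z) are given, the [1/1] Pade approximant recovers the pole location exactly:

```python
from fractions import Fraction

# Taylor coefficients of f(z) = 1/(1 - 2z): c_k = 2^k.
c = [Fraction(2) ** k for k in range(3)]

# [1/1] Pade approximant (p0 + p1*z)/(1 + q1*z): matching c0, c1, c2 gives
# c2 + c1*q1 = 0,  p0 = c0,  p1 = c1 + c0*q1.
q1 = -c[2] / c[1]
p0 = c[0]
p1 = c[1] + c[0] * q1

pole = -1 / q1          # zero of the denominator 1 + q1*z
```

For a simple pole the denominator zero sits exactly on the singularity; for a natural boundary, as in the localized states discussed above, poles and zeros of higher-order approximants instead accumulate along the boundary curve.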
Born approximation, scattering, and algorithm
NASA Astrophysics Data System (ADS)
Martinez, Alex; Hu, Mengqi; Gu, Haicheng; Qiao, Zhijun
2015-05-01
In the past few decades, many imaging algorithms were designed under the assumption that multiple scattering is absent. Recently, we discussed an algorithm for removing high-order scattering components from collected data. This paper is a continuation of our previous work. First, we investigate the current state of multiple scattering in SAR. Then, we revise our method and test it. Given an estimate of our target reflectivity, we compute the multiple scattering effects in the target region for various frequencies. Furthermore, we propagate this energy through free space towards our antenna and remove it from the collected data.
Adaptive approximation of higher order posterior statistics
Lee, Wonjung
2014-02-01
Filtering is an approach for incorporating observed data into time-evolving systems. Instead of a family of Dirac delta masses that is widely used in Monte Carlo methods, we here use the Wiener chaos expansion for the parametrization of the conditioned probability distribution to solve the nonlinear filtering problem. The Wiener chaos expansion is not the best method for uncertainty propagation without observations. Nevertheless, the projection of the system variables onto a fixed polynomial basis spanning the probability space might be a competitive representation in the presence of relatively frequent observations, because the Wiener chaos approach not only leads to an accurate and efficient prediction for short-time uncertainty quantification, but also allows one to apply several data assimilation methods that can be used to yield a better approximate filtering solution. The aim of the present paper is to investigate this hypothesis. We answer in the affirmative for the (stochastic) Lorenz-63 system based on numerical simulations in which the uncertainty quantification method and the data assimilation method are adaptively selected by whether the dynamics is driven by Brownian motion and by the near-Gaussianity of the measure to be updated, respectively.
Approximate protein structural alignment in polynomial time.
Kolodny, Rachel; Linial, Nathan
2004-08-17
Alignment of protein structures is a fundamental task in computational molecular biology. Good structural alignments can help detect distant evolutionary relationships that are hard or impossible to discern from protein sequences alone. Here, we study the structural alignment problem as a family of optimization problems and develop an approximate polynomial-time algorithm to solve them. For a commonly used scoring function, the algorithm runs in O(n^10/ε^6) time, for a globular protein of length n, and it detects alignments that score within an additive error of ε from all optima. Thus, we prove that this task is computationally feasible, although the method that we introduce is too slow to be a useful everyday tool. We argue that such approximate solutions are, in fact, of greater interest than exact ones because of the noisy nature of experimentally determined protein coordinates. The measurement of similarity between a pair of protein structures used by our algorithm involves the Euclidean distance between the structures (appropriately rigidly transformed). We show that an alternative approach, which relies on internal distance matrices, must incorporate sophisticated geometric ingredients if it is to guarantee optimality and run in polynomial time. We use these observations to visualize the scoring function for several real instances of the problem. Our investigations yield insights on the computational complexity of protein alignment under various scoring functions. These insights can be used in the design of scoring functions for which the optimum can be approximated efficiently and perhaps in the development of efficient algorithms for the multiple structural alignment problem. PMID:15304646
Pitch contour stylization using an optimal piecewise polynomial approximation
Ghosh, Prasanta Kumar; Narayanan, Shrikanth S.
2014-01-01
We propose a dynamic programming (DP) based piecewise polynomial approximation of discrete data such that the L2 norm of the approximation error is minimized. We apply this technique for the stylization of speech pitch contour. Objective evaluation verifies that the DP based technique indeed yields minimum mean square error (MSE) compared to other approximation methods. Subjective evaluation reveals that the quality of the synthesized speech using stylized pitch contour obtained by the DP method is almost identical to that of the original speech. PMID:24453471
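The DP segmentation can be sketched in a generic minimal form (piecewise linear rather than general polynomial, plain L2 cost, invented toy data) rather than the authors' implementation:

```python
import numpy as np

def seg_cost(t, y, i, j):
    """L2 residual of the best-fit line on samples i..j (inclusive)."""
    A = np.column_stack([t[i:j + 1], np.ones(j - i + 1)])
    coef = np.linalg.lstsq(A, y[i:j + 1], rcond=None)[0]
    r = y[i:j + 1] - A @ coef
    return float(r @ r)

def dp_piecewise(t, y, n_seg):
    """Optimal segmentation into n_seg pieces minimizing total L2 error."""
    n = len(t)
    INF = float("inf")
    E = [[INF] * (n_seg + 1) for _ in range(n)]
    back = [[-1] * (n_seg + 1) for _ in range(n)]
    for j in range(n):
        E[j][1] = seg_cost(t, y, 0, j)
        back[j][1] = 0
        for s in range(2, n_seg + 1):
            for i in range(1, j + 1):        # start index of the last segment
                cost = E[i - 1][s - 1] + seg_cost(t, y, i, j)
                if cost < E[j][s]:
                    E[j][s], back[j][s] = cost, i
    cuts, j, s = [], n - 1, n_seg
    while s >= 1:                            # backtrack the segment starts
        i = back[j][s]
        cuts.append(i)
        j, s = i - 1, s - 1
    return E[n - 1][n_seg], sorted(cuts)

t = np.arange(21, dtype=float)
y = np.where(t <= 10, t, 20.0 - t)           # tent profile: two linear pieces
err, cuts = dp_piecewise(t, y, 2)
```

Because the DP considers every admissible last-segment start, the returned segmentation is globally optimal for the given cost, which is the minimum-MSE property the objective evaluation above refers to; caching the per-segment costs reduces the overall work.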
Analytic Approximate Solution for Falkner-Skan Equation
Marinca, Bogdan
2014-01-01
This paper deals with the Falkner-Skan nonlinear differential equation. An analytic approximate technique, namely, optimal homotopy asymptotic method (OHAM), is employed to propose a procedure to solve a boundary-layer problem. Our method does not depend upon small parameters and provides us with a convenient way to optimally control the convergence of the approximate solutions. The obtained results reveal that this procedure is very effective, simple, and accurate. A very good agreement was found between our approximate results and numerical solutions, which prove that OHAM is very efficient in practice, ensuring a very rapid convergence after only one iteration. PMID:24883417
Variational extensions of the mean spherical approximation
NASA Astrophysics Data System (ADS)
Blum, L.; Ubriaco, M.
2000-04-01
In a previous work we have proposed a method to study complex systems with objects of arbitrary size. For certain specific forms of the atomic and molecular interactions, surprisingly simple and accurate theories (the Variational Mean Spherical Scaling Approximation, VMSSA) [Velazquez, Blum, J. Chem. Phys. 110 (1990) 10 931; Blum, Velazquez, J. Quantum Chem. (Theochem), in press] can be obtained. The basic idea is that if the interactions can be expressed as a rapidly converging sum of (complex) exponentials, then the Ornstein-Zernike equation (OZ) has an analytical solution. This analytical solution is used to construct a robust interpolation scheme, the Variational Mean Spherical Scaling Approximation (VMSSA). The Helmholtz excess free energy ΔA = ΔE − TΔS is then written as a function of a scaling matrix Γ. Both the excess energy ΔE(Γ) and the excess entropy ΔS(Γ) will be functionals of Γ. In previous work of this series the form of this functional was found for the two-exponential (Blum, Herrera, Mol. Phys. 96 (1999) 821) and three-exponential closures of the OZ equation (Blum, J. Stat. Phys., submitted for publication). In this paper we extend this to M Yukawas, a complete basis set: we obtain a solution for the one-component case and give a closed-form expression for the MSA excess entropy, which is also the VMSSA entropy.
Function approximation using adaptive and overlapping intervals
Patil, R.B.
1995-05-01
A problem common to many disciplines is to approximate a function given only the values of the function at various points in input variable space. A method is proposed for approximating a function that maps several input variables to one output variable. The model takes the form of a weighted average of overlapping basis functions defined over intervals. The number of such basis functions and their parameters (widths and centers) are automatically determined from given training data by a learning algorithm. The proposed algorithm can be seen as placing a nonuniform multidimensional grid in the input domain with overlapping cells. The non-uniformity and overlap of the cells are achieved by a learning algorithm that optimizes a given objective function. This approach is motivated by the fuzzy modeling approach and by learning algorithms used for clustering and classification in pattern recognition. The basics of why and how the approach works are given. A few examples of nonlinear regression and classification are modeled. The relationship between the proposed technique, radial basis neural networks, kernel regression, probabilistic neural networks, and fuzzy modeling is explained. Finally, advantages and disadvantages are discussed.
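The overlapping-basis model can be sketched in one dimension. The toy version below uses fixed Gaussian memberships normalized to a partition of unity and fits the weights by least squares instead of the paper's learning algorithm (centers, width, and target function are invented for illustration):

```python
import numpy as np

centers = np.linspace(0.0, 1.0, 9)   # centers of the overlapping "intervals"
width = 0.18                         # controls how much neighbors overlap

def design(x):
    """Normalized Gaussian memberships: each row sums to 1, so the model
    output is a weighted average of the per-cell weights."""
    phi = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)
    return phi / phi.sum(axis=1, keepdims=True)

x_train = np.linspace(0.0, 1.0, 60)
y_train = np.sin(2 * np.pi * x_train)
w = np.linalg.lstsq(design(x_train), y_train, rcond=None)[0]

x_test = np.linspace(0.0, 1.0, 200)
err = float(np.max(np.abs(design(x_test) @ w - np.sin(2 * np.pi * x_test))))
```

The normalization is what makes the prediction a weighted average rather than a plain radial-basis sum; the full method would additionally adapt the number, centers, and widths of the cells from the data.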
On some applications of diophantine approximations
Chudnovsky, G. V.
1984-01-01
Siegel's results [Siegel, C. L. (1929) Abh. Preuss. Akad. Wiss. Phys.-Math. Kl. 1] on the transcendence and algebraic independence of values of E-functions are refined to obtain the best possible bound for the measures of irrationality and linear independence of values of arbitrary E-functions at rational points. Our results show that values of E-functions at rational points have measures of diophantine approximations typical of "almost all" numbers. In particular, any such number has the "2 + ε" exponent of irrationality: |Θ − p/q| > |q|^(−2−ε) for relatively prime rational integers p, q, with q ≥ q₀(Θ, ε). These results answer some problems posed by Lang. The methods used here are based on the introduction of graded Padé approximations to systems of functions satisfying linear differential equations with rational function coefficients. The constructions and proofs of this paper were used in the functional (nonarithmetic) case in a previous paper [Chudnovsky, D. V. & Chudnovsky, G. V. (1983) Proc. Natl. Acad. Sci. USA 80, 5158-5162]. PMID:16593441
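The role of Padé approximation in producing good rational approximations can be illustrated for the E-function exp at the rational point 1. The sketch below uses the classical closed form of the [n/n] Padé approximant to exp (an elementary illustration, not the graded construction of the paper) and checks that the resulting p/q already beats the quality |e − p/q| < 1/q² that the "2 + ε" exponent describes:

```python
from fractions import Fraction
from math import e, factorial

def pade_exp(n, x=1):
    """[n/n] Pade approximant to exp evaluated at a rational x, using the
    classical coefficients (2n-j)! n! / ((2n)! j! (n-j)!)."""
    def poly(arg):
        return sum(Fraction(factorial(2 * n - j) * factorial(n),
                            factorial(2 * n) * factorial(j) * factorial(n - j))
                   * arg ** j
                   for j in range(n + 1))
    return poly(x) / poly(-x)            # numerator at x, denominator at -x

approx = pade_exp(3)                     # exact rational value 193/71
```

Each step up in n roughly squares the quality of the approximation relative to the size of the denominator, which is far better than the generic Taylor-section rate and is the mechanism behind the sharp irrationality measures.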
Impact of inflow transport approximation on light water reactor analysis
NASA Astrophysics Data System (ADS)
Choi, Sooyoung; Smith, Kord; Lee, Hyun Chul; Lee, Deokjung
2015-10-01
The impact of the inflow transport approximation on light water reactor analysis is investigated, and it is verified that the inflow approximation significantly improves the accuracy of transport and transport/diffusion solutions. A methodology for the inflow transport approximation is implemented in order to generate accurate transport cross sections, and it is compared with the conventional methods, namely the consistent-PN and outflow transport approximations. The three approximations are implemented in the lattice physics code STREAM and tested on a range of verification problems to investigate their effects and accuracy. The verification shows that the consistent-PN and outflow approximations cause significant error in the calculated eigenvalue and power distribution, whereas the inflow approximation yields very accurate and precise results. The inflow transport approximation shows significant improvements not only for high-leakage problems but also for practical large-core analyses.
Saddlepoint distribution function approximations in biostatistical inference.
Kolassa, J E
2003-01-01
Applications of saddlepoint approximations to distribution functions are reviewed. Calculations are provided for marginal distributions and conditional distributions. These approximations are applied to problems of testing and generating confidence intervals, particularly in canonical exponential families.
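As a minimal sketch of the idea (not the specific biostatistical applications reviewed in the paper), the saddlepoint density approximation can be written in closed form for a sum of n iid Exp(1) variables, whose cumulant generating function is K(s) = -n log(1 - s); all names below are illustrative:

```python
import math

def saddlepoint_density(x, n):
    """Saddlepoint approximation to the density of a sum of n iid Exp(1)
    variables: f(x) ~ exp(K(s) - s*x) / sqrt(2*pi*K''(s)) at the saddlepoint
    s solving K'(s) = x, with K(s) = -n*log(1-s)."""
    s_hat = 1.0 - n / x                 # solves K'(s) = n/(1-s) = x
    K = -n * math.log(1.0 - s_hat)
    K2 = n / (1.0 - s_hat) ** 2        # K''(s_hat)
    return math.exp(K - s_hat * x) / math.sqrt(2.0 * math.pi * K2)

def exact_density(x, n):
    """Exact Gamma(n, 1) density of the sum."""
    return x ** (n - 1) * math.exp(-x) / math.factorial(n - 1)

n, x = 5, 6.0
rel_err = abs(saddlepoint_density(x, n) / exact_density(x, n) - 1.0)
# For the gamma family the saddlepoint density is exact up to Stirling's
# approximation of (n-1)!, so the relative error is roughly 1/(12n).
assert rel_err < 0.03
```

The same construction, applied to the tail probability rather than the density, gives the distribution-function approximations (e.g. Lugannani-Rice) used for tests and confidence intervals.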
Razafinjanahary, H.; Rogemond, F.; Chermette, H.
1994-08-15
The MS-LSD method remains a method of interest when speed and modest computer resources are required; its main drawback is some lack of accuracy, due mainly to the muffin-tin distribution of the potential. In the case of large clusters or molecules, the use of an empty sphere to fill, in part, the large intersphere region can greatly improve the results. Calculations on C{sub 60} have been undertaken to illustrate this trend because, on the one hand, the fullerenes offer a remarkable opportunity to fit a large empty sphere in the center of the cluster and, on the other hand, numerous accurate calculations have already been published, allowing quantitative comparisons. The authors' calculations suggest that, when an empty sphere is added, the results compare well with those of more accurate calculations. The calculated electron affinities for C{sub 60} and C{sub 60}{sup {minus}} are in reasonable agreement with experimental values, but the stability of C{sub 60}{sup 2-} in the gas phase is not found. 35 refs., 3 figs., 5 tabs.
Animal models and integrated nested Laplace approximations.
Holand, Anna Marie; Steinsland, Ingelin; Martino, Sara; Jensen, Henrik
2013-08-07
Animal models are generalized linear mixed models used in evolutionary biology and animal breeding to identify the genetic part of traits. Integrated Nested Laplace Approximation (INLA) is a methodology for making fast, nonsampling-based Bayesian inference for hierarchical Gaussian Markov models. In this article, we demonstrate that the INLA methodology can be used for many versions of Bayesian animal models. We analyze animal models for both synthetic case studies and house sparrow (Passer domesticus) population case studies with Gaussian, binomial, and Poisson likelihoods using INLA. Inference results are compared with results using Markov Chain Monte Carlo methods. For model choice we use difference in deviance information criteria (DIC). We suggest and show how to evaluate differences in DIC by comparing them with sampling results from simulation studies. We also introduce an R package, AnimalINLA, for easy and fast inference for Bayesian Animal models using INLA.
Approximate truncation robust computed tomography—ATRACT
NASA Astrophysics Data System (ADS)
Dennerlein, Frank; Maier, Andreas
2013-09-01
We present an approximate truncation robust algorithm to compute tomographic images (ATRACT). This algorithm targets at reconstructing volumetric images from cone-beam projections in scenarios where these projections are highly truncated in each dimension. It thus facilitates reconstructions of small subvolumes of interest, without involving prior knowledge about the object. Our method is readily applicable to medical C-arm imaging, where it may contribute to new clinical workflows together with a considerable reduction of x-ray dose. We give a detailed derivation of ATRACT that starts from the conventional Feldkamp filtered-backprojection algorithm and that involves, as one component, a novel original formula for the inversion of the two-dimensional Radon transform. Discretization and numerical implementation are discussed and reconstruction results from both, simulated projections and first clinical data sets are presented.
Collective pairing Hamiltonian in the GCM approximation
NASA Astrophysics Data System (ADS)
Góźdź, A.; Pomorski, K.; Brack, M.; Werner, E.
1985-08-01
Using the generator coordinate method and the gaussian overlap approximation we derived the collective Schrödinger-type equation starting from a microscopic single-particle plus pairing hamiltonian for one kind of particle. The BCS wave function was used as the generator function. The pairing energy-gap parameter Δ and the gauge transformation angle were taken as the generator coordinates. Numerical results have been obtained for the full and the mean-field pairing hamiltonians and compared with the cranking estimates. A significant role played by the zero-point energy correction in the collective pairing potential is found. The ground-state energy dependence on the pairing strength agrees very well with the exact solution of the Richardson model for a set of equidistant doubly-degenerate single-particle levels.
Improved approximations for control augmented structural synthesis
NASA Technical Reports Server (NTRS)
Thomas, H. L.; Schmit, L. A.
1990-01-01
A methodology for control-augmented structural synthesis is presented for structure-control systems which can be modeled as an assemblage of beam, truss, and nonstructural mass elements augmented by a noncollocated direct output feedback control system. Truss areas, beam cross sectional dimensions, nonstructural masses and rotary inertias, and controller position and velocity gains are treated simultaneously as design variables. The structural mass and a control-system performance index can be minimized simultaneously, with design constraints placed on static stresses and displacements, dynamic harmonic displacements and forces, structural frequencies, and closed-loop eigenvalues and damping ratios. Intermediate design-variable and response-quantity concepts are used to generate new approximations for displacements and actuator forces under harmonic dynamic loads and for system complex eigenvalues. This improves the overall efficiency of the procedure by reducing the number of complete analyses required for convergence. Numerical results which illustrate the effectiveness of the method are given.
Turbo Equalization Using Partial Gaussian Approximation
NASA Astrophysics Data System (ADS)
Zhang, Chuanzong; Wang, Zhongyong; Manchon, Carles Navarro; Sun, Peng; Guo, Qinghua; Fleury, Bernard Henri
2016-09-01
This paper deals with turbo-equalization for coded data transmission over intersymbol interference (ISI) channels. We propose a message-passing algorithm that uses the expectation-propagation rule to convert messages passed from the demodulator-decoder to the equalizer and computes messages returned by the equalizer by using a partial Gaussian approximation (PGA). Results from Monte Carlo simulations show that this approach leads to a significant performance improvement compared to state-of-the-art turbo-equalizers and allows for trading performance with complexity. We exploit the specific structure of the ISI channel model to significantly reduce the complexity of the PGA compared to that considered in the initial paper proposing the method.
Robust Generalized Low Rank Approximations of Matrices.
Shi, Jiarong; Yang, Wei; Zheng, Xiuyun
2015-01-01
In recent years, the intrinsic low rank structure of some datasets has been extensively exploited to reduce dimensionality, remove noise and complete the missing entries. As a well-known technique for dimensionality reduction and data compression, Generalized Low Rank Approximations of Matrices (GLRAM) claims its superiority in computation time and compression ratio over the SVD. However, GLRAM is very sensitive to large sparse noise and outliers, and its robust version had not yet been explored or solved. To address this problem, this paper proposes a robust method for GLRAM, named Robust GLRAM (RGLRAM). We first formulate RGLRAM as an l1-norm optimization problem which minimizes the l1-norm of the approximation errors. Secondly, we apply the technique of Augmented Lagrange Multipliers (ALM) to solve this l1-norm minimization problem and derive a corresponding iterative scheme. Then the weak convergence of the proposed algorithm is discussed under mild conditions. Next, we investigate a special case of RGLRAM and extend RGLRAM to a general tensor case. Finally, the extensive experiments on synthetic data show that it is possible for RGLRAM to exactly recover both the low rank and the sparse components while it may be difficult for previous state-of-the-art algorithms. We also discuss three issues on RGLRAM: the sensitivity to initialization, the generalization ability and the relationship between the running time and the size/number of matrices. Moreover, the experimental results on images of faces with large corruptions illustrate that RGLRAM achieves better denoising and compression performance than the other methods. PMID:26367116
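For orientation, the standard (non-robust) GLRAM alternating scheme that RGLRAM builds on can be sketched in a few lines of NumPy; this is an illustrative sketch of the classical Frobenius-norm algorithm, not the authors' l1/ALM code:

```python
import numpy as np

def glram(As, l, r, iters=20):
    """Alternating scheme for Generalized Low Rank Approximations of Matrices:
    find column-orthonormal L (m x l) and R (n x r) maximizing
    sum_i ||L.T @ A_i @ R||_F^2, i.e. minimizing the Frobenius residual."""
    m, n = As[0].shape
    R = np.eye(n, r)
    for _ in range(iters):
        SL = sum(A @ R @ R.T @ A.T for A in As)
        L = np.linalg.eigh(SL)[1][:, -l:]       # top-l eigenvectors
        SR = sum(A.T @ L @ L.T @ A for A in As)
        R = np.linalg.eigh(SR)[1][:, -r:]       # top-r eigenvectors
    Ms = [L.T @ A @ R for A in As]              # per-matrix cores
    return L, R, Ms

rng = np.random.default_rng(0)
# Synthetic data sharing an exact rank-(2,2) structure
L0 = np.linalg.qr(rng.standard_normal((8, 2)))[0]
R0 = np.linalg.qr(rng.standard_normal((6, 2)))[0]
As = [L0 @ rng.standard_normal((2, 2)) @ R0.T for _ in range(5)]
L, R, Ms = glram(As, 2, 2)
resid = sum(np.linalg.norm(A - L @ M @ R.T) ** 2 for A, M in zip(As, Ms))
assert resid < 1e-10   # exact recovery of the shared low rank structure
```

RGLRAM replaces the Frobenius objective above with the l1-norm of the residuals, which is what makes it robust to sparse gross corruption.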
NASA Astrophysics Data System (ADS)
Pratiwi, B. N.; Suparmi, A.; Cari, C.; Husein, A. S.; Yunianto, M.
2016-08-01
We applied the asymptotic iteration method (AIM) to obtain analytical solutions of the Dirac equation in the case of exact pseudospin symmetry in the presence of a modified Pöschl-Teller potential and a trigonometric Scarf II non-central potential. The Dirac equation was solved by separation of variables into one-dimensional radial and angular equations. The radial and angular equations can be reduced to hypergeometric-type equations by variable and wavefunction substitutions and then transformed into AIM-type equations to obtain the relativistic energy eigenvalues and wavefunctions. The relativistic energies were calculated numerically, and the energy spectra and wavefunctions were visualized, using Matlab. The results show that an increase in the radial quantum number n_r causes a decrease in the relativistic energy spectrum. The negative value of the energy is taken due to the pseudospin symmetry limit. Several quantum wavefunctions are presented in terms of hypergeometric functions.
Approximate Confidence Interval for Difference of Fit in Structural Equation Models.
ERIC Educational Resources Information Center
Raykov, Tenko
2001-01-01
Discusses a method, based on bootstrap methodology, for obtaining an approximate confidence interval for the difference in root mean square error of approximation of two structural equation models. Illustrates the method using a numerical example. (SLD)
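The percentile-bootstrap idea behind such intervals can be sketched generically; the example below bootstraps the difference of two simple fit statistics rather than RMSEA values of fitted structural equation models, and all names are illustrative:

```python
import random
import statistics

def bootstrap_ci_diff(data, stat_a, stat_b, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for stat_a(data) - stat_b(data): resample the
    data with replacement, recompute the difference each time, and take
    empirical quantiles of the bootstrap distribution."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]
        diffs.append(stat_a(sample) - stat_b(sample))
    diffs.sort()
    lo = diffs[int((alpha / 2) * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

data = [2.1, 2.5, 1.9, 3.0, 2.7, 2.2, 2.8, 2.4, 2.6, 2.3]
lo, hi = bootstrap_ci_diff(data, statistics.mean, statistics.median)
assert lo < hi   # a proper interval for the difference of the two statistics
```

In Raykov's setting, `stat_a` and `stat_b` would each refit a structural equation model to the resampled data and return its RMSEA; the resampling-and-quantiles skeleton is the same.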
NASA Technical Reports Server (NTRS)
Ito, Kazufumi; Teglas, Russell
1987-01-01
The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.
A unified approach to the Darwin approximation
Krause, Todd B.; Apte, A.; Morrison, P. J.
2007-10-15
There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting.
Integral approximations to classical diffusion and smoothed particle hydrodynamics
Du, Qiang; Lehoucq, R. B.; Tartakovsky, A. M.
2014-12-31
The contribution of the paper is the approximation of a classical diffusion operator by an integral equation with a volume constraint. A particular focus is on classical diffusion problems associated with Neumann boundary conditions. By exploiting this approximation, we can also approximate other quantities such as the flux out of a domain. Our analysis of the model equation on the continuum level is closely related to the recent work on nonlocal diffusion and peridynamic mechanics. In particular, we elucidate the role of a volumetric constraint as an approximation to a classical Neumann boundary condition in the presence of physical boundary. The volume-constrained integral equation then provides the basis for accurate and robust discretization methods. As a result, an immediate application is to the understanding and improvement of the Smoothed Particle Hydrodynamics (SPH) method.
Approximate polynomial preconditioning applied to biharmonic equations on vector supercomputers
NASA Technical Reports Server (NTRS)
Wong, Yau Shu; Jiang, Hong
1987-01-01
Applying a finite difference approximation to a biharmonic equation results in a very ill-conditioned system of equations. This paper examines the conjugate gradient method used in conjunction with the generalized and approximate polynomial preconditionings for solving such linear systems. An approximate polynomial preconditioning is introduced, and is shown to be more efficient than the generalized polynomial preconditionings. This new technique provides a simple but effective preconditioning polynomial, which is based on another coefficient matrix rather than the original matrix operator as commonly used.
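A common concrete instance of polynomial preconditioning is the truncated Neumann series M⁻¹ ≈ ω Σ_{k=0}^{d} (I - ωA)^k inside conjugate gradients. The sketch below uses a 1D Laplacian as a stand-in SPD test matrix, not the biharmonic systems or the particular polynomials of the paper; all names are illustrative:

```python
import numpy as np

def poly_precond(A, r, omega, degree):
    """Neumann-series polynomial preconditioner,
    M^{-1} r ~ omega * sum_{k=0}^{degree} (I - omega*A)^k r, in Horner form."""
    z = r.copy()
    for _ in range(degree):
        z = r + z - omega * (A @ z)
    return omega * z

def pcg(A, b, precond, tol=1e-8, maxit=2000):
    """Preconditioned conjugate gradient; returns solution and iteration count."""
    x = np.zeros_like(b)
    r = b.copy()
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, it
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# 1D Laplacian: ill-conditioned SPD stand-in (a biharmonic operator is worse)
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.linspace(1.0, 2.0, n)
x_poly, it_poly = pcg(A, b, lambda r: poly_precond(A, r, omega=0.25, degree=8))
x_plain, it_plain = pcg(A, b, lambda r: r)
assert np.linalg.norm(A @ x_poly - b) < 1e-6
assert it_poly < it_plain   # polynomial preconditioning cuts the iteration count
```

Each preconditioned iteration costs several extra matrix-vector products, so the win claimed in the paper is in total work and vectorizability, not merely in iteration count.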
On the Beebe-Linderberg two-electron integral approximation
NASA Astrophysics Data System (ADS)
Røeggen, I.; Wisløff-Nilssen, E.
1986-12-01
The Beebe-Linderberg two-electron integral approximation, which is generated by a Cholesky decomposition of the two-electron integral matrix ([μν|λσ]), is slightly modified. On the basis of test calculations, two key questions concerning this approximation are discussed: The numerical rank of the two-electron integral matrix and the relationship between the integral threshold and electronic properties. The numerical results presented in this work suggest that the modified Beebe-Linderberg approximation might be considered as an alternative to effective core potential methods.
Quadrupole Collective Inertia in Nuclear Fission: Cranking Approximation
Baran, A.; Sheikh, J. A.; Dobaczewski, J.; Nazarewicz, Witold
2011-01-01
The collective mass tensor derived from the cranking approximation to the adiabatic time-dependent Hartree-Fock-Bogoliubov (ATDHFB) approach is compared with that obtained in the Gaussian Overlap Approximation (GOA) to the generator coordinate method. Illustrative calculations are carried out for one-dimensional quadrupole fission pathways in ^{256}Fm. It is shown that the collective mass exhibits strong variations with the quadrupole collective coordinate. These variations are related to changes in the intrinsic shell structure. The differences between the collective inertia obtained in the cranking and perturbative cranking approximations to ATDHFB, and within the GOA, are discussed.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Larson, Mats G.
2000-01-01
We consider a posteriori error estimates for finite volume and finite element methods on arbitrary meshes subject to prescribed error functionals. Error estimates of this type are useful in a number of computational settings: (1) quantitative prediction of the numerical solution error, (2) adaptive meshing, and (3) load balancing of work on parallel computing architectures. Our analysis recasts the class of Godunov finite volumes schemes as a particular form of discontinuous Galerkin method utilizing broken space approximation obtained via reconstruction of cell-averaged data. In this general framework, weighted residual error bounds are readily obtained using duality arguments and Galerkin orthogonality. Additional consideration is given to issues such as nonlinearity, efficiency, and the relationship to other existing methods. Numerical examples are given throughout the talk to demonstrate the sharpness of the estimates and efficiency of the techniques. Additional information is contained in the original.
Approximated maximum likelihood estimation in multifractal random walks
NASA Astrophysics Data System (ADS)
Løvsletten, O.; Rypdal, M.
2012-04-01
We present an approximated maximum likelihood method for the multifractal random walk processes of [E. Bacry et al., Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation of the dependency structure for the latent volatility. The procedure is implemented as a package in the R language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.
Approximate Killing vectors on S{sup 2}
Cook, Gregory B.; Whiting, Bernard F.
2007-08-15
We present a new method for computing the best approximation to a Killing vector on closed 2-surfaces that are topologically S{sup 2}. When solutions of Killing's equation do not exist, this method is shown to yield results superior to those produced by existing methods. In addition, this method appears to provide a new tool for studying the horizon geometry of distorted black holes.
Bent approximations to synchrotron radiation optics
Heald, S.
1981-01-01
Ideal optical elements can be approximated by bending flats or cylinders. This paper considers the applications of these approximate optics to synchrotron radiation. Analytic and raytracing studies are used to compare their optical performance with the corresponding ideal elements. It is found that for many applications the performance is adequate, with the additional advantages of lower cost and greater flexibility. Particular emphasis is placed on obtaining the practical limitations on the use of the approximate elements in typical beamline configurations. Also considered are the possibilities for approximating very long length mirrors using segmented mirrors.
Least squares approximation of two-dimensional FIR digital filters
NASA Astrophysics Data System (ADS)
Alliney, S.; Sgallari, F.
1980-02-01
In this paper, a new method for the synthesis of two-dimensional FIR digital filters is presented. The method is based on a least-squares approximation of the ideal frequency response; an orthogonality property of certain functions, related to the frequency sampling design, improves the computational efficiency.
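The orthogonality idea can be sketched in one dimension: because the DFT exponentials are orthogonal over the sampling grid, the least-squares fit to samples of an ideal frequency response reduces to an inverse DFT (frequency-sampling design). An illustrative 1D sketch, not the paper's 2D method; names are invented for the example:

```python
import cmath

def freq_sampling_lowpass(N, cutoff):
    """Least-squares/frequency-sampling FIR design: sample an ideal zero-phase
    lowpass response on the DFT grid and take the inverse DFT. Orthogonality of
    the exponentials over the grid makes this the least-squares fit."""
    H = []
    for k in range(N):
        f = k / N if k <= N // 2 else k / N - 1.0   # frequencies in (-1/2, 1/2]
        H.append(1.0 if abs(f) <= cutoff else 0.0)
    return [sum(H[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N
            for n in range(N)]

def response(h, f):
    """Magnitude of the filter's frequency response at normalized frequency f."""
    return abs(sum(c * cmath.exp(-2j * cmath.pi * f * n) for n, c in enumerate(h)))

h = freq_sampling_lowpass(33, 0.1)
assert abs(response(h, 0.0) - 1.0) < 1e-9   # DC is passed exactly
assert response(h, 0.45) < 0.25             # high frequencies are attenuated
```

In the 2D case of the paper the same orthogonality argument factorizes over the two frequency axes, which is the source of the computational-efficiency gain.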
Approximation of reliability of direct genomic breeding values
Technology Transfer Automated Retrieval System (TEKTRAN)
Two methods to efficiently approximate theoretical genomic reliabilities are presented. The first method is based on the direct inverse of the left hand side (LHS) of mixed model equations. It uses the genomic relationship matrix for a small subset of individuals with the highest genomic relationshi...
Cophylogeny reconstruction via an approximate Bayesian computation.
Baudet, C; Donati, B; Sinaimeri, B; Crescenzi, P; Gautier, C; Matias, C; Sagot, M-F
2015-05-01
Despite an increasingly vast literature on cophylogenetic reconstructions for studying host-parasite associations, understanding the common evolutionary history of such systems remains a problem that is far from being solved. Most algorithms for host-parasite reconciliation use an event-based model, where the events include in general (a subset of) cospeciation, duplication, loss, and host switch. All known parsimonious event-based methods then assign a cost to each type of event in order to find a reconstruction of minimum cost. The main problem with this approach is that the cost of the events strongly influences the reconciliation obtained. Some earlier approaches attempt to avoid this problem by finding a Pareto set of solutions and hence by considering event costs under some minimization constraints. To deal with this problem, we developed an algorithm, called Coala, for estimating the frequency of the events based on an approximate Bayesian computation approach. The benefits of this method are 2-fold: (i) it provides more confidence in the set of costs to be used in a reconciliation, and (ii) it allows estimation of the frequency of the events in cases where the data set consists of trees with a large number of taxa. We evaluate our method on simulated and on biological data sets. We show that in both cases, for the same pair of host and parasite trees, different sets of frequencies for the events lead to equally probable solutions. Moreover, often these solutions differ greatly in terms of the number of inferred events. It appears crucial to take this into account before attempting any further biological interpretation of such reconciliations. More generally, we also show that the set of frequencies can vary widely depending on the input host and parasite trees. Indiscriminately applying a standard vector of costs may thus not be a good strategy. PMID:25540454
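The rejection form of approximate Bayesian computation that underlies such methods can be sketched in a few lines; the example below estimates the mean of an exponential sample rather than cophylogenetic event frequencies, and all names are illustrative:

```python
import random

def abc_rejection(observed_mean, n_obs, prior, simulate, eps,
                  n_draws=20000, seed=7):
    """ABC by rejection: draw a parameter from the prior, simulate data under
    it, and keep the parameter whenever the simulated summary statistic falls
    within eps of the observed one."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior(rng)
        if abs(simulate(rng, theta, n_obs) - observed_mean) < eps:
            accepted.append(theta)
    return accepted

prior = lambda rng: rng.uniform(0, 10)   # flat prior on the mean
simulate = lambda rng, lam, n: sum(rng.expovariate(1 / lam)
                                   for _ in range(n)) / n
post = abc_rejection(observed_mean=3.0, n_obs=50,
                     prior=prior, simulate=simulate, eps=0.3)
est = sum(post) / len(post)
assert 2.0 < est < 4.0   # the accepted draws concentrate near the true mean
```

Coala replaces the toy summary statistic here with distances between simulated and observed reconciliations, so the accepted draws estimate posterior event frequencies instead of a single mean.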
Quirks of Stirling's Approximation
ERIC Educational Resources Information Center
Macrae, Roderick M.; Allgeier, Benjamin M.
2013-01-01
Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
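The quirk is easy to exhibit numerically: the classroom form ln n! ≈ n ln n - n is off by the slowly growing term (1/2) ln(2πn), which the corrected form restores. A quick sketch:

```python
import math

def ln_factorial(n):
    return math.lgamma(n + 1)          # exact ln n!

def stirling_simple(n):
    return n * math.log(n) - n         # the usual classroom form

def stirling_full(n):
    # ... plus the 0.5*ln(2*pi*n) correction term
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

n = 100
# The simple form is off by about 0.5*ln(2*pi*n) ~ 3.2 even at n = 100,
# while the corrected form is accurate to roughly 1/(12n).
assert abs(ln_factorial(n) - stirling_simple(n)) > 3.0
assert abs(ln_factorial(n) - stirling_full(n)) < 1e-3
```

Whether the missing term matters depends on what is differentiated afterwards; in entropy derivations it often cancels, which is why the naive form usually survives, but it cannot be dropped blindly.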
Spline approximations for nonlinear hereditary control systems
NASA Technical Reports Server (NTRS)
Daniel, P. L.
1982-01-01
A spline-based approximation scheme is discussed for optimal control problems governed by nonlinear nonautonomous delay differential equations. The approximating framework reduces the original control problem to a sequence of optimization problems governed by ordinary differential equations. Convergence proofs, which appeal directly to dissipative-type estimates for the underlying nonlinear operator, are given and numerical findings are summarized.
An approximate model for pulsar navigation simulation
NASA Astrophysics Data System (ADS)
Jovanovic, Ilija; Enright, John
2016-02-01
This paper presents an approximate model for the simulation of pulsar aided navigation systems. High fidelity simulations of these systems are computationally intensive and impractical for simulating periods of a day or more. Simulation of yearlong missions is done by abstracting navigation errors as periodic Gaussian noise injections. This paper presents an intermediary approximate model to simulate position errors for periods of several weeks, useful for building more accurate Gaussian error models. This is done by abstracting photon detection and binning, replacing it with a simple deterministic process. The approximate model enables faster computation of error injection models, allowing the error model to be inexpensively updated throughout a simulation. Testing of the approximate model revealed an optimistic performance prediction for non-millisecond pulsars with more accurate predictions for pulsars in the millisecond spectrum. This performance gap was attributed to noise which is not present in the approximate model but can be predicted and added to improve accuracy.
Network histograms and universality of blockmodel approximation
Olhede, Sofia C.; Wolfe, Patrick J.
2014-01-01
In this paper we introduce the network histogram, a statistical summary of network interactions to be used as a tool for exploratory data analysis. A network histogram is obtained by fitting a stochastic blockmodel to a single observation of a network dataset. Blocks of edges play the role of histogram bins and community sizes that of histogram bandwidths or bin sizes. Just as standard histograms allow for varying bandwidths, different blockmodel estimates can all be considered valid representations of an underlying probability model, subject to bandwidth constraints. Here we provide methods for automatic bandwidth selection, by which the network histogram approximates the generating mechanism that gives rise to exchangeable random graphs. This makes the blockmodel a universal network representation for unlabeled graphs. With this insight, we discuss the interpretation of network communities in light of the fact that many different community assignments can all give an equally valid representation of such a network. To demonstrate the fidelity-versus-interpretability tradeoff inherent in considering different numbers and sizes of communities, we analyze two publicly available networks—political weblogs and student friendships—and discuss how to interpret the network histogram when additional information related to node and edge labeling is present. PMID:25275010
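With known group labels, the "histogram" itself is just the matrix of empirical block edge densities. A minimal sketch on a synthetic two-community graph (bandwidth selection and label estimation, the substantive contributions of the paper, are omitted; names are illustrative):

```python
import random

def network_histogram(adj, labels, k):
    """Blockmodel histogram: bin heights are the empirical edge densities
    within and between the k groups (upper-triangle convention)."""
    n = len(adj)
    counts = [[0] * k for _ in range(k)]
    pairs = [[0] * k for _ in range(k)]
    for i in range(n):
        for j in range(i + 1, n):
            a, b = sorted((labels[i], labels[j]))
            pairs[a][b] += 1
            counts[a][b] += adj[i][j]
    return [[counts[a][b] / pairs[a][b] if pairs[a][b] else 0.0
             for b in range(k)] for a in range(k)]

# Synthetic two-community graph: dense within blocks, sparse between
rng = random.Random(3)
n = 200
labels = [0] * (n // 2) + [1] * (n // 2)
P = [[0.5, 0.05], [0.05, 0.5]]
adj = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        e = 1 if rng.random() < P[labels[i]][labels[j]] else 0
        adj[i][j] = adj[j][i] = e
H = network_histogram(adj, labels, 2)
assert abs(H[0][0] - 0.5) < 0.05 and abs(H[0][1] - 0.05) < 0.03
```

The community sizes play the role of the histogram bandwidth: fewer, larger blocks give a smoother but coarser estimate of the underlying graphon.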
The time-dependent Gutzwiller approximation
NASA Astrophysics Data System (ADS)
Fabrizio, Michele
2015-03-01
The time-dependent Gutzwiller Approximation (t-GA) is shown to be capable of tracking the off-equilibrium evolution both of coherent quasiparticles and of incoherent Hubbard bands. The method is used to demonstrate that the sharp dynamical crossover observed by time-dependent DMFT in the quench-dynamics of a half-filled Hubbard model can be identified within the t-GA as a genuine dynamical transition separating two distinct physical phases. This result, strictly variational for lattices of infinite coordination number, is intriguing as it actually questions the occurrence of thermalization. Next, we shall present how t-GA works in a multi-band model for V2O3 that displays a first-order Mott transition. We shall show that a physically accessible excitation pathway is able to collapse the Mott gap down and drive off-equilibrium the insulator into a metastable metal phase. Work supported by the European Union, Seventh Framework Programme, under the project GO FAST, Grant Agreement No. 280555.
Approximate algorithms for partitioning and assignment problems
NASA Technical Reports Server (NTRS)
Iqbal, M. A.
1986-01-01
The problem of optimally assigning the modules of a parallel/pipelined program to the processors of a multiple-computer system, under certain restrictions on the interconnection structure of both the program and the system, is considered. For a variety of such programs it is possible to determine in linear time whether a partition of the program exists in which the load on any processor is within a given bound. This check, combined with a binary search over a finite range, provides an approximate solution to the partitioning problem. The specific problems considered are: a chain-structured parallel program on a chain-like computer system, multiple chain-like programs on a host-satellite system, and a tree-structured parallel program on a host-satellite system. For a problem with m modules and n processors, the complexity of the algorithm is no worse than O(mn log(W_T/ε)), where W_T is the cost of assigning all modules to one processor and ε is the desired accuracy.
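The binary-search-plus-linear-feasibility-check scheme can be sketched for the simplest case, a chain of modules on identical processors; this is an illustrative sketch of the general idea, not the paper's host-satellite algorithms:

```python
def feasible(costs, k, bound):
    """Greedy linear-time check: can the chain be cut into at most k
    contiguous segments, each with total cost <= bound?"""
    segments, load = 1, 0
    for c in costs:
        if c > bound:
            return False
        if load + c > bound:
            segments += 1
            load = c
        else:
            load += c
    return segments <= k

def chain_partition_bottleneck(costs, k, eps=1e-6):
    """Binary search on the bottleneck bound: each probe costs O(m), and the
    search takes O(log(W_T/eps)) probes, W_T being the total cost."""
    lo, hi = max(costs), sum(costs)
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if feasible(costs, k, mid):
            hi = mid
        else:
            lo = mid
    return hi

costs = [4, 2, 7, 1, 5, 3, 8, 2]
b = chain_partition_bottleneck(costs, k=3)
# Optimal 3-way split is [4,2,7] [1,5,3] [8,2], with bottleneck 13
assert abs(b - 13.0) < 1e-3
```

The greedy check is exact for chains; the tree and host-satellite variants in the paper replace it with other linear-time feasibility tests under the same binary-search wrapper.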
Rainbows: Mie computations and the Airy approximation.
Wang, R T; van de Hulst, H C
1991-01-01
Efficient and accurate computation of the scattered intensity pattern by the Mie formulas is now feasible for size parameters up to x = 50,000 at least, which in visual light means spherical drops with diameters up to 6 mm. We present a method for evaluating the Mie coefficients from the ratios between Riccati-Bessel and Neumann functions of successive order. We probe the applicability of the Airy approximation, which we generalize to rainbows of arbitrary p (number of internal reflections = p - 1), by comparing the Mie and Airy intensity patterns. Millimeter size water drops show a match in all details, including the position and intensity of the supernumerary maxima and the polarization. A fairly good match is still seen for drops of 0.1 mm. A small spread in sizes helps to smooth out irrelevant detail. The dark band between the rainbows is used to test more subtle features. We conclude that this band contains not only externally reflected light (p = 0) but also a sizable contribution from the p = 6 and p = 7 rainbows, which shift rapidly with wavelength. The higher the refractive index, the closer both theories agree on the first primary rainbow (p = 2) peak for drop diameters as small as 0.02 mm. This may be useful in supporting experimental work. PMID:20581954
Simultaneous Approximation to Real and p-adic Numbers
NASA Astrophysics Data System (ADS)
Zelo, Dmitrij
2009-02-01
We study the problem of simultaneous approximation to a fixed family of real and p-adic numbers by roots of integer polynomials of restricted type. The method that we use for this purpose was developed by H. Davenport and W.M. Schmidt in their study of approximation to real numbers by algebraic integers. This method, based on Mahler's duality, requires studying the dual problem of approximation to successive powers of these numbers by rational numbers with the same denominators. Dirichlet's box principle provides estimates for such approximations, but one can do better. In this thesis we establish constraints on how much better one can do when dealing with the numbers and their squares. We also construct examples showing that at least in some instances these constraints are optimal. Going back to the original problem, we obtain estimates for simultaneous approximation to real and p-adic numbers by roots of integer polynomials of degree 3 or 4 with fixed coefficients in degree at least 3. In the case of a single real number (and no p-adic numbers), we extend work of D. Roy by showing that the square of the golden ratio is the optimal exponent of approximation by algebraic numbers of degree 4 with bounded denominator and trace.
Computing gap free Pareto front approximations with stochastic search algorithms.
Schütze, Oliver; Laumanns, Marco; Tantar, Emilia; Coello, Carlos A Coello; Talbi, El-Ghazali
2010-01-01
Recently, a convergence proof of stochastic search algorithms toward finite size Pareto set approximations of continuous multi-objective optimization problems has been given. The focus was on obtaining a finite approximation that captures the entire solution set in some suitable sense, which was defined by the concept of epsilon-dominance. Though bounds on the quality of the limit approximation (which are entirely determined by the archiving strategy and the value of epsilon) have been obtained, the strategies do not guarantee to obtain a gap free approximation of the Pareto front. That is, such approximations A can reveal gaps in the sense that points f in the Pareto front can exist such that the distance of f to any image point F(a), a ∈ A, is "large." Since such gap free approximations are desirable in certain applications, and the related archiving strategies can be advantageous when memetic strategies are included in the search process, we are aiming in this work for such methods. We present two novel strategies that accomplish this task in the probabilistic sense and under mild assumptions on the stochastic search algorithm. In addition to the convergence proofs, we give some numerical results to visualize the behavior of the different archiving strategies. Finally, we demonstrate the potential for a possible hybridization of a given stochastic search algorithm with a particular local search strategy (multi-objective continuation methods) by showing that the concept of epsilon-dominance can be integrated into this approach in a suitable way.
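A minimal sketch of additive epsilon-dominance archiving for a bi-objective minimization problem may make the concept concrete; the gap-avoiding strategies proposed in the paper add further acceptance rules on top of a basic archiver like this one.

```python
def eps_dominates(f1, f2, eps):
    """Additive epsilon-dominance for minimization: f1 eps-dominates f2
    when f1 - eps is componentwise <= f2."""
    return all(a - eps <= b for a, b in zip(f1, f2))

def update_archive(archive, f, eps):
    """Insert objective vector f into the archive, discarding anything
    epsilon-dominated. A basic archiver only; it does not by itself
    guarantee a gap free front."""
    if any(eps_dominates(g, f, eps) for g in archive):
        return archive            # f is covered by an archived point
    return [g for g in archive if not eps_dominates(f, g, eps)] + [f]
```

Because every accepted point eps-dominates a small box around itself, the archive stays finite, which is exactly the property the convergence proofs rely on.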
Approximate Riemann solvers for the Godunov SPH (GSPH)
NASA Astrophysics Data System (ADS)
Puri, Kunal; Ramachandran, Prabhu
2014-08-01
The Godunov Smoothed Particle Hydrodynamics (GSPH) method is coupled with non-iterative, approximate Riemann solvers for solutions to the compressible Euler equations. The use of approximate solvers avoids the expensive solution of the non-linear Riemann problem for every interacting particle pair, as required by GSPH. In addition, we establish an equivalence between the dissipative terms of GSPH and the signal based SPH artificial viscosity, under the restriction of a class of approximate Riemann solvers. This equivalence is used to explain the anomalous “wall heating” experienced by GSPH and we provide some suggestions to overcome it. Numerical tests in one and two dimensions are used to validate the proposed Riemann solvers. A general SPH pairing instability is observed for two-dimensional problems when using unequal mass particles. In general, the Ducowicz, Roe's, and HLLC approximate Riemann solvers are found to be suitable replacements for the iterative Riemann solver in the original GSPH scheme.
Model reduction using new optimal Routh approximant technique
NASA Technical Reports Server (NTRS)
Hwang, Chyi; Guo, Tong-Yi; Sheih, Leang-San
1992-01-01
An optimal Routh approximant of a single-input single-output dynamic system is a reduced-order transfer function of which the denominator is obtained by the Routh approximation method while the numerator is determined by minimizing a time-response integral-squared-error (ISE) criterion. In this paper, a new elegant approach is presented for obtaining the optimal Routh approximants for linear time-invariant continuous-time systems. The approach is based on the Routh canonical expansion, which is a finite-term orthogonal series of rational basis functions, and minimization of the ISE criterion. A procedure for combining the above approach with the bilinear transformation is also presented in order to obtain the optimal bilinear Routh approximants of linear time-invariant discrete-time systems. The proposed technique is simple in formulation and is amenable to practical implementation.
Quantum instanton approximation for thermal rate constants of chemical reactions
NASA Astrophysics Data System (ADS)
Miller, William H.; Zhao, Yi; Ceotto, Michele; Yang, Sandy
2003-07-01
A quantum mechanical theory for chemical reaction rates is presented which is modeled after the [semiclassical (SC)] instanton approximation. It incorporates the desirable aspects of the instanton picture, which involves only properties of the (SC approximation to the) Boltzmann operator, but corrects its quantitative deficiencies by replacing the SC approximation for the Boltzmann operator by the quantum Boltzmann operator, exp(-βĤ). Since a calculation of the quantum Boltzmann operator is feasible for quite complex molecular systems (by Monte Carlo path integral methods), having an accurate rate theory that involves only the Boltzmann operator could be quite useful. The application of this quantum instanton approximation to several one- and two-dimensional model problems illustrates its potential; e.g., it is able to describe thermal rate constants accurately (~10-20% error) from high to low temperatures deep in the tunneling regime, and applies equally well to asymmetric and symmetric potentials.
Structural Reliability Analysis and Optimization: Use of Approximations
NASA Technical Reports Server (NTRS)
Grandhi, Ramana V.; Wang, Liping
1999-01-01
This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. Particularly, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard text books. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. There are approximations in calculating the failure probability of a limit state function. The first one, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated. The accuracy and efficiency of the approximations make the search process quite practical for analysis intensive approaches such as the finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different
Superfluidity of heated Fermi systems in the static fluctuation approximation
Khamzin, A. A.; Nikitin, A. S.; Sitdikov, A. S.
2015-10-15
Superfluidity properties of heated finite Fermi systems are studied in the static fluctuation approximation, an original method that relies on a single, controlled approximation permitting quasiparticle correlations to be taken into account correctly, thereby going beyond the independent-quasiparticle model. A closed self-consistent set of equations for calculating correlation functions at finite temperature is obtained for a finite Fermi system described by the Bardeen–Cooper–Schrieffer Hamiltonian. An equation for the energy gap is found with allowance for fluctuation effects. It is shown that the phase transition to the superfluid state is smeared upon the inclusion of fluctuations.
Kernel approximation for solving few-body integral equations
NASA Astrophysics Data System (ADS)
Christie, I.; Eyre, D.
1986-06-01
This paper investigates an approximate method for solving integral equations that arise in few-body problems. The method is to replace the kernel by a degenerate kernel defined on a finite dimensional subspace of piecewise Lagrange polynomials. Numerical accuracy of the method is tested by solving the two-body Lippmann-Schwinger equation with non-separable potentials, and the three-body Amado-Lovelace equation with separable two-body potentials.
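As an illustration of the kernel-approximation idea in a simpler setting, one can replace the kernel of a one-dimensional Fredholm equation of the second kind by its values on a quadrature grid and solve the resulting finite linear system. This is a crude stand-in for the piecewise Lagrange subspaces used in the paper, and the few-body momentum-space equations are of course more involved; the sketch below is self-contained pure Python.

```python
def solve_fredholm(kernel, f, lam, n=81, a=0.0, b=1.0):
    """Solve phi(x) = f(x) + lam * int_a^b K(x,y) phi(y) dy on a grid.

    The kernel is sampled on trapezoid nodes, turning the integral
    equation into the linear system (I - lam*K*W) phi = f."""
    h = (b - a) / (n - 1)
    xs = [a + i * h for i in range(n)]
    w = [h] * n
    w[0] = w[-1] = h / 2.0                      # trapezoid weights
    A = [[(1.0 if i == j else 0.0) - lam * w[j] * kernel(xs[i], xs[j])
          for j in range(n)] for i in range(n)]
    rhs = [f(x) for x in xs]
    # Gaussian elimination with partial pivoting.
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p], rhs[k], rhs[p] = A[p], A[k], rhs[p], rhs[k]
        for r in range(k + 1, n):
            m = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= m * A[k][c]
            rhs[r] -= m * rhs[k]
    phi = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * phi[j] for j in range(i + 1, n))
        phi[i] = (rhs[i] - s) / A[i][i]
    return xs, phi
```

For the separable test kernel K(x,y) = xy with f(x) = x and lam = 1 on [0,1], the exact solution is phi(x) = 1.5x, which the grid solution reproduces to quadrature accuracy.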
Frankenstein's glue: transition functions for approximate solutions
NASA Astrophysics Data System (ADS)
Yunes, Nicolás
2007-09-01
Approximations are commonly employed to find approximate solutions to the Einstein equations. These solutions, however, are usually only valid in some specific spacetime region. A global solution can be constructed by gluing approximate solutions together, but this procedure is difficult because discontinuities can arise, leading to large violations of the Einstein equations. In this paper, we provide an attempt to formalize this gluing scheme by studying transition functions that join approximate analytic solutions together. In particular, we propose certain sufficient conditions on these functions and prove that these conditions guarantee that the joined solution still satisfies the Einstein equations analytically to the same order as the approximate ones. An example is also provided for a binary system of non-spinning black holes, where the approximate solutions are taken to be given by a post-Newtonian expansion and a perturbed Schwarzschild solution. For this specific case, we show that if the transition functions satisfy the proposed conditions, then the joined solution does not contain any violations to the Einstein equations larger than those already inherent in the approximations. We further show that if these functions violate the proposed conditions, then the matter content of the spacetime is modified by the introduction of a matter shell, whose stress energy tensor depends on derivatives of these functions.
Renormalization group methods for the Reynolds stress transport equations
NASA Technical Reports Server (NTRS)
Rubinstein, R.
1992-01-01
The Yakhot-Orszag renormalization group is used to analyze the pressure gradient-velocity correlation and return to isotropy terms in the Reynolds stress transport equations. The perturbation series for the relevant correlations, evaluated to lowest order in the epsilon-expansion of the Yakhot-Orszag theory, are infinite series in tensor product powers of the mean velocity gradient and its transpose. Formal lowest order Pade approximations to the sums of these series produce a rapid pressure strain model of the form proposed by Launder, Reece, and Rodi, and a return to isotropy model of the form proposed by Rotta. In both cases, the model constants are computed theoretically. The predicted Reynolds stress ratios in simple shear flows are evaluated and compared with experimental data. The possibility is discussed of deriving higher order nonlinear models by approximating the sums more accurately. The Yakhot-Orszag renormalization group provides a systematic procedure for deriving turbulence models. Typical applications have included theoretical derivation of the universal constants of isotropic turbulence theory, such as the Kolmogorov constant, and derivation of two equation models, again with theoretically computed constants and low Reynolds number forms of the equations. Recent work has applied this formalism to Reynolds stress modeling, previously in the form of a nonlinear eddy viscosity representation of the Reynolds stresses, which can be used to model the simplest normal stress effects. The present work attempts to apply the Yakhot-Orszag formalism to Reynolds stress transport modeling.
Approximate knowledge compilation: The first order case
Val, A. del
1996-12-31
Knowledge compilation procedures make a knowledge base more explicit so as to make inference with respect to the compiled knowledge base tractable or at least more efficient. Most work to date in this area has been restricted to the propositional case, despite the importance of first order theories for expressing knowledge concisely. Focusing on (LUB) approximate compilation, our contribution is twofold: (1) we present a new ground algorithm for approximate compilation which can produce exponential savings with respect to the previously known algorithm; (2) we show that both ground algorithms can be lifted to the first order case preserving their correctness for approximate compilation.
APPROXIMATING LIGHT RAYS IN THE SCHWARZSCHILD FIELD
Semerák, O.
2015-02-10
A short formula is suggested that approximates photon trajectories in the Schwarzschild field better than other simple prescriptions from the literature. We compare it with various "low-order competitors", namely, with those following from exact formulas for small M, with one of the results based on pseudo-Newtonian potentials, with a suitably adjusted hyperbola, and with the effective and often employed approximation by Beloborodov. Our main concern is the shape of the photon trajectories at finite radii, yet asymptotic behavior is also discussed, important for lensing. An example is attached indicating that the newly suggested approximation is usable, and very accurate, for practically solving the ray-deflection exercise.
Convergence of Numerical Approximations for a Non-Newtonian Model of Suspensions
NASA Astrophysics Data System (ADS)
Kapustyan, O. V.; Valero, J.; Kasyanov, P. O.; Giménez, A.; Amigó, J. M.
2015-12-01
In this paper, we prove the convergence of the numerical approximations of a scalar parabolic equation modeling a non-Newtonian fluid. We use finite-difference schemes and the well-known method of external approximations.
Adiabatic approximation for the density matrix
NASA Astrophysics Data System (ADS)
Band, Yehuda B.
1992-05-01
An adiabatic approximation for the Liouville density-matrix equation which includes decay terms is developed. The adiabatic approximation employs the eigenvectors of the non-normal Liouville operator. The approximation is valid when there exists a complete set of eigenvectors of the non-normal Liouville operator (i.e., the eigenvectors span the density-matrix space), the time rate of change of the Liouville operator is small, and an auxiliary matrix is nonsingular. Numerical examples are presented involving efficient population transfer in a molecule by stimulated Raman scattering, with the intermediate level of the molecule decaying on a time scale that is fast compared with the pulse durations of the pump and Stokes fields. The adiabatic density-matrix approximation can be simply used to determine the density matrix for atomic or molecular systems interacting with cw electromagnetic fields when spontaneous emission or other decay mechanisms prevail.
Approximate probability distributions of the master equation
NASA Astrophysics Data System (ADS)
Thomas, Philipp; Grima, Ramon
2015-07-01
Master equations are common descriptions of mesoscopic systems. Analytical solutions to these equations can rarely be obtained. We here derive an analytical approximation of the time-dependent probability distribution of the master equation using orthogonal polynomials. The solution is given in two alternative formulations: a series with continuous and a series with discrete support, both of which can be systematically truncated. While both approximations satisfy the system size expansion of the master equation, the continuous distribution approximations become increasingly negative and tend to oscillations with increasing truncation order. In contrast, the discrete approximations rapidly converge to the underlying non-Gaussian distributions. The theory is shown to lead to particularly simple analytical expressions for the probability distributions of molecule numbers in metabolic reactions and gene expression systems.
Linear Approximation SAR Azimuth Processing Study
NASA Technical Reports Server (NTRS)
Lindquist, R. B.; Masnaghetti, R. K.; Belland, E.; Hance, H. V.; Weis, W. G.
1979-01-01
A segmented linear approximation of the quadratic phase function that is used to focus the synthetic antenna of a SAR was studied. Ideal focusing, using a quadratically varying phase focusing function during the time radar target histories are gathered, requires a large number of complex multiplications. These can be largely eliminated by using linear approximation techniques. The result is a reduced processor size and chip count relative to ideally focused processing and a correspondingly increased feasibility for spaceworthy implementation. A preliminary design and sizing for a spaceworthy linear approximation SAR azimuth processor meeting requirements similar to those of the SEASAT-A SAR was developed. The study resulted in a design with approximately 1500 ICs, 1.2 cubic feet of volume, and 350 watts of power for a single look, 4000 range cell azimuth processor with 25 meters resolution.
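The error behavior of a segmented linear fit to a quadratic phase can be reproduced in a few lines. This sketch is illustrative only, not the report's actual processor design: it interpolates phi(t) = pi*K*t^2 at segment endpoints and measures the peak phase error, which falls by a factor of four each time the segment count doubles.

```python
import math

def quadratic_phase(t, K):
    """Ideal focusing phase phi(t) = pi * K * t^2."""
    return math.pi * K * t * t

def segmented_linear_phase(t, K, t_max, segments):
    """Piecewise-linear approximation interpolating the quadratic
    phase at equally spaced segment endpoints on [0, t_max]."""
    seg = min(int(abs(t) / t_max * segments), segments - 1)
    t0 = seg * t_max / segments
    t1 = (seg + 1) * t_max / segments
    p0, p1 = quadratic_phase(t0, K), quadratic_phase(t1, K)
    return p0 + (abs(t) - t0) * (p1 - p0) / (t1 - t0)

def max_phase_error(K, t_max, segments, samples=1000):
    """Peak deviation between the quadratic and segmented phases."""
    return max(abs(quadratic_phase(t, K) -
                   segmented_linear_phase(t, K, t_max, segments))
               for t in (i * t_max / samples for i in range(samples)))
```

For linear interpolation of pi*K*t^2 on segments of width h, the peak error is pi*K*h^2/4, so doubling the number of segments quarters the phase error; this is the trade the report exploits to replace complex multiplications with cheaper linear-phase segments.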
A Survey of Techniques for Approximate Computing
Mittal, Sparsh
2016-03-18
Approximate computing trades off computation quality against the effort expended, and as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies, etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide insights to researchers into the working of AC techniques and inspire more efforts in this area to make AC the mainstream computing approach in future systems.
Kernel polynomial approximations for densities of states and spectral functions
Silver, R.N.; Voter, A.F.; Kress, J.D.; Roeder, H.
1996-03-01
Chebyshev polynomial approximations are an efficient and numerically stable way to calculate properties of the very large Hamiltonians important in computational condensed matter physics. The present paper derives an optimal kernel polynomial which enforces positivity of density of states and spectral estimates, achieves the best energy resolution, and preserves normalization. This kernel polynomial method (KPM) is demonstrated for electronic structure and dynamic magnetic susceptibility calculations. For tight binding Hamiltonians of Si, we show how to achieve high precision and rapid convergence of the cohesive energy and vacancy formation energy by careful attention to the order of approximation. For disordered XXZ-magnets, we show that the KPM provides a simpler and more reliable procedure for calculating spectral functions than Lanczos recursion methods. Polynomial approximations to Fermi projection operators are also proposed. 26 refs., 10 figs.
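The role of the kernel can be seen in a toy reconstruction of a single delta function from its Chebyshev moments: the bare truncated series oscillates below zero, while Jackson-type damping factors (one common realization of the positivity-preserving kernel idea; conventions vary across the literature) keep the estimate nonnegative.

```python
import math

def jackson(N):
    """Jackson damping factors g_n for n = 0..N-1."""
    g = []
    for n in range(N):
        g.append(((N - n + 1) * math.cos(math.pi * n / (N + 1))
                  + math.sin(math.pi * n / (N + 1))
                  / math.tan(math.pi / (N + 1))) / (N + 1))
    return g

def delta_reconstruction(x0, N, damping):
    """Truncated Chebyshev series for delta(x - x0), x0 in (-1, 1):
    delta(x - x0) ~ [mu_0 + 2 sum_n g_n mu_n T_n(x)] / (pi sqrt(1 - x^2))
    with exact moments mu_n = T_n(x0)."""
    mu = [math.cos(n * math.acos(x0)) for n in range(N)]
    def rho(x):
        s = damping[0] * mu[0]
        for n in range(1, N):
            s += 2 * damping[n] * mu[n] * math.cos(n * math.acos(x))
        return s / (math.pi * math.sqrt(1 - x * x))
    return rho
```

In a real KPM calculation the moments mu_n come from matrix-vector products with the Hamiltonian rather than from a known x0, but the damping step is identical.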
Efficient algorithm for approximating one-dimensional ground states
Aharonov, Dorit; Arad, Itai; Irani, Sandy
2010-07-15
The density-matrix renormalization-group method is very effective at finding ground states of one-dimensional (1D) quantum systems in practice, but it is a heuristic method, and there is no known proof for when it works. In this article we describe an efficient classical algorithm which provably finds a good approximation of the ground state of 1D systems under well-defined conditions. More precisely, our algorithm finds a matrix product state of bond dimension D whose energy approximates the minimal energy such states can achieve. The running time is exponential in D, and so the algorithm can be considered tractable even for D, which is logarithmic in the size of the chain. The result also implies trivially that the ground state of any local commuting Hamiltonian in 1D can be approximated efficiently; we improve this to an exact algorithm.
Polynomial approximation of functions in Sobolev spaces
NASA Technical Reports Server (NTRS)
Dupont, T.; Scott, R.
1980-01-01
Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.
Introduction to the Maxwell Garnett approximation: tutorial.
Markel, Vadim A
2016-07-01
This tutorial is devoted to the Maxwell Garnett approximation and related theories. Topics covered in this first, introductory part of the tutorial include the Lorentz local field correction, the Clausius-Mossotti relation and its role in the modern numerical technique known as the discrete dipole approximation, the Maxwell Garnett mixing formula for isotropic and anisotropic media, multicomponent mixtures and the Bruggeman equation, the concept of a smooth field, and the Wiener and Bergman-Milton bounds. PMID:27409680
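The isotropic Maxwell Garnett mixing formula itself is compact; a sketch for spherical inclusions follows (real-valued permittivities for simplicity, though the same expression is used with complex ones).

```python
def maxwell_garnett(eps_incl, eps_host, f):
    """Effective permittivity of spherical inclusions (eps_incl) at
    volume fraction f embedded in a host medium (eps_host):
        eps_eff = eps_host * (1 + 2*f*beta) / (1 - f*beta),
    where beta is the Clausius-Mossotti factor."""
    beta = (eps_incl - eps_host) / (eps_incl + 2 * eps_host)
    return eps_host * (1 + 2 * f * beta) / (1 - f * beta)
```

Sanity checks follow directly from the formula: at f = 0 the mixture is pure host, at f = 1 it is pure inclusion, and for intermediate fractions the result interpolates between the two.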
The Actinide Transition Revisited by Gutzwiller Approximation
NASA Astrophysics Data System (ADS)
Xu, Wenhu; Lanata, Nicola; Yao, Yongxin; Kotliar, Gabriel
2015-03-01
We revisit the problem of the actinide transition using the Gutzwiller approximation (GA) in combination with the local density approximation (LDA). In particular, we compute the equilibrium volumes of the actinide series and reproduce the abrupt change of density found experimentally near plutonium as a function of the atomic number. We discuss how this behavior relates to the electron correlations in the 5f states, the lattice structure, and the spin-orbit interaction. Our results are in good agreement with the experiments.
Computing functions by approximating the input
NASA Astrophysics Data System (ADS)
Goldberg, Mayer
2012-12-01
In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their output. Our approach assumes only the most rudimentary knowledge of algebra and trigonometry, and makes no use of calculus.
An improved proximity force approximation for electrostatics
Fosco, Cesar D.; Lombardo, Fernando C.; Mazzitelli, Francisco D.
2012-08-15
A quite straightforward approximation for the electrostatic interaction between two perfectly conducting surfaces suggests itself when the distance between them is much smaller than the characteristic lengths associated with their shapes. Indeed, in the so-called 'proximity force approximation' the electrostatic force is evaluated by first dividing each surface into a set of small flat patches, and then adding up the forces due to opposite pairs of patches, the contributions of which are approximated as due to pairs of parallel planes. This approximation has been widely and successfully applied in different contexts, ranging from nuclear physics to Casimir effect calculations. We present here an improvement on this approximation, based on a derivative expansion for the electrostatic energy contained between the surfaces. The results obtained could be useful for discussing the geometric dependence of the electrostatic force, and also as a convenient benchmark for numerical analyses of the tip-sample electrostatic interaction in atomic force microscopes. Highlights: the proximity force approximation (PFA) has been widely used in different areas; the PFA can be improved using a derivative expansion in the shape of the surfaces; we use the improved PFA to compute electrostatic forces between conductors; the results can be used as an analytic benchmark for numerical calculations in AFM; insight is provided for people who use the PFA to compute nuclear and Casimir forces.
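The patch-summation recipe described in the abstract is easy to try for a conducting sphere of radius R above a grounded plane. This is a sketch of the plain PFA only, not the paper's derivative expansion; the comparison value pi*eps0*R*V^2/d is the standard small-gap PFA result.

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def pfa_sphere_plane_force(R, d, V, n_rings=20000):
    """PFA estimate of the attractive force between a sphere (radius R)
    and a plane at closest separation d, potential difference V:
    sum parallel-plate contributions over annular patches."""
    F = 0.0
    dr = R / n_rings
    for i in range(n_rings):
        r = (i + 0.5) * dr
        gap = d + R - math.sqrt(R * R - r * r)   # local surface separation
        # parallel-plate force per unit area: eps0 * V^2 / (2 * gap^2)
        F += EPS0 * V * V / (2 * gap * gap) * 2 * math.pi * r * dr
    return F
```

For d much smaller than R the sum approaches the closed-form leading term pi*eps0*R*V^2/d, with corrections of relative order d/R and a logarithmic geometric term, which is the regime where the PFA, and hence the paper's improvement of it, is relevant.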
UNAERO: A package of FORTRAN subroutines for approximating unsteady aerodynamics in the time domain
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1985-01-01
This report serves as an instruction and maintenance manual for a collection of CDC CYBER FORTRAN IV subroutines for approximating the unsteady aerodynamic forces in the time domain. The result is a set of constant-coefficient first-order differential equations that approximate the dynamics of the vehicle. Provisions are included for adjusting the number of modes used for calculating the approximations so that an accurate approximation is generated. The number of data points at different values of reduced frequency can also be varied to adjust the accuracy of the approximation over the reduced-frequency range. The denominator coefficients of the approximation may be calculated by means of a gradient method or a least-squares approximation technique. Both the approximation methods use weights on the residual error. A new set of system equations, at a different dynamic pressure, can be generated without the approximations being recalculated.
Coronal Loops: Evolving Beyond the Isothermal Approximation
NASA Astrophysics Data System (ADS)
Schmelz, J. T.; Cirtain, J. W.; Allen, J. D.
2002-05-01
Are coronal loops isothermal? A controversy over this question has arisen recently because different investigators using different techniques have obtained very different answers. Analysis of SOHO-EIT and TRACE data using narrowband filter ratios to obtain temperature maps has produced several key publications that suggest that coronal loops may be isothermal. We have constructed a multi-thermal distribution for several pixels along a relatively isolated coronal loop on the southwest limb of the solar disk using spectral line data from SOHO-CDS taken on 1998 Apr 20. These distributions are clearly inconsistent with isothermal plasma along either the line of sight or the length of the loop, and suggest rather that the temperature increases from the footpoints to the loop top. We speculated originally that these differences could be attributed to pixel size: CDS pixels are larger, and more 'contaminating' material would be expected along the line of sight. To test this idea, we used CDS iron line ratios from our data set to mimic the isothermal results from the narrowband filter instruments. These ratios indicated that the temperature gradient along the loop was flat, despite the fact that a more complete analysis of the same data showed this result to be false! The CDS pixel size was not the cause of the discrepancy; rather, the problem lies with the isothermal approximation used in EIT and TRACE analysis. These results should serve as a strong warning to anyone using this simplistic method to obtain temperature. This warning is echoed on the EIT web page: "Danger! Enter at your own risk!" In other words, values for temperature may be found, but they may have nothing to do with physical reality. Solar physics research at the University of Memphis is supported by NASA grant NAG5-9783. This research was funded in part by the NASA/TRACE MODA grant for Montana State University.
An approximation formula for a class of Markov reliability models
NASA Technical Reports Server (NTRS)
White, A. L.
1984-01-01
A way of considering a small but often used class of reliability models and algebraically approximating system reliability is shown. The models considered are appropriate for redundant reconfigurable digital control systems that operate for a short period of time without maintenance, and for such systems the method gives a formula in terms of component fault rates, system recovery rates, and system operating time.
An application of artificial neural networks to experimental data approximation
NASA Technical Reports Server (NTRS)
Meade, Andrew J., Jr.
1993-01-01
As an initial step in the evaluation of networks, a feedforward architecture is trained to approximate experimental data by the backpropagation algorithm. Several drawbacks were detected and an alternative learning algorithm was then developed to partially address the drawbacks. This noniterative algorithm has a number of advantages over the backpropagation method and is easily implemented on existing hardware.
Engine With Regression and Neural Network Approximators Designed
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.
2001-01-01
At the NASA Glenn Research Center, the NASA engine performance program (NEPP, ref. 1) and the design optimization testbed COMETBOARDS (ref. 2), with regression and neural network analysis approximators, have been coupled to obtain a preliminary engine design methodology. The solution for a high-bypass-ratio subsonic wave-rotor-topped turbofan engine, shown in the preceding figure, was obtained by the simulation depicted in the following figure. This engine consists of 16 components mounted on two shafts with 21 flow stations, and it is designed for a flight envelope of 47 operating points. The design optimization utilized both neural network and regression approximations, along with the cascade strategy (ref. 3). The cascade used three algorithms in sequence: the method of feasible directions, the sequence of unconstrained minimizations technique, and sequential quadratic programming. The normalized optimum thrusts obtained by the three methods are shown in the following figure: a triangle represents the cascade algorithm with regression approximation, a circle the neural network solution, and a solid line the original NEPP results. The solutions obtained from both approximate methods lie within one standard deviation of the benchmark solution at each operating point. The simulation improved the maximum thrust by 5 percent. The performance of the linear regression and neural network methods as alternate engine analyzers was found to be satisfactory for the analysis and operation optimization of air-breathing propulsion engines (ref. 4).
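The core idea behind a regression approximator, fitting a cheap surrogate to a handful of evaluations of an expensive analysis and then querying the surrogate inside the optimizer, can be sketched as follows. This is a generic illustration under stated assumptions: `expensive_analysis` is a made-up stand-in, not NEPP, and the quadratic model is the simplest possible regression basis.

```python
# Illustrative regression "approximator": fit a quadratic surrogate to a
# few samples of an expensive analysis, then query the surrogate instead.

def expensive_analysis(x):          # hypothetical stand-in, not NEPP
    return 3.0 + 2.0 * x - 0.5 * x * x

samples = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [expensive_analysis(x) for x in samples]

# Normal equations for y = c0 + c1*x + c2*x^2, solved by Gaussian
# elimination with partial pivoting (3x3 system).
A = [[sum(x ** (i + j) for x in samples) for j in range(3)] for i in range(3)]
b = [sum(y * x ** i for x, y in zip(samples, ys)) for i in range(3)]
for i in range(3):                  # forward elimination
    piv = max(range(i, 3), key=lambda r: abs(A[r][i]))
    A[i], A[piv] = A[piv], A[i]
    b[i], b[piv] = b[piv], b[i]
    for r in range(i + 1, 3):
        f = A[r][i] / A[i][i]
        for c in range(i, 3):
            A[r][c] -= f * A[i][c]
        b[r] -= f * b[i]
coef = [0.0, 0.0, 0.0]
for i in range(2, -1, -1):          # back substitution
    coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]

def surrogate(x):
    return coef[0] + coef[1] * x + coef[2] * x * x

print(coef)  # recovers ~[3.0, 2.0, -0.5]
```

In a design loop like the one described, the optimizer's many trial evaluations would call `surrogate` rather than the full analysis, which is why the approximate solutions are checked against the benchmark at each operating point.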
Analytic Approximations for the Extrapolation of Lattice Data
Masjuan, Pere
2010-12-22
We present analytic approximations of chiral SU(3) amplitudes for the extrapolation of lattice data to the physical masses and the determination of next-to-next-to-leading-order low-energy constants. Lattice data for the ratio F_K/F_π are used to test the method.
New approximating results for data with errors in both variables
NASA Astrophysics Data System (ADS)
Bogdanova, N.; Todorov, S.
2015-05-01
We introduce new data, measured with errors in both variables, from the mineral water probe Lenovo, Bulgaria. We apply our Orthonormal Polynomial Expansion Method (OPEM), based on the Forsythe recurrence formula, to describe the data within the new error corridor. The development of OPEM yields the approximating curves and their derivatives in both optimal orthonormal and usual expansions, incorporating the errors in both variables through special criteria.
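The Forsythe recurrence underlying OPEM builds polynomials orthogonal over the data points themselves, so the least-squares coefficients decouple and no ill-conditioned normal equations arise. The sketch below is a simplified version: it handles weights from y-errors only (OPEM's treatment of errors in both variables is not reproduced), and the test data are synthetic.

```python
def forsythe_fit(xs, ys, ws, degree):
    """Weighted least-squares fit using polynomials orthogonal over the
    data points, generated by the Forsythe three-term recurrence
        p_{k+1}(x) = (x - alpha_k) p_k(x) - beta_k p_{k-1}(x)."""
    n = len(xs)
    p_prev = [0.0] * n        # p_{k-1} evaluated at the data points
    p_cur = [1.0] * n         # p_k evaluated at the data points
    fit = [0.0] * n
    for k in range(degree + 1):
        s = sum(w * p * p for w, p in zip(ws, p_cur))
        c = sum(w * y * p for w, y, p in zip(ws, ys, p_cur)) / s
        fit = [f + c * p for f, p in zip(fit, p_cur)]
        # Recurrence coefficients for the next orthogonal polynomial.
        alpha = sum(w * x * p * p for w, x, p in zip(ws, xs, p_cur)) / s
        s_prev = sum(w * p * p for w, p in zip(ws, p_prev))
        beta = s / s_prev if s_prev > 0.0 else 0.0
        p_prev, p_cur = p_cur, [
            (x - alpha) * pc - beta * pp
            for x, pc, pp in zip(xs, p_cur, p_prev)
        ]
    return fit

# Synthetic quadratic data with unit weights (1/sigma_y^2 would go in ws):
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [1.0 + 2.0 * x + 0.5 * x * x for x in xs]
fit = forsythe_fit(xs, ys, [1.0] * len(xs), degree=2)
resid = max(abs(f - y) for f, y in zip(fit, ys))
print(resid)  # ~0: degree-2 data recovered exactly
```

Because each coefficient is an independent weighted projection, raising the degree adds terms without refitting the lower ones, which is what makes the expansion convenient for building error corridors around the approximating curve.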
Symmetric approximations of the Navier-Stokes equations
Kobel'kov, G M
2002-08-31
A new method for the symmetric approximation of the non-stationary Navier-Stokes equations by a Cauchy-Kovalevskaya-type system is proposed. Properties of the modified problem are studied. In particular, the convergence as ε → 0 of the solutions of the modified problem to the solutions of the original problem on an infinite interval is established.
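The abstract does not state the modified system, but the general idea of an ε-regularization that turns the incompressible Navier-Stokes equations into an evolution (Cauchy-Kovalevskaya-type) system can be illustrated by the classical artificial-compressibility construction, shown here only as an example of the approach and not necessarily Kobel'kov's scheme:

```latex
\begin{aligned}
\partial_t u - \nu \Delta u + (u \cdot \nabla) u + \nabla p &= f, \\
\varepsilon \, \partial_t p + \operatorname{div} u &= 0,
\end{aligned}
```

where the constraint $\operatorname{div} u = 0$ is recovered formally in the limit $\varepsilon \to 0$, which is the type of convergence statement the abstract establishes for its modified problem.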