NASA Astrophysics Data System (ADS)
Jiang, Shidong; Luo, Li-Shi
2016-07-01
The integral equation for the flow velocity u(x; k) in the steady Couette flow derived from the linearized Bhatnagar-Gross-Krook-Welander kinetic equation is studied in detail both theoretically and numerically over a wide range of the Knudsen number k between 0.003 and 100.0. First, it is shown that the integral equation is a Fredholm equation of the second kind in which the norm of the compact integral operator is less than 1 on L^p for any 1 ≤ p ≤ ∞, and thus there exists a unique solution to the integral equation via the Neumann series. Second, it is shown that the solution is logarithmically singular at the endpoints. More precisely, if x = 0 is an endpoint, then the solution can be expanded as a double power series of the form ∑_{n=0}^∞ ∑_{m=0}^∞ c_{n,m} x^n (x ln x)^m about x = 0 on a small interval x ∈ (0, a) for some a > 0. Third, a high-order adaptive numerical algorithm is designed to compute the solution to high precision. The solutions for the flow velocity u(x; k), the stress P_xy(k), and the half-channel mass flow rate Q(k) are obtained over the wide range of Knudsen numbers 0.003 ≤ k ≤ 100.0; these solutions are accurate to at least twelve significant digits, so they can be used as benchmark solutions.
ERIC Educational Resources Information Center
Sozio, Gerry
2009-01-01
Senior secondary students cover numerical integration techniques in their mathematics courses. In particular, students would be familiar with the "midpoint rule," the elementary "trapezoidal rule" and "Simpson's rule." This article derives these techniques by methods with which secondary students may not be familiar, and an approach that…
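For reference, the three rules named above can be sketched in a few lines, applied here to the test integral of e^x over [0, 1], whose exact value is e - 1:

```python
import numpy as np

# Midpoint, trapezoidal, and Simpson's rules on ∫_0^1 e^x dx = e - 1.
def midpoint(f, a, b, n):
    h = (b - a) / n
    xm = a + h * (np.arange(n) + 0.5)     # midpoints of the n subintervals
    return h * np.sum(f(xm))

def trapezoid(f, a, b, n):
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return (b - a) / n * (0.5 * y[0] + np.sum(y[1:-1]) + 0.5 * y[-1])

def simpson(f, a, b, n):                  # n must be even
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3 * (y[0] + 4 * np.sum(y[1:-1:2]) + 2 * np.sum(y[2:-2:2]) + y[-1])

exact = np.e - 1
for rule in (midpoint, trapezoid, simpson):
    print(rule.__name__, abs(rule(np.exp, 0.0, 1.0, 16) - exact))
```

With 16 subintervals, the two second-order rules land within about 1e-3 of the exact value, while Simpson's fourth-order rule is several orders of magnitude closer.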
Accurate Evaluation of Quantum Integrals
NASA Technical Reports Server (NTRS)
Galant, David C.; Goorvitch, D.
1994-01-01
Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving a Schrödinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
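The core idea, combining two finite-difference estimates at different step sizes to cancel the leading error term, can be sketched as follows, using a simple derivative rather than the paper's Schrödinger problem:

```python
import numpy as np

# Richardson extrapolation: combine second-order central-difference
# estimates at steps h and h/2 to cancel the leading O(h^2) error term,
# yielding a fourth-order estimate.
def central(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

f, x, h = np.sin, 1.0, 0.1
d1 = central(f, x, h)
d2 = central(f, x, h / 2)
extrap = (4 * d2 - d1) / 3          # the O(h^2) terms cancel
exact = np.cos(1.0)
print(abs(d2 - exact), abs(extrap - exact))
```

The extrapolated value is far more accurate than either raw estimate, and the difference between successive estimates doubles as an error estimate, as the abstract notes.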
Accurate numerical solutions of conservative nonlinear oscillators
NASA Astrophysics Data System (ADS)
Khan, Najeeb Alam; Khan, Nasir Uddin; Khan, Nadeem Alam
2014-12-01
The objective of this paper is to present an investigation of the vibration of a conservative nonlinear oscillator of the form u″ + λu + u^(2n−1) + (1 + ε²u^(4m))^(1/2) = 0 for arbitrary powers n and m. The method converts the differential equation into sets of algebraic equations that are solved numerically. Results are presented for three different cases: a higher-order Duffing equation, an equation with an irrational restoring force, and a plasma physics equation. The method is found to be valid for any arbitrary order of n and m, and comparisons with results found in the literature show that it gives accurate results.
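The paper's algebraic-equation method is not reproduced here, but a standard fourth-order Runge-Kutta integration of a simplified Duffing-type case, u″ + u + u³ = 0, gives a sense of what a reference numerical solution looks like; conservation of the energy integral is the natural accuracy check:

```python
import numpy as np

# Classical RK4 on the conservative oscillator u'' + u + u^3 = 0
# (a simplified Duffing-type case, not the paper's full equation).
# The conserved energy E = v^2/2 + u^2/2 + u^4/4 monitors accuracy.
def rhs(y):
    u, v = y
    return np.array([v, -u - u**3])

def rk4_step(y, h):
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * h * k1)
    k3 = rhs(y + 0.5 * h * k2)
    k4 = rhs(y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def energy(y):
    u, v = y
    return 0.5 * y[1]**2 + 0.5 * y[0]**2 + 0.25 * y[0]**4

y, h = np.array([1.0, 0.0]), 0.01
e0 = energy(y)
for _ in range(10000):          # integrate to t = 100
    y = rk4_step(y, h)
print(abs(energy(y) - e0))      # energy drift stays tiny
```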
Rythmos Numerical Integration Package
Coffey, Todd S.; Bartlett, Roscoe A.
2006-09-01
Rythmos numerically integrates transient differential equations. The differential equations can be explicit or implicit ordinary differential equations, or formulated as fully implicit differential-algebraic equations. Methods currently include backward Euler, forward Euler, explicit Runge-Kutta, and implicit BDF. Native support for operator-split methods and strict modularity are strong design goals. Forward sensitivity computations will be included in the first release, with adjoint sensitivities coming in the near future. Rythmos relies heavily on Thyra for linear algebra and on nonlinear solver interfaces to AztecOO, Amesos, IFPack, and NOX in Trilinos. Rythmos is especially well suited for stiff differential equations and for those applications where operator-split methods have a big advantage, e.g., computational fluid dynamics, convection-diffusion equations, etc.
NASA Technical Reports Server (NTRS)
Graves, R. A., Jr.
1975-01-01
The previously obtained second-order-accurate partial-implicitization numerical technique used in the solution of fluid dynamic problems was modified with little complication to achieve fourth-order accuracy. A von Neumann stability analysis demonstrated the unconditional linear stability of the technique. The order of the truncation error was deduced from the Taylor series expansions of the linearized difference equations and was verified by numerical solutions to Burgers' equation. For comparison, results were also obtained for Burgers' equation using a second-order-accurate partial-implicitization scheme, as well as the fourth-order scheme of Kreiss.
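The verification step mentioned above, deducing the truncation-error order and confirming it numerically, can be sketched by grid refinement on any problem with a known solution. Here a scalar ODE is integrated with Heun's second-order method as a stand-in for the paper's Burgers'-equation tests; halving the step should reduce the error by the factor 2^p, where p is the scheme's order:

```python
import numpy as np

# Verify a scheme's order of accuracy by grid refinement:
# observed order p = log2(e_h / e_{h/2}).
# Test problem: y' = -y, y(0) = 1, solved with Heun's (second-order) method.
def heun_solve(h, t_end=1.0):
    n = round(t_end / h)
    y = 1.0
    for _ in range(n):
        k1 = -y
        k2 = -(y + h * k1)
        y += 0.5 * h * (k1 + k2)
    return y

exact = np.exp(-1.0)
e1 = abs(heun_solve(0.01) - exact)
e2 = abs(heun_solve(0.005) - exact)
p = np.log2(e1 / e2)
print(p)        # close to 2 for a second-order scheme
```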
Numerical integration of subtraction terms
NASA Astrophysics Data System (ADS)
Seth, Satyajit; Weinzierl, Stefan
2016-06-01
Numerical approaches to higher-order calculations often employ subtraction terms, both for the real emission and the virtual corrections. These subtraction terms have to be added back. In this paper we show that at NLO the real subtraction terms, the virtual subtraction terms, the integral representations of the field renormalization constants and—in the case of initial-state partons—the integral representation for the collinear counterterm can be grouped together to give finite integrals, which can be evaluated numerically. This is useful for an extension towards next-to-next-to-leading order.
The development of accurate and efficient methods of numerical quadrature
NASA Technical Reports Server (NTRS)
Feagin, T.
1973-01-01
Some new methods for performing numerical quadrature of an integrable function over a finite interval are described. Each method provides a sequence of approximations of increasing order to the value of the integral. Each approximation makes use of all previously computed values of the integrand. The points at which new values of the integrand are computed are selected in such a way that the order of the approximation is maximized. The methods are compared with the quadrature methods of Clenshaw and Curtis, Gauss, Patterson, and Romberg using several examples.
Accurate complex scaling of three dimensional numerical potentials
Cerioni, Alessandro; Genovese, Luigi; Duchemin, Ivan; Deutsch, Thierry
2013-05-28
The complex scaling method, which consists of continuing the spatial coordinates into the complex plane, is a well-established method for computing resonant eigenfunctions of the time-independent Schrödinger operator. Whenever it is desirable to apply complex scaling to investigate resonances in physical systems defined on numerical discrete grids, the most direct approach relies on the application of a similarity transformation to the original, unscaled Hamiltonian. We show that such an approach can be conveniently implemented in the Daubechies wavelet basis set, featuring a very promising level of generality, high accuracy, and no need for artificial convergence parameters. Complex scaling of three-dimensional numerical potentials can thus be performed efficiently and accurately. By carrying out an illustrative resonant-state computation in the case of a one-dimensional model potential, we then show that our wavelet-based approach may disclose new exciting opportunities in the field of computational non-Hermitian quantum mechanics.
Fresnel Integral Equations: Numerical Properties
Adams, R J; Champagne, N J II; Davis, B A
2003-07-22
A spatial-domain solution to the problem of electromagnetic scattering from a dielectric half-space is outlined. The resulting half-space operators are referred to as Fresnel surface integral operators. When used as preconditioners for nonplanar geometries, the Fresnel operators yield surface Fresnel integral equations (FIEs) which are stable with respect to dielectric constant, discretization, and frequency. Numerical properties of the formulations are discussed.
Cuba: Multidimensional numerical integration library
NASA Astrophysics Data System (ADS)
Hahn, Thomas
2016-08-01
The Cuba library offers four independent routines for multidimensional numerical integration: Vegas, Suave, Divonne, and Cuhre. The four algorithms work by very different methods, can integrate vector integrands, and have very similar Fortran, C/C++, and Mathematica interfaces. Their invocation is very similar, making it easy to cross-check results by substituting one method for another. For further safeguarding, the output is supplemented by a chi-square probability which quantifies the reliability of the error estimate.
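Cuba itself is a compiled library with Fortran, C/C++, and Mathematica bindings; as a rough stand-in, a plain Monte Carlo estimate shows the kind of (value, error) pair such routines report. The integrand and dimension below are illustrative choices:

```python
import numpy as np

# Plain Monte Carlo estimate of a 4-dimensional integral with a
# standard-error estimate. Integrand: prod_i cos(x_i) over [0,1]^4,
# whose exact value is sin(1)^4.
rng = np.random.default_rng(0)
n, dim = 200000, 4
x = rng.random((n, dim))
fx = np.prod(np.cos(x), axis=1)
value = fx.mean()
error = fx.std(ddof=1) / np.sqrt(n)      # 1-sigma statistical error
exact = np.sin(1.0) ** 4
print(value, error, abs(value - exact))
```

Adaptive algorithms such as Vegas concentrate samples where the integrand varies most, converging much faster than this uniform sampling on peaked integrands.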
Accurate Critical Stress Intensity Factor Griffith Crack Theory Measurements by Numerical Techniques
Petersen, Richard C.
2014-01-01
Critical stress intensity factor (KIc) has been an approximation for fracture toughness using only load-cell measurements. However, artificial man-made cracks several orders of magnitude longer and wider than natural flaws have required a correction factor term (Y) that can be up to about 3 times the recorded experimental value [1-3]. In fact, over 30 years ago a National Academy of Sciences advisory board stated that empirical KIc testing was of serious concern and requested that an accurate bulk fracture toughness method be found [4]. Now that fracture toughness can be calculated accurately by numerical integration from the load/deflection curve as resilience, work of fracture (WOF), and strain energy release (SIc) [5, 6], KIc appears to be unnecessary. However, the large body of previous KIc experimental test results found in the literature offers the opportunity for continued meta-analysis with other more practical and accurate fracture toughness results using energy methods and numerical integration. Therefore, KIc is derived from the classical Griffith crack theory [6] to include SIc as a more accurate term for the strain energy release rate (GIc), along with crack surface energy (γ), crack length (a), modulus (E), applied stress (σ), Y, the crack-tip plastic zone defect region (rp), and yield strength (σys), all of which can be determined from load and deflection data. Polymer-matrix discontinuous quartz fiber-reinforced composites, chosen to accentuate toughness differences, were prepared for flexural mechanical testing, comprising 3 mm fibers at volume percentages from 0 to 54.0 vol% and, at 28.2 vol%, fibers of different lengths from 0.0 to 6.0 mm. Results provided a new correction factor and regression analyses between several numerical-integration fracture toughness test methods to support KIc results. Further, accurate bulk KIc experimental values are compared with empirical test results found in the literature. Also, several fracture toughness mechanisms
Efficient numerical evaluation of Feynman integrals
NASA Astrophysics Data System (ADS)
Li, Zhao; Wang, Jian; Yan, Qi-Shu; Zhao, Xiaoran
2016-03-01
Feynman loop integrals are a key ingredient in the calculation of higher-order radiative effects, and are responsible for reliable and accurate theoretical predictions. We improve the efficiency of numerical integration in sector decomposition by implementing a quasi-Monte Carlo method associated with the CUDA/GPU technique. For demonstration we present the results of several Feynman integrals up to two loops in both the Euclidean and physical kinematic regions, in comparison with those obtained from FIESTA3. It is shown that both planar and non-planar two-loop master integrals in the physical kinematic region can be evaluated accurately in less than half a minute, which makes the direct numerical approach viable for precise investigation of higher-order effects in multi-loop processes, e.g. the next-to-leading-order QCD effect in Higgs pair production via gluon fusion with a finite top quark mass. Supported by the Natural Science Foundation of China (11305179, 11475180), Youth Innovation Promotion Association, CAS, IHEP Innovation (Y4545170Y2), State Key Lab for Electronics and Particle Detectors, Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y4KF061CJ1), Cluster of Excellence Precision Physics, Fundamental Interactions and Structure of Matter (PRISMA-EXC 1098)
Fast and Accurate Learning When Making Discrete Numerical Estimates.
Sanborn, Adam N; Beierholm, Ulrik R
2016-04-01
Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
Accurate numerical solution of compressible, linear stability equations
NASA Technical Reports Server (NTRS)
Malik, M. R.; Chuang, S.; Hussaini, M. Y.
1982-01-01
The present investigation is concerned with a fourth order accurate finite difference method and its application to the study of the temporal and spatial stability of the three-dimensional compressible boundary layer flow on a swept wing. This method belongs to the class of compact two-point difference schemes discussed by White (1974) and Keller (1974). The method was apparently first used for solving the two-dimensional boundary layer equations. Attention is given to the governing equations, the solution technique, and the search for eigenvalues. A general purpose subroutine is employed for solving a block tridiagonal system of equations. The computer time can be reduced significantly by exploiting the special structure of two matrices.
Numerical Integration: One Step at a Time
ERIC Educational Resources Information Center
Yang, Yajun; Gordon, Sheldon P.
2016-01-01
This article looks at the effects that adding a single extra subdivision has on the level of accuracy of some common numerical integration routines. Instead of automatically doubling the number of subdivisions for a numerical integration rule, we investigate what happens with a systematic method of judiciously selecting one extra subdivision for…
Accurate and efficient spin integration for particle accelerators
NASA Astrophysics Data System (ADS)
Abell, Dan T.; Meiser, Dominic; Ranjbar, Vahid H.; Barber, Desmond P.
2015-02-01
Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code gpuSpinTrack. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.
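The quaternion idea mentioned above can be sketched briefly; this is an illustrative toy, not the gpuSpinTrack code. Composing two spin rotations is a single quaternion product, which is cheaper and more numerically stable than multiplying 3x3 rotation matrices:

```python
import numpy as np

# Spin rotations as unit quaternions q = (w, x, y, z):
# composing rotations is one quaternion product, and a spin vector s
# is rotated via the sandwich product q s q*.
def quat_from_axis_angle(axis, angle):
    axis = np.asarray(axis, dtype=float) / np.linalg.norm(axis)
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

def quat_mul(p, q):
    pw, pv = p[0], p[1:]
    qw, qv = q[0], q[1:]
    return np.concatenate(([pw * qw - pv @ qv],
                           pw * qv + qw * pv + np.cross(pv, qv)))

def rotate(q, s):                     # rotate spin vector s by quaternion q
    qs = np.concatenate(([0.0], s))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, qs), q_conj)[1:]

# two successive rotations about z, each by pi/4, equal one by pi/2
q1 = quat_from_axis_angle([0, 0, 1], np.pi / 4)
q12 = quat_mul(q1, q1)
s = rotate(q12, np.array([1.0, 0.0, 0.0]))
print(s)                              # ≈ [0, 1, 0]
```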
An Integrative Theory of Numerical Development
ERIC Educational Resources Information Center
Siegler, Robert; Lortie-Forgues, Hugues
2014-01-01
Understanding of numerical development is growing rapidly, but the volume and diversity of findings can make it difficult to perceive any coherence in the process. The integrative theory of numerical development posits that a coherent theme is present, however--progressive broadening of the set of numbers whose magnitudes can be accurately…
Numerical solution of boundary-integral equations for molecular electrostatics.
Bardhan, Jaydeep P
2009-03-01
Numerous molecular processes, such as ion permeation through channel proteins, are governed by relatively small changes in energetics. As a result, theoretical investigations of these processes require accurate numerical methods. In the present paper, we evaluate the accuracy of two approaches to simulating boundary-integral equations for continuum models of the electrostatics of solvation. The analysis emphasizes boundary-element method simulations of the integral-equation formulation known as the apparent-surface-charge (ASC) method or polarizable-continuum model (PCM). In many numerical implementations of the ASC/PCM model, one forces the integral equation to be satisfied exactly at a set of discrete points on the boundary. We demonstrate in this paper that this approach to discretization, known as point collocation, is significantly less accurate than an alternative approach known as qualocation. Furthermore, the qualocation method offers this improvement in accuracy without increasing simulation time. Numerical examples demonstrate that the electrostatic part of the solvation free energy, when calculated using the collocation and qualocation methods, can differ significantly; for a polypeptide, the answers can differ by as much as 10 kcal/mol (approximately 4% of the total electrostatic contribution to solvation). The applicability of the qualocation discretization to other integral-equation formulations is also discussed, and two equivalences between integral-equation methods are derived. PMID:19275391
On constructing accurate approximations of first integrals for difference equations
NASA Astrophysics Data System (ADS)
Rafei, M.; Van Horssen, W. T.
2013-04-01
In this paper, a perturbation method based on invariance factors and multiple scales will be presented for weakly nonlinear, regularly perturbed systems of ordinary difference equations. Asymptotic approximations of first integrals will be constructed on long iteration-scales, that is, on iteration-scales of order 1/ɛ, where ɛ is a small parameter. It will be shown that all invariance factors have to satisfy a functional equation. To show how this perturbation method works, the method is applied to a Van der Pol equation and a Rayleigh equation. It will be explicitly shown for the first time in the literature how these multiple scales should be introduced for systems of difference equations to obtain very accurate approximations of first integrals on long iteration-scales.
Numerical computation of 2D Sommerfeld integrals - Decomposition of the angular integral
NASA Astrophysics Data System (ADS)
Dvorak, Steven L.; Kuester, Edward F.
1992-02-01
The computation of 2D Sommerfeld integrals is made more efficient through novel ways of computing the inner angular integral in their polar representations. It is shown that the angular integral can be decomposed into a finite number of incomplete Lipschitz-Hankel integrals; these can in turn be calculated through series expansions, so that the angular integral can be computed by summing a series rather than applying a standard numerical integration algorithm. The technique is most efficient and accurate when piecewise-sinusoidal basis functions are employed to analyze a printed strip-dipole antenna in a layered medium.
Orbital Advection by Interpolation: A Fast and Accurate Numerical Scheme for Super-Fast MHD Flows
Johnson, B M; Guan, X; Gammie, F
2008-04-11
In numerical models of thin astrophysical disks that use an Eulerian scheme, gas orbits supersonically through a fixed grid. As a result the timestep is sharply limited by the Courant condition. Also, because the mean flow speed with respect to the grid varies with position, the truncation error varies systematically with position. For hydrodynamic (unmagnetized) disks an algorithm called FARGO has been developed that advects the gas along its mean orbit using a separate interpolation substep. This relaxes the constraint imposed by the Courant condition, which now depends only on the peculiar velocity of the gas, and results in a truncation error that is more nearly independent of position. This paper describes a FARGO-like algorithm suitable for evolving magnetized disks. Our method is second-order accurate on a smooth flow and preserves ∇ · B = 0 to machine precision. The main restriction is that B must be discretized on a staggered mesh. We give a detailed description of an implementation of the code and demonstrate that it produces the expected results on linear and nonlinear problems. We also point out how the scheme might be generalized to make the integration of other supersonic/super-fast flows more efficient. Although our scheme reduces the variation of truncation error with position, it does not eliminate it. We show that the residual position dependence leads to characteristic radial variations in the density over long integrations.
Generation of accurate integral surfaces in time-dependent vector fields.
Garth, Christoph; Krishnan, Han; Tricoche, Xavier; Bobach, Tom; Joy, Kenneth I
2008-01-01
We present a novel approach for the direct computation of integral surfaces in time-dependent vector fields. As opposed to previous work, which we analyze in detail, our approach is based on a separation of integral surface computation into two stages: surface approximation and generation of a graphical representation. This allows us to overcome several limitations of existing techniques. We first describe an algorithm for surface integration that approximates a series of time lines using iterative refinement and computes a skeleton of the integral surface. In a second step, we generate a well-conditioned triangulation. Our approach allows a highly accurate treatment of very large time-varying vector fields in an efficient, streaming fashion. We examine the properties of the presented methods on several example datasets and perform a numerical study of its correctness and accuracy. Finally, we investigate some visualization aspects of integral surfaces. PMID:18988990
Accurate object tracking system by integrating texture and depth cues
NASA Astrophysics Data System (ADS)
Chen, Ju-Chin; Lin, Yu-Hang
2016-03-01
A robust object tracking system that is invariant to object appearance variations and background clutter is proposed. Multiple-instance learning with a boosting algorithm is applied to select discriminant texture information between the object and background data. Additionally, depth information, which is important for distinguishing the object from a complicated background, is integrated. We propose two depth-based models that can complement texture information to cope with both appearance variations and background clutter. Moreover, to reduce the increased risk of drift for textureless depth templates, an update mechanism is proposed that selects more precise tracking results and avoids incorrect model updates. In the experiments, the robustness of the proposed system is evaluated and quantitative results are provided for performance analysis. Experimental results show that the proposed system provides the best success rate and more accurate tracking results than other well-known algorithms.
Effects of aliasing on numerical integration.
Edwards, Timothy S.
2005-02-01
During the course of processing acceleration data from mechanical systems, it is often desirable to integrate the data to obtain velocity or displacement waveforms. However, those who have attempted these operations may be painfully aware that the integrated records often yield unrealistic residual values. This is true whether the data have been obtained experimentally or through numerical simulation such as Runge-Kutta integration or the explicit finite element method. In the case of experimentally obtained data, the integration errors are usually blamed on accelerometer zero shift or amplifier saturation. In the case of simulation data, the errors are often incorrectly blamed on the integration algorithm itself. This work demonstrates that seemingly small aliased content can cause appreciable errors in the integrated waveforms, and explores the unavoidable source of aliasing in both experiment and simulation: the sampling operation. Numerical analysts are often puzzled as to why the integrated acceleration from their simulation does not match the displacement output of the same simulation. This work shows that these strange results can be caused by aliasing induced by interpolation of the model output during sampling regularization.
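A minimal sketch of the effect described, with illustrative frequencies: a 90 Hz acceleration tone sampled at 100 Hz aliases to 10 Hz, and because integration amplifies low frequencies by 1/ω, the integrated velocity comes out an order of magnitude too large.

```python
import numpy as np

# A sinusoidal acceleration at 90 Hz sampled at 100 Hz aliases to 10 Hz.
# Integration amplifies a tone at frequency omega by 1/omega, so the
# aliased low-frequency tone produces a much larger velocity than the
# true 90 Hz signal would.
fs, f_true = 100.0, 90.0
t = np.arange(0, 10, 1 / fs)
a = np.sin(2 * np.pi * f_true * t)            # sampled acceleration

# cumulative trapezoid integration to velocity
v = np.concatenate(([0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) / fs)))

v_true_amp = 1.0 / (2 * np.pi * f_true)       # true velocity amplitude
print(np.max(np.abs(v)), v_true_amp)          # integrated result is far larger
```

Anti-alias filtering before sampling, not a different integration rule, is what removes this error, which is the abstract's point.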
Automatic numerical integration methods for Feynman integrals through 3-loop
NASA Astrophysics Data System (ADS)
de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Olagbemi, O.
2015-05-01
We give numerical integration results for Feynman loop diagrams through 3-loop, such as those covered by Laporta [1]. The methods are based on automatic adaptive integration, using iterated integration and extrapolation with programs from the QUADPACK package, or multivariate techniques from the ParInt package. The Dqags algorithm from QUADPACK accommodates boundary singularities of fairly general types. ParInt is a package for multivariate integration layered over MPI (Message Passing Interface), which runs on clusters and incorporates advanced parallel/distributed techniques such as load balancing among processes that may be distributed over a network of nodes. Results are included for 3-loop self-energy diagrams without IR (infrared) or UV (ultraviolet) singularities. A procedure based on iterated integration and extrapolation yields a novel method of numerical regularization for integrals with UV terms, and is applied to a set of 2-loop self-energy diagrams with UV singularities.
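The iterated-integration approach can be sketched with SciPy's quad, which wraps QUADPACK's adaptive QAGS routine, here on a simple 2-D integrand with a boundary singularity; the loop-integral machinery of the paper is not reproduced:

```python
from scipy.integrate import quad
import numpy as np

# Iterated integration: the inner integral is evaluated by QAGS (which
# handles the y = 0 boundary singularity) for each outer node.
# Integrand 1/sqrt(x + y) on the unit square; exact value (8/3)(sqrt(2) - 1).
def inner(x):
    return quad(lambda y: 1.0 / np.sqrt(x + y), 0.0, 1.0)[0]

value, _ = quad(inner, 0.0, 1.0)
exact = 8.0 / 3.0 * (np.sqrt(2.0) - 1.0)
print(value, abs(value - exact))
```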
Meek, Garrett A; Levine, Benjamin G
2014-07-01
Spikes in the time-derivative coupling (TDC) near surface crossings make the accurate integration of the time-dependent Schrödinger equation in nonadiabatic molecular dynamics simulations a challenge. To address this issue, we present an approximation to the TDC based on a norm-preserving interpolation (NPI) of the adiabatic electronic wave functions within each time step. We apply NPI and two other schemes for computing the TDC in numerical simulations of the Landau-Zener model, comparing the simulated transfer probabilities to the exact solution. Though NPI does not require the analytical calculation of nonadiabatic coupling matrix elements, it consistently yields unsigned population transfer probability errors of ∼0.001, whereas analytical calculation of the TDC yields errors of 0.0-1.0 depending on the time step, the offset of the maximum in the TDC from the beginning of the time step, and the coupling strength. The approximation of Hammes-Schiffer and Tully yields errors intermediate between NPI and the analytical scheme. PMID:26279558
Highly Parallel, High-Precision Numerical Integration
Bailey, David H.; Borwein, Jonathan M.
2005-04-22
This paper describes a scheme for rapidly computing numerical values of definite integrals to very high accuracy, ranging from ordinary machine precision to hundreds or thousands of digits, even for functions with singularities or infinite derivatives at endpoints. Such a scheme is of interest not only in computational physics and computational chemistry, but also in experimental mathematics, where high-precision numerical values of definite integrals can be used to numerically discover new identities. This paper discusses techniques for a parallel implementation of this scheme, then presents performance results for 1-D and 2-D test suites. Results are also given for a certain problem from mathematical physics, which features a difficult singularity, confirming a conjecture to 20,000 digit accuracy. The performance rate for this latter calculation on 1024 CPUs is 690 Gflop/s. We believe that this and one other 20,000-digit integral evaluation that we report are the highest-precision non-trivial numerical integrations performed to date.
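The scheme described is tanh-sinh quadrature; a minimal double-precision sketch of its node/weight construction follows (the paper's implementation works at hundreds or thousands of digits, but the construction is the same). Nodes cluster double-exponentially toward the endpoints, which is why endpoint singularities are handled gracefully:

```python
import math

# Tanh-sinh quadrature on (-1, 1): substitute x = tanh((pi/2) sinh t),
# then apply the trapezoidal rule in t. Nodes cluster toward the
# endpoints, taming endpoint singularities.
def tanh_sinh(f, h=0.05, k_max=100):
    total = 0.0
    for k in range(-k_max, k_max + 1):
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(u)
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(u) ** 2
        if abs(x) < 1.0:            # skip nodes rounded onto the endpoints
            total += w * f(x)
    return h * total

# Endpoint-singular test integral: ∫_{-1}^{1} dx / sqrt(1 - x^2) = pi
val = tanh_sinh(lambda x: 1.0 / math.sqrt(1.0 - x * x))
print(val, math.pi)
```

For an integral over a different interval, map it to (-1, 1) first; doubling the number of levels (halving h) roughly doubles the number of correct digits, which is what makes the scheme attractive at very high precision.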
Numerical multi-loop integrals and applications
NASA Astrophysics Data System (ADS)
Freitas, A.
2016-09-01
Higher-order radiative corrections play an important role in precision studies of the electroweak and Higgs sector, as well as for the detailed understanding of large backgrounds to new physics searches. For corrections beyond the one-loop level and involving many independent mass and momentum scales, it is in general not possible to find analytic results, so that one needs to resort to numerical methods instead. This article presents an overview of a variety of numerical loop integration techniques, highlighting their range of applicability, suitability for automatization, and numerical precision and stability. In a second part of this article, the application of numerical loop integration methods in the area of electroweak precision tests is illustrated. Numerical methods were essential for obtaining full two-loop predictions for the most important precision observables within the Standard Model. The theoretical foundations for these corrections will be described in some detail, including aspects of the renormalization, resummation of leading log contributions, and the evaluation of the theory uncertainty from missing higher orders.
An Integrative Method for Accurate Comparative Genome Mapping
Swidan, Firas; Rocha, Eduardo P. C.; Shmoish, Michael; Pinter, Ron Y.
2006-01-01
We present MAGIC, an integrative and accurate method for comparative genome mapping. Our method consists of two phases: preprocessing for identifying “maximal similar segments,” and mapping for clustering and classifying these segments. MAGIC's main novelty lies in its biologically intuitive clustering approach, which aims towards both calculating reorder-free segments and identifying orthologous segments. In the process, MAGIC efficiently handles ambiguities resulting from duplications that occurred before the speciation of the considered organisms from their most recent common ancestor. We demonstrate both MAGIC's robustness and scalability: the former is asserted with respect to its initial input and with respect to its parameters' values. The latter is asserted by applying MAGIC to distantly related organisms and to large genomes. We compare MAGIC to other comparative mapping methods and provide detailed analysis of the differences between them. Our improvements allow a comprehensive study of the diversity of genetic repertoires resulting from large-scale mutations, such as indels and duplications, including explicitly transposable and phagic elements. The strength of our method is demonstrated by detailed statistics computed for each type of these large-scale mutations. MAGIC enabled us to conduct a comprehensive analysis of the different forces shaping prokaryotic genomes from different clades, and to quantify the importance of novel gene content introduced by horizontal gene transfer relative to gene duplication in bacterial genome evolution. We use these results to investigate the breakpoint distribution in several prokaryotic genomes. PMID:16933978
Numerical methods for engine-airframe integration
Murthy, S.N.B.; Paynter, G.C.
1986-01-01
Various papers on numerical methods for engine-airframe integration are presented. The individual topics considered include: the scientific computing environment for the 1980s, an overview of the prediction of complex turbulent flows, numerical solutions of the compressible Navier-Stokes equations, elements of computational engine/airframe integration, computational requirements for efficient engine installation, application of CAE and CFD techniques to complete tactical missile design, CFD applications to engine/airframe integration, and application of second-generation low-order panel methods to powerplant installation studies. Also addressed are: three-dimensional flow analysis of turboprop inlet and nacelle configurations, application of computational methods to the design of large turbofan engine nacelles, a comparison of full potential and Euler solution algorithms for aeropropulsive flow field computations, subsonic/transonic and supersonic nozzle flows and nozzle integration, subsonic/transonic prediction capabilities for nozzle/afterbody configurations, three-dimensional viscous design methodology of supersonic inlet systems for advanced technology aircraft, and a user's technology assessment.
Liu, Fang; Lin, Lin; Vigil-Fowler, Derek; Lischner, Johannes; Kemper, Alexander F.; Sharifzadeh, Sahar; Jornada, Felipe H. da; Deslippe, Jack; Yang, Chao; and others
2015-04-01
We present a numerical integration scheme for evaluating the convolution of a Green's function with a screened Coulomb potential on the real axis in the GW approximation of the self energy. Our scheme takes the zero broadening limit in Green's function first, replaces the numerator of the integrand with a piecewise polynomial approximation, and performs principal value integration on subintervals analytically. We give the error bound of our numerical integration scheme and show by numerical examples that it is more reliable and accurate than the standard quadrature rules such as the composite trapezoidal rule. We also discuss the benefit of using different self energy expressions to perform the numerical convolution at different frequencies.
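The paper's specific scheme (piecewise polynomial numerator with analytic principal values on subintervals) is not reproduced here, but the core idea — regularize the Cauchy singularity and integrate the removed part analytically — can be sketched with a single global subtraction; the function names and the composite-trapezoid fallback below are illustrative assumptions:

```python
import math

def principal_value(f, a, b, c, n=2000):
    """Cauchy principal value of integral_a^b f(x)/(x - c) dx, a < c < b.

    Singularity subtraction: (f(x) - f(c))/(x - c) is smooth for smooth f,
    and the removed part integrates analytically:
        PV integral_a^b f(c)/(x - c) dx = f(c) * log((b - c)/(c - a)).
    """
    fc = f(c)

    def g(x):
        if abs(x - c) < 1e-12:
            # limit of the regularized integrand is f'(c);
            # use a central difference as a stand-in
            eps = 1e-6
            return (f(c + eps) - f(c - eps)) / (2.0 * eps)
        return (f(x) - fc) / (x - c)

    # composite trapezoid rule on the now-smooth integrand
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b)) + sum(g(a + k * h) for k in range(1, n))
    return h * s + fc * math.log((b - c) / (c - a))
```

The paper's piecewise-polynomial version applies the same analytic-removal idea cell by cell, which is what gives it a provable error bound where plain composite rules degrade near the pole.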
Research on the Evolutionary Strategy Based on AIS and Its Application on Numerical Integration
NASA Astrophysics Data System (ADS)
Bei, Li
Based on the features of artificial immune systems, a new evolutionary strategy is proposed for calculating the numerical integration of functions. This evolutionary strategy includes mechanisms for swarm searching and for constructing the fitness function. Finally, numerical examples are given to verify the effectiveness of the evolutionary strategy. The results show that the performance of the evolutionary strategy is satisfactory and more accurate than traditional methods of numerical integration, such as the trapezoid and Simpson formulas.
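For reference, the two traditional baselines named above can be sketched as standard composite rules (this is not the paper's AIS code, just the comparison targets):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule on n subintervals; error O(h^2)."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + k * h) for k in range(1, n)) + 0.5 * f(b))

def simpson(f, a, b, n):
    """Composite Simpson rule; n must be even; error O(h^4)."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + k * h) for k in range(1, n, 2))
    s += 2.0 * sum(f(a + k * h) for k in range(2, n, 2))
    return h * s / 3.0

# integral of exp over [0, 1] is e - 1
exact = math.e - 1.0
```

At n = 16 on this example, Simpson's error is roughly three to four orders of magnitude below the trapezoid's at the same cost, which is the bar any proposed integrator has to clear.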
NASA Technical Reports Server (NTRS)
VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.
2000-01-01
The tip clearance flows of transonic compressor rotors are important because they have a significant impact on rotor and stage performance. While numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data, because measurements of the necessary detail are rarely available for high-speed machines. In this paper we compare measured tip clearance flow details (e.g. trajectory and radial extent) with corresponding data obtained from a numerical simulation. Recommendations for achieving accurate numerical simulation of tip clearance flows are presented based on this comparison. Laser Doppler Velocimeter (LDV) measurements acquired in a transonic compressor rotor, NASA Rotor 35, are used. The tip clearance flow field of this transonic rotor was simulated using a Navier-Stokes turbomachinery solver that incorporates an advanced k-epsilon turbulence model derived for flows that are not in local equilibrium. Comparison between measured and simulated results indicates that simulation accuracy is primarily dependent upon the ability of the numerical code to resolve important details of a wall-bounded shear layer formed by the relative motion between the over-tip leakage flow and the shroud wall. A simple method is presented for determining the strength of this shear layer.
ACCURATE BUILDING INTEGRATED PHOTOVOLTAIC SYSTEM (BIPV) ARCHITECTURAL DESIGN TOOL
One of the leading areas of renewable energy applications for the twenty-first century is building integrated photovoltaics (BIPV). Integrating photovoltaics into building structures allows the costs of the PV system to be partially offset by the solar modules also serving as...
Seth A Veitzer
2008-10-21
Effects of stray electrons are a main factor limiting performance of many accelerators. Because heavy-ion fusion (HIF) accelerators will operate in regimes of higher current and with walls much closer to the beam than accelerators operating today, stray electrons might have a large, detrimental effect on the performance of an HIF accelerator. A primary source of stray electrons is electrons generated when halo ions strike the beam pipe walls. There is some research on these types of secondary electrons for the HIF community to draw upon, but this work is missing one crucial ingredient: the effect of grazing incidence. The overall goal of this project was to develop the numerical tools necessary to accurately model the effect of grazing incidence on the behavior of halo ions in an HIF accelerator, and further, to provide accurate models of heavy ion stopping powers with applications to ICF, WDM, and HEDP experiments.
NASA Astrophysics Data System (ADS)
Blackman, Jonathan; Field, Scott E.; Galley, Chad R.; Szilágyi, Béla; Scheel, Mark A.; Tiglio, Manuel; Hemberger, Daniel A.
2015-09-01
Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic ₋₂Yℓm waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).
Multigrid time-accurate integration of Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.
1993-01-01
Efficient acceleration techniques typical of explicit steady-state solvers are extended to time-accurate calculations. Stability restrictions are greatly reduced by means of a fully implicit time discretization. A four-stage Runge-Kutta scheme with local time stepping, residual smoothing, and multigridding is used instead of traditional time-expensive factorizations. Some applications to natural and forced unsteady viscous flows show the capability of the procedure.
Efficient and accurate numerical methods for the Klein-Gordon-Schroedinger equations
Bao, Weizhu. E-mail: bao@math.nus.edu.sg; Yang, Li. E-mail: yangli@nus.edu.sg
2007-08-10
In this paper, we present efficient, unconditionally stable and accurate numerical methods for approximations of the Klein-Gordon-Schroedinger (KGS) equations with/without damping terms. The key features of our methods are based on: (i) the application of a time-splitting spectral discretization for a Schroedinger-type equation in KGS; (ii) the utilization of Fourier pseudospectral discretization for spatial derivatives in the Klein-Gordon equation in KGS; (iii) the adoption of solving the ordinary differential equations (ODEs) in phase space analytically under appropriately chosen transmission conditions between different time intervals, or applying Crank-Nicolson/leap-frog for linear/nonlinear terms for time derivatives. The numerical methods are either explicit or implicit but explicitly solvable, unconditionally stable, and of spectral accuracy in space and second-order accuracy in time. Moreover, they are time reversible and time transverse invariant when there are no damping terms in KGS, conserve (or keep the same decay rate of) the wave energy as that in KGS without (or with a linear) damping term, keep the same dynamics of the mean value of the meson field, and give exact results for the plane-wave solution. Extensive numerical tests are presented to confirm the above properties of our numerical methods for KGS. Finally, the methods are applied to study solitary-wave collisions in one dimension (1D), as well as the dynamics of a 2D problem in KGS.
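Feature (i), the time-splitting spectral discretization, can be illustrated on the Schroedinger part alone. Below is a minimal Strang split-step Fourier sketch for i ψ_t = -ψ_xx/2 + V ψ on a periodic grid; the grid sizes and names are illustrative, and the actual KGS coupling is omitted:

```python
import numpy as np

def split_step(psi, dt, dx, steps, V):
    """Strang splitting: half kinetic step (exact in Fourier space),
    full potential step (exact pointwise), half kinetic step."""
    k = 2.0 * np.pi * np.fft.fftfreq(psi.size, d=dx)  # angular wavenumbers
    half_kin = np.exp(-0.25j * dt * k ** 2)           # exp(-i (dt/2) k^2 / 2)
    pot = np.exp(-1j * dt * V)
    for _ in range(steps):
        psi = np.fft.ifft(half_kin * np.fft.fft(psi))
        psi = pot * psi
        psi = np.fft.ifft(half_kin * np.fft.fft(psi))
    return psi

# Plane-wave check: with V = 0 the scheme propagates exp(i*k0*x)
# exactly (up to roundoff), acquiring the analytic phase exp(-i k0^2 t / 2),
# mirroring the exact plane-wave property claimed in the abstract.
n = 64
dx = 2.0 * np.pi / n
x = np.arange(n) * dx
psi0 = np.exp(3j * x)
out = split_step(psi0, 0.01, dx, 100, np.zeros(n))
```

Each sub-step is solved exactly in its natural representation, which is why the composite scheme is unconditionally stable and spectrally accurate in space.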
Integrated optical circuits for numerical computation
NASA Technical Reports Server (NTRS)
Verber, C. M.; Kenan, R. P.
1983-01-01
The development of integrated optical circuits (IOC) for numerical-computation applications is reviewed, with a focus on the use of systolic architectures. The basic architecture criteria for optical processors are shown to be the same as those proposed by Kung (1982) for VLSI design, and the advantages of IOCs over bulk techniques are indicated. The operation and fabrication of electrooptic grating structures are outlined, and the application of IOCs of this type to an existing 32-bit, 32-Mbit/sec digital correlator, a proposed matrix multiplier, and a proposed pipeline processor for polynomial evaluation is discussed. The problems arising from the inherent nonlinearity of electrooptic gratings are considered. Diagrams and drawings of the application concepts are provided.
Keyword Search over Data Service Integration for Accurate Results
NASA Astrophysics Data System (ADS)
Zemleris, Vidmantas; Kuznetsov, Valentin; Gwadera, Robert
2014-06-01
Virtual Data Integration provides a coherent interface for querying heterogeneous data sources (e.g., web services, proprietary systems) with minimum upfront effort. Still, this requires its users to learn a new query language and to get acquainted with data organization which may pose problems even to proficient users. We present a keyword search system, which proposes a ranked list of structured queries along with their explanations. It operates mainly on the metadata, such as the constraints on inputs accepted by services. It was developed as an integral part of the CMS data discovery service, and is currently available as open source.
Accurate Anharmonic IR Spectra from Integrated Cc/dft Approach
NASA Astrophysics Data System (ADS)
Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien; Carnimeo, Ivan; Puzzarini, Cristina
2014-06-01
The recent implementation of the computation of infrared (IR) intensities beyond the double harmonic approximation [1] paved the route to routine calculations of infrared spectra for a wide set of molecular systems. Contrary to common beliefs, second-order perturbation theory is able to deliver results of high accuracy provided that anharmonic resonances are properly managed [1,2]. It has already been shown for several small closed- and open-shell molecular systems that the differences between coupled cluster (CC) and DFT anharmonic wavenumbers are mainly due to the harmonic terms, paving the route to introduce effective yet accurate hybrid CC/DFT schemes [2]. In this work we show that hybrid CC/DFT models can be applied also to the IR intensities, leading to the simulation of highly accurate fully anharmonic IR spectra for medium-size molecules, including ones of atmospheric interest, showing in all cases good agreement with experiment even in the spectral ranges where non-fundamental transitions are predominant [3]. [1] J. Bloino and V. Barone, J. Chem. Phys. 136, 124108 (2012) [2] V. Barone, M. Biczysko, J. Bloino, Phys. Chem. Chem. Phys., 16, 1759-1787 (2014) [3] I. Carnimeo, C. Puzzarini, N. Tasinato, P. Stoppa, A. P. Charmet, M. Biczysko, C. Cappelli and V. Barone, J. Chem. Phys., 139, 074310 (2013)
NASA Astrophysics Data System (ADS)
Plakhov, Iu. V.; Mytsenko, A. V.; Shel'Pov, V. A.
A numerical integration method is developed that is more accurate than Everhart's (1974) implicit single-sequence approach for integrating orbits. This method can be used to solve problems of space geodesy based on the use of highly precise laser observations.
NASA Technical Reports Server (NTRS)
Hickey, Michael Philip
1988-01-01
A proposed replacement scheme for the integration of the barometric and diffusion equations in the NASA Marshall Engineering Thermosphere (MET) model is presented. This proposed integration scheme is based on Gaussian Quadrature. Extensive numerical testing reveals it to be faster, more accurate and more reliable than the present integration scheme (a modified form of Simpson's Rule) used in the MET model. Numerous graphical examples are provided, along with a listing of a modified form of the MET model in which subroutine INTEGRATE (using Simpson's Rule) is replaced by subroutine GAUSS (which uses Gaussian Quadrature). It is recommended that the Gaussian Quadrature integration scheme, as used here, be used in the MET model.
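The MET-specific subroutines (GAUSS, INTEGRATE) are not reproduced here, but the underlying idea — replacing Simpson's rule with Gaussian quadrature — can be sketched; the wrapper below is an illustrative stand-in using NumPy's Gauss-Legendre nodes:

```python
import numpy as np

def gauss_integrate(f, a, b, n=8):
    """Integrate f over [a, b] with n-point Gauss-Legendre quadrature.

    An n-point rule is exact for polynomials of degree 2n - 1, so it
    typically needs far fewer function evaluations than Simpson's rule
    for smooth integrands such as atmospheric density profiles.
    """
    x, w = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    mid, half = 0.5 * (a + b), 0.5 * (b - a)   # affine map to [a, b]
    return half * np.sum(w * f(mid + half * x))
```

The speed advantage reported for the MET model follows directly from this node economy: fewer evaluations of the barometric/diffusion integrands per integral at equal or better accuracy.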
Numerical integration of asymptotic solutions of ordinary differential equations
NASA Technical Reports Server (NTRS)
Thurston, Gaylen A.
1989-01-01
Classical asymptotic analysis of ordinary differential equations derives approximate solutions that are numerically stable. However, the analysis also leads to tedious expansions in powers of the relevant parameter for a particular problem. The expansions are replaced with integrals that can be evaluated by numerical integration. The resulting numerical solutions retain the linear independence that is the main advantage of asymptotic solutions. Examples, including the Falkner-Skan equation from laminar boundary layer theory, illustrate the method of asymptotic analysis with numerical integration.
On the accuracy of numerical integration over the unit sphere applied to full network models
NASA Astrophysics Data System (ADS)
Itskov, Mikhail
2016-05-01
This paper is motivated by a recent study by Verron (Mech Mater 89:216-228, 2015) which revealed huge errors of the numerical integration over the unit sphere in application to large strain problems. For the verification of numerical integration schemes we apply here other analytical integrals over the unit sphere which demonstrate much more accurate results. Relative errors of these integrals with respect to corresponding analytical solutions are evaluated also for a full network model of rubber elasticity based on a Padé approximation of the inverse Langevin function as the chain force. According to the results of our study, the numerical integration over the unit sphere can still be considered as a reliable and accurate tool for full network models.
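As a concrete instance of such verification integrals, a simple product rule over the unit sphere (Gauss-Legendre in cos θ, uniform in φ — one common choice for direction averaging, though not necessarily the scheme analyzed in the paper) can be checked against directional averages known in closed form:

```python
import numpy as np

def sphere_average(f, n_theta=8, n_phi=16):
    """Average of f(d) over unit directions d: Gauss-Legendre in
    mu = cos(theta), uniform trapezoid rule in the periodic angle phi."""
    mu, w = np.polynomial.legendre.leggauss(n_theta)
    phi = 2.0 * np.pi * np.arange(n_phi) / n_phi
    total = 0.0
    for m, wm in zip(mu, w):
        s = np.sqrt(1.0 - m * m)
        for p in phi:
            d = np.array([s * np.cos(p), s * np.sin(p), m])
            total += wm * f(d)
    # Gauss weights sum to 2 and each phi node carries 2*pi/n_phi,
    # so dividing by 2*n_phi normalizes by the sphere area 4*pi.
    return total / (2.0 * n_phi)

# Exact checks: <d_z^2> = 1/3 and <d_x^4> = 1/5 over the unit sphere.
```

For smooth direction-dependent integrands such rules are exact up to the polynomial degree of the nodes; the difficulties Verron reported arise when the chain-force integrand becomes nearly singular at large stretch.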
Takahashi, F; Endo, A
2007-01-01
A system utilising radiation transport codes has been developed to derive accurate dose distributions in a human body for radiological accidents. A suitable model is quite essential for a numerical analysis. Therefore, two tools were developed to set up a 'problem-dependent' input file, defining a radiation source and an exposed person to simulate the radiation transport in an accident with the Monte Carlo calculation codes MCNP and MCNPX. Necessary resources are defined through a dialogue method on a generally used personal computer for both tools. The tools prepare human body and source models described in the input file format of the employed Monte Carlo codes. The tools were validated for dose assessment in comparison with a past criticality accident and a hypothesized exposure. PMID:17510203
A novel numerical technique to obtain an accurate solution to the Thomas-Fermi equation
NASA Astrophysics Data System (ADS)
Parand, Kourosh; Yousefi, Hossein; Delkhosh, Mehdi; Ghaderi, Amin
2016-07-01
In this paper, a new algorithm based on the fractional order of rational Euler functions (FRE) is introduced to study the Thomas-Fermi (TF) model which is a nonlinear singular ordinary differential equation on a semi-infinite interval. This problem, using the quasilinearization method (QLM), converts to the sequence of linear ordinary differential equations to obtain the solution. For the first time, the rational Euler (RE) and the FRE have been made based on Euler polynomials. In addition, the equation will be solved on a semi-infinite domain without truncating it to a finite domain by taking FRE as basic functions for the collocation method. This method reduces the solution of this problem to the solution of a system of algebraic equations. We demonstrated that the new proposed algorithm is efficient for obtaining the values of y'(0), y(x), and y'(x). Comparison with some numerical and analytical solutions shows that the present solution is highly accurate.
Recommendations for accurate numerical blood flow simulations of stented intracranial aneurysms.
Janiga, Gábor; Berg, Philipp; Beuing, Oliver; Neugebauer, Mathias; Gasteiger, Rocco; Preim, Bernhard; Rose, Georg; Skalej, Martin; Thévenin, Dominique
2013-06-01
The number of scientific publications dealing with stented intracranial aneurysms is rapidly increasing. Powerful computational facilities are now available; an accurate computational modeling of hemodynamics in patient-specific configurations is, however, still being sought. Furthermore, there is still no general agreement on the quantities that should be computed and on the most adequate analysis for intervention support. In this article, the accurate representation of patient geometry is first discussed, involving successive improvements. Concerning the second step, the mesh required for the numerical simulation is especially challenging when deploying a stent with very fine wire structures. Third, the description of the fluid properties is a major challenge. Finally, a founded quantitative analysis of the simulation results is obviously needed to support interventional decisions. In the present work, an attempt has been made to review the most important steps for a high-quality computational fluid dynamics computation of virtually stented intracranial aneurysms. In consequence, this leads to concrete recommendations, whereby the obtained results are not discussed for their medical relevance but for the evaluation of their quality. This investigation might hopefully be helpful for further studies considering stent deployment in patient-specific geometries, in particular regarding the generation of the most appropriate computational model. PMID:23729530
PolyPole-1: An accurate numerical algorithm for intra-granular fission gas release
NASA Astrophysics Data System (ADS)
Pizzocri, D.; Rabiti, C.; Luzzi, L.; Barani, T.; Van Uffelen, P.; Pastore, G.
2016-09-01
The transport of fission gas from within the fuel grains to the grain boundaries (intra-granular fission gas release) is a fundamental controlling mechanism of fission gas release and gaseous swelling in nuclear fuel. Hence, accurate numerical solution of the corresponding mathematical problem needs to be included in fission gas behaviour models used in fuel performance codes. Under the assumption of equilibrium between trapping and resolution, the process can be described mathematically by a single diffusion equation for the gas atom concentration in a grain. In this paper, we propose a new numerical algorithm (PolyPole-1) to efficiently solve the fission gas diffusion equation in time-varying conditions. The PolyPole-1 algorithm is based on the analytic modal solution of the diffusion equation for constant conditions, combined with polynomial corrective terms that embody the information on the deviation from constant conditions. The new algorithm is verified by comparing the results to a finite difference solution over a large number of randomly generated operation histories. Furthermore, comparison to state-of-the-art algorithms used in fuel performance codes demonstrates that the accuracy of PolyPole-1 is superior to other algorithms, with similar computational effort. Finally, the concept of PolyPole-1 may be extended to the solution of the general problem of intra-granular fission gas diffusion during non-equilibrium trapping and resolution, which will be the subject of future work.
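For the constant-conditions building block, the analytic modal solution mentioned above has a compact classical form (the Booth solution for an initially uniform gas concentration in a spherical grain; the source-driven and time-varying cases that PolyPole-1 actually targets add further terms, so this is only the starting point):

```python
import math

def fractional_release(D, a, t, n_modes=500):
    """Fraction of gas released from a sphere of radius a after time t
    at constant diffusivity D, from the eigenmode expansion
        f(t) = 1 - (6/pi^2) * sum_n exp(-n^2 pi^2 D t / a^2) / n^2.
    Truncating at n_modes leaves a residual of order 1/n_modes at t = 0.
    """
    tau = D * t / (a * a)  # dimensionless diffusion time
    s = sum(math.exp(-(n * math.pi) ** 2 * tau) / (n * n)
            for n in range(1, n_modes + 1))
    return 1.0 - (6.0 / math.pi ** 2) * s
```

PolyPole-1's contribution is precisely to reuse this modal structure while correcting, via polynomial terms, for diffusivity and source histories that are not constant.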
NASA Technical Reports Server (NTRS)
Ellison, Donald; Conway, Bruce; Englander, Jacob
2015-01-01
A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model is used, one that might include perturbing forces such as the gravitational effect of multiple third bodies and solar radiation pressure, then these STMs must be computed numerically. We present a method for the power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
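The flight implementation (eighth-order Dormand-Prince, vectorized state plus STM) is not reproduced here, but the adaptive embedded Dormand-Prince idea can be sketched with the classic 5(4) pair on a scalar ODE; the coefficients are the standard DOPRI5 tableau, and the step-size controller is a textbook choice rather than the tool's:

```python
import math

# Dormand-Prince 5(4) tableau (c_i equals the row sum of A)
A = [
    [],
    [1/5],
    [3/40, 9/40],
    [44/45, -56/15, 32/9],
    [19372/6561, -25360/2187, 64448/6561, -212/729],
    [9017/3168, -355/33, 46732/5247, 49/176, -5103/18656],
    [35/384, 0.0, 500/1113, 125/192, -2187/6784, 11/84],
]
B5 = [35/384, 0.0, 500/1113, 125/192, -2187/6784, 11/84, 0.0]  # 5th order
B4 = [5179/57600, 0.0, 7571/16695, 393/640, -92097/339200, 187/2100, 1/40]

def dopri5(f, t0, y0, t_end, tol=1e-9):
    """Integrate y' = f(t, y) from t0 to t_end with adaptive steps:
    accept the 5th-order result when |y5 - y4| <= tol, and rescale h
    from the embedded error estimate either way."""
    t, y, h = t0, y0, (t_end - t0) / 100.0
    while t < t_end - 1e-12:
        h = min(h, t_end - t)
        k = []
        for row in A:
            yi = y + h * sum(a * kj for a, kj in zip(row, k))
            k.append(f(t + h * sum(row), yi))
        y5 = y + h * sum(b * kj for b, kj in zip(B5, k))
        y4 = y + h * sum(b * kj for b, kj in zip(B4, k))
        err = abs(y5 - y4)
        if err <= tol:
            t, y = t + h, y5
        h *= min(5.0, max(0.2, 0.9 * (tol / (err + 1e-300)) ** 0.2))
    return y
```

The same accept/reject and rescale logic carries over to the eighth-order pair; for STM propagation one simply integrates the variational equations alongside the state with the identical step sequence.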
Earthquake Rupture Dynamics using Adaptive Mesh Refinement and High-Order Accurate Numerical Methods
NASA Astrophysics Data System (ADS)
Kozdon, J. E.; Wilcox, L.
2013-12-01
Our goal is to develop scalable and adaptive (spatial and temporal) numerical methods for coupled, multiphysics problems using high-order accurate numerical methods. To do so, we are developing an open-source, parallel library known as bfam (available at http://bfam.in). The first application to be developed on top of bfam is an earthquake rupture dynamics solver using high-order discontinuous Galerkin methods and summation-by-parts finite difference methods. In earthquake rupture dynamics, wave propagation in the Earth's crust is coupled to frictional sliding on fault interfaces. This coupling is two-way, requiring the simultaneous simulation of both processes. The use of laboratory-measured friction parameters requires near-fault resolution that is 4-5 orders of magnitude higher than that needed to resolve the frequencies of interest in the volume. This, along with earlier simulations using a low-order, finite volume based adaptive mesh refinement framework, suggests that adaptive mesh refinement is ideally suited for this problem. The use of high-order methods is motivated by the high level of resolution required off the fault in the earlier low-order finite volume simulations; we believe this need for resolution is a result of the excessive numerical dissipation of low-order methods. In bfam, spatial adaptivity is handled using the p4est library and temporal adaptivity will be accomplished through local time stepping. In this presentation we will present the guiding principles behind the library as well as verification of the code against the Southern California Earthquake Center dynamic rupture code validation test problems.
NASA Astrophysics Data System (ADS)
Chen, Duan; Cai, Wei; Zinser, Brian; Cho, Min Hyung
2016-09-01
In this paper, we develop an accurate and efficient Nyström volume integral equation (VIE) method for the Maxwell equations for a large number of 3-D scatterers. The Cauchy Principal Values that arise from the VIE are computed accurately using a finite size exclusion volume together with explicit correction integrals consisting of removable singularities. Also, the hyper-singular integrals are computed using interpolated quadrature formulae with tensor-product quadrature nodes for cubes, spheres and cylinders, that are frequently encountered in the design of meta-materials. The resulting Nyström VIE method is shown to have high accuracy with a small number of collocation points and demonstrates p-convergence for computing the electromagnetic scattering of these objects. Numerical calculations of multiple scatterers of cubic, spherical, and cylindrical shapes validate the efficiency and accuracy of the proposed method.
Numerical Simulation of the 2004 Indian Ocean Tsunami: Accurate Flooding and drying in Banda Aceh
NASA Astrophysics Data System (ADS)
Cui, Haiyang; Pietrzak, Julie; Stelling, Guus; Androsov, Alexey; Harig, Sven
2010-05-01
The Indian Ocean Tsunami on December 26, 2004 caused one of the largest tsunamis in recent times and led to widespread devastation and loss of life. One of the worst hit regions was Banda Aceh, the capital of the Aceh province, located in the northern part of Sumatra, 150 km from the source of the earthquake. A German-Indonesian Tsunami Early Warning System (GITEWS) (www.gitews.de) is currently under active development. The work presented here is carried out within the GITEWS framework. One of the aims of this project is the development of accurate models with which to simulate the propagation, flooding and drying, and run-up of a tsunami. In this context, TsunAWI has been developed by the Alfred Wegener Institute; it is an explicit finite element model. However, the accurate numerical simulation of flooding and drying requires the conservation of mass and momentum. This is not possible in the current version of TsunAWI. The P1NC-P1 element guarantees mass conservation in a global sense, yet as we show here it is important to guarantee mass conservation at the local level, that is, within each individual cell. Here an unstructured grid, finite volume ocean model is presented. It is derived from the P1NC-P1 element, and is shown to be mass and momentum conserving. Then a number of simulations are presented, including dam break problems with flooding over both a wet and a dry bed. Excellent agreement is found. Finally, we present simulations for Banda Aceh and compare the results to on-site survey data, as well as to results from the original TsunAWI code.
Fast and accurate computation of two-dimensional non-separable quadratic-phase integrals.
Koç, Aykut; Ozaktas, Haldun M; Hesselink, Lambertus
2010-06-01
We report a fast and accurate algorithm for numerical computation of two-dimensional non-separable linear canonical transforms (2D-NS-LCTs). Also known as quadratic-phase integrals, this class of integral transforms represents a broad class of optical systems including Fresnel propagation in free space, propagation in graded-index media, passage through thin lenses, and arbitrary concatenations of any number of these, including anamorphic/astigmatic/non-orthogonal cases. The general two-dimensional non-separable case poses several challenges which do not exist in the one-dimensional case and the separable two-dimensional case. The algorithm takes approximately N log N time, where N is the two-dimensional space-bandwidth product of the signal. Our method properly tracks and controls the space-bandwidth products in two dimensions, in order to achieve information theoretically sufficient, but not wastefully redundant, sampling required for the reconstruction of the underlying continuous functions at any stage of the algorithm. Additionally, we provide an alternative definition of general 2D-NS-LCTs that shows its kernel explicitly in terms of its ten parameters, and relate these parameters bidirectionally to conventional ABCD matrix parameters. PMID:20508697
TOPLHA: an accurate and efficient numerical tool for analysis and design of LH antennas
NASA Astrophysics Data System (ADS)
Milanesio, D.; Lancellotti, V.; Meneghini, O.; Maggiora, R.; Vecchi, G.; Bilato, R.
2007-09-01
Auxiliary ICRF heating systems in tokamaks often involve large complex antennas, made up of several conducting straps hosted in distinct cavities that open towards the plasma. The same holds especially true in the LH regime, wherein the antennas are comprised of arrays of many phased waveguides. Upon observing that the various cavities or waveguides couple to each other only through the EM fields existing over the plasma-facing apertures, we self-consistently formulated the EM problem by a convenient set of multiple coupled integral equations. Subsequent application of the Method of Moments yields a highly sparse algebraic system; formal inversion of the system matrix is therefore not very memory demanding, even though the number of unknowns may be quite large (typically 10^5 or so). The overall strategy has been implemented in an enhanced version of TOPICA (Torino Polytechnic Ion Cyclotron Antenna) and in a newly developed code named TOPLHA (Torino Polytechnic Lower Hybrid Antenna). Both are simulation and prediction tools for plasma facing antennas that incorporate commercial-grade 3D graphic interfaces along with an accurate description of the plasma. In this work we present the new proposed formulation along with examples of application to real life large LH antenna systems.
TOPICA: an accurate and efficient numerical tool for analysis and design of ICRF antennas
NASA Astrophysics Data System (ADS)
Lancellotti, V.; Milanesio, D.; Maggiora, R.; Vecchi, G.; Kyrytsya, V.
2006-07-01
The demand for a predictive tool to help in designing ion-cyclotron radio frequency (ICRF) antenna systems for today's fusion experiments has driven the development of codes such as ICANT, RANT3D, and the early development of the TOPICA (TOrino Polytechnic Ion Cyclotron Antenna) code. This paper describes the substantive evolution of the TOPICA formulation and implementation that presently allows it to handle the actual geometry of ICRF antennas (with curved, solid straps, a general-shape housing, Faraday screen, etc) as well as an accurate plasma description, accounting for density and temperature profiles and finite Larmor radius effects. The antenna is assumed to be housed in a recess-like enclosure. Both goals have been attained by formally separating the problem into two parts: the vacuum region around the antenna and the plasma region inside the toroidal chamber. Field continuity and boundary conditions allow the formulation of a set of two coupled integral equations for the unknown equivalent (current) sources; the equations are then reduced to a linear system by a method of moments solution scheme employing 2D finite elements defined over a 3D non-planar surface triangular-cell mesh. In the vacuum region calculations are done in the spatial (configuration) domain, whereas in the plasma region a spectral (wavenumber) representation of fields and currents is adopted, thus permitting a description of the plasma by a surface impedance matrix. Owing to this approach, any plasma model can be used in principle, and at present the FELICE code has been employed. The natural outcomes of TOPICA are the induced currents on the conductors (antenna, housing, etc) and the electric field in front of the plasma, whence the antenna circuit parameters (impedance/scattering matrices), the radiated power and the fields (at locations other than the chamber aperture) are then obtained. An accurate model of the feeding coaxial lines is also included. The theoretical model and its TOPICA
Error Estimates for Numerical Integration Rules
ERIC Educational Resources Information Center
Mercer, Peter R.
2005-01-01
The starting point for this discussion of error estimates is the fact that integrals that arise in Fourier series have properties that can be used to get improved bounds. This idea is extended to more general situations.
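The classical error estimate that such discussions refine can be checked numerically. The sketch below (not from the article; the integrand is an arbitrary example) verifies the standard trapezoidal-rule bound |error| <= (b - a) h^2 max|f''| / 12 for f(x) = sin x on [0, pi]:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule on [a, b] with n equal panels."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

# Classical estimate: |I - T_n| <= (b - a) * h^2 * max|f''| / 12.
# For f = sin on [0, pi], max|f''| = 1 and the exact integral is 2.
n = 100
h = math.pi / n
bound = math.pi * h ** 2 / 12
err = abs(trapezoid(math.sin, 0.0, math.pi, n) - 2.0)
# the actual error is positive but stays below the a priori bound
```

The point of improved bounds, as in the article, is that for special integrands (e.g. those arising in Fourier series) this generic estimate can be far from sharp.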
NASA Technical Reports Server (NTRS)
Sidi, A.; Israeli, M.
1986-01-01
High accuracy numerical quadrature methods for integrals of singular periodic functions are proposed. These methods are based on the appropriate Euler-Maclaurin expansions of trapezoidal rule approximations and their extrapolations. They are used to obtain accurate quadrature methods for the solution of singular and weakly singular Fredholm integral equations. Such periodic equations are used in the solution of planar elliptic boundary value problems, elasticity, potential theory, conformal mapping, boundary element methods, free surface flows, etc. The use of the quadrature methods is demonstrated with numerical examples.
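The Euler-Maclaurin reasoning behind such methods starts from the fact that the plain trapezoidal rule is already spectrally accurate for smooth periodic integrands, because all boundary correction terms cancel. A minimal illustration (not taken from the paper; the integrand is an arbitrary smooth periodic example with a known integral):

```python
import math

def trapezoid_periodic(f, a, b, n):
    """Trapezoidal rule on [a, b] with n equal panels. For a smooth
    (b - a)-periodic integrand f(a) == f(b), so the rule collapses to an
    equal-weight sum and, by the Euler-Maclaurin expansion, converges
    faster than any power of 1/n."""
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

# Smooth periodic test integrand: the integral of exp(cos x) over one
# period [0, 2*pi] equals 2*pi*I_0(1) = 7.954926521012846...
exact = 7.954926521012846
approx = trapezoid_periodic(lambda x: math.exp(math.cos(x)), 0.0, 2 * math.pi, 20)
# with only 20 points the error is already near machine precision
```

The quadrature methods of the paper extend this mechanism to singular periodic integrands, where the naive cancellation no longer holds.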
Quantum Calisthenics: Gaussians, The Path Integral and Guided Numerical Approximations
Weinstein, Marvin; /SLAC
2009-02-12
It is apparent to anyone who thinks about it that, to a large degree, the basic concepts of Newtonian physics are quite intuitive, but quantum mechanics is not. My purpose in this talk is to introduce you to a new, much more intuitive way to understand how quantum mechanics works. I begin with an incredibly easy way to derive the time evolution of a Gaussian wave-packet for the case of free and harmonic motion without any need to know the eigenstates of the Hamiltonian. This discussion is completely analytic and I will later use it to relate the solution for the behavior of the Gaussian packet to the Feynman path-integral and stationary phase approximation. It will be clear that using the information about the evolution of the Gaussian in this way goes far beyond what the stationary phase approximation tells us. Next, I introduce the concept of the bucket brigade approach to dealing with problems that cannot be handled totally analytically. This approach combines the intuition obtained in the initial discussion, as well as the intuition obtained from the path-integral, with simple numerical tools. My goal is to show that, for any specific process, there is a simple Hilbert space interpretation of the stationary phase approximation. I will then argue that, from the point of view of numerical approximations, the trajectory obtained from my generalization of the stationary phase approximation specifies that subspace of the full Hilbert space that is needed to compute the time evolution of the particular state under the full Hamiltonian. The prescription I will give is totally non-perturbative and we will see, by the grace of Maple animations computed for the case of the anharmonic oscillator Hamiltonian, that this approach allows surprisingly accurate computations to be performed with very little work. I think of this approach to the path-integral as defining what I call a guided numerical approximation scheme. After the discussion of the anharmonic oscillator I will
A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction
NASA Technical Reports Server (NTRS)
Bockelie, Michael J.; Eiseman, Peter R.
1990-01-01
A time accurate, general purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction correction method that is simple to implement and ensures the time accuracy of the grid. Time accurate solutions of the 2-D Euler equations for an unsteady shock vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.
NASA Astrophysics Data System (ADS)
Wosnik, M.; Bachant, P.
2014-12-01
Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today--a common model is the actuator disk concept--are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal axis devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier--Stokes simulations, to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of
Numerical evaluation of Feynman path integrals
NASA Astrophysics Data System (ADS)
Baird, William Hugh
1999-11-01
The notion of path integration developed by Feynman, while an incredibly successful method of solving quantum mechanical problems, leads to frequently intractable integrations over an infinite number of paths. Two methods now exist which sidestep this difficulty by defining "densities" of actions which give the relative number of paths found at different values of the action. These densities are sampled by computer generation of paths and the propagators are found to a high degree of accuracy for the case of a particle on the infinite half line and in a finite square well in one dimension. The problem of propagation within a two dimensional radial well is also addressed as the precursor to the problem of a particle in a stadium (quantum billiard).
Numerical integration of ordinary differential equations of various orders
NASA Technical Reports Server (NTRS)
Gear, C. W.
1969-01-01
This report describes techniques for the numerical integration of differential equations of various orders. Modified multistep predictor-corrector methods for general initial-value problems are discussed and new methods are introduced.
A Numerical Study of Hypersonic Forebody/Inlet Integration Problem
NASA Technical Reports Server (NTRS)
Kumar, Ajay
1991-01-01
A numerical study of the hypersonic forebody/inlet integration problem is presented in the form of view-graphs. The following topics are covered: physical/chemical modeling; solution procedure; flow conditions; mass flow rate at the inlet face; heating and skin friction loads; 3-D forebody/inlet integration model; and sensitivity studies.
Translation and integration of numerical atomic orbitals in linear molecules.
Heinäsmäki, Sami
2014-02-14
We present algorithms for translation and integration of atomic orbitals for LCAO calculations in linear molecules. The method applies to arbitrary radial functions given on a numerical mesh. The algorithms are based on pseudospectral differentiation matrices in two dimensions and the corresponding two-dimensional Gaussian quadratures. As a result, multicenter overlap and Coulomb integrals can be evaluated effectively. PMID:24527905
Translation and integration of numerical atomic orbitals in linear molecules
NASA Astrophysics Data System (ADS)
Heinäsmäki, Sami
2014-02-01
We present algorithms for translation and integration of atomic orbitals for LCAO calculations in linear molecules. The method applies to arbitrary radial functions given on a numerical mesh. The algorithms are based on pseudospectral differentiation matrices in two dimensions and the corresponding two-dimensional Gaussian quadratures. As a result, multicenter overlap and Coulomb integrals can be evaluated effectively.
Cobb, J.W.
1995-02-01
There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
NASA Astrophysics Data System (ADS)
Tang, Xiaojun
2016-04-01
The main purpose of this work is to provide multiple-interval integral Gegenbauer pseudospectral methods for solving optimal control problems. The latest developed single-interval integral Gauss/(flipped Radau) pseudospectral methods can be viewed as special cases of the proposed methods. We present an exact and efficient approach to compute the mesh pseudospectral integration matrices for the Gegenbauer-Gauss and flipped Gegenbauer-Gauss-Radau points. Numerical results on benchmark optimal control problems confirm the ability of the proposed methods to obtain highly accurate solutions.
AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)
A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...
Numerical integration of discontinuities on arbitrary domains based on moment fitting
NASA Astrophysics Data System (ADS)
Joulaian, Meysam; Hubrich, Simeon; Düster, Alexander
2016-03-01
Discretization methods based on meshes that do not conform to the geometry of the problem under consideration require special treatment when it comes to the integration of finite elements that are broken by the boundary or internal interfaces. To this end, we propose a numerical approach suitable for integrating broken elements with a low number of integration points. In this method, which is based on the moment fitting approach, an individual quadrature rule is set up for each cut element. The approach requires a B-rep representation of the broken element, which can be achieved either by processing a triangulated surface obtained from CAD software or by taking advantage of a voxel model resulting from computed tomography. The numerical examples presented in this paper reveal that the proposed method delivers very accurate results for a wide variety of geometrical situations while requiring a rather low number of integration points.
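The core of the moment fitting idea can be shown in a deliberately simplified one-dimensional setting: for a hypothetical "cut" element occupying [0, c] of a parent element [0, 1], solve a small linear system so that fixed integration points integrate the monomial moments exactly. The point locations and cut position below are illustrative, not from the paper:

```python
def moment_fitting_weights(points, cut):
    """Quadrature weights on the cut subinterval [0, cut] of a parent
    element [0, 1]: require that the monomial moments 1, x, x^2, ... are
    integrated exactly at the given fixed points. The small
    Vandermonde-type system is solved by Gaussian elimination."""
    n = len(points)
    # A[i][j] = points[j]**i,  b[i] = integral of x^i over [0, cut]
    A = [[p ** i for p in points] for i in range(n)]
    b = [cut ** (i + 1) / (i + 1) for i in range(n)]
    for col in range(n):                      # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    w = [0.0] * n                             # back substitution
    for r in range(n - 1, -1, -1):
        s = b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))
        w[r] = s / A[r][r]
    return w

# hypothetical cut element occupying [0, 0.6] of the parent [0, 1]
pts = [0.1, 0.3, 0.5]
w = moment_fitting_weights(pts, 0.6)
# by construction the rule integrates any quadratic exactly on [0, 0.6]
integral = sum(wi * (3 * p ** 2) for wi, p in zip(w, pts))  # exact: 0.6**3
```

The paper's contribution is doing this robustly in 2D/3D, where the moments over the cut region must themselves be computed from a B-rep or voxel description of the broken element.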
Numerical integration of discontinuities on arbitrary domains based on moment fitting
NASA Astrophysics Data System (ADS)
Joulaian, Meysam; Hubrich, Simeon; Düster, Alexander
2016-06-01
Discretization methods based on meshes that do not conform to the geometry of the problem under consideration require special treatment when it comes to the integration of finite elements that are broken by the boundary or internal interfaces. To this end, we propose a numerical approach suitable for integrating broken elements with a low number of integration points. In this method, which is based on the moment fitting approach, an individual quadrature rule is set up for each cut element. The approach requires a B-rep representation of the broken element, which can be achieved either by processing a triangulated surface obtained from CAD software or by taking advantage of a voxel model resulting from computed tomography. The numerical examples presented in this paper reveal that the proposed method delivers very accurate results for a wide variety of geometrical situations while requiring a rather low number of integration points.
Danshita, Ippei; Polkovnikov, Anatoli
2010-09-01
We study the quantum dynamics of supercurrents of one-dimensional Bose gases in a ring optical lattice to verify instanton methods applied to coherent macroscopic quantum tunneling (MQT). We directly simulate the real-time quantum dynamics of supercurrents, where a coherent oscillation between two macroscopically distinct current states occurs due to MQT. The tunneling rate extracted from the coherent oscillation is compared with that given by the instanton method. We find that the instanton method is quantitatively accurate when the effective Planck's constant is sufficiently small. We also find phase slips associated with the oscillations.
NASA Technical Reports Server (NTRS)
Przekwas, A. J.; Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.
2006-01-01
Detailed information of the flow-fields in the secondary flowpaths and their interaction with the primary flows in gas turbine engines is necessary for successful designs with optimized secondary flow streams. Present work is focused on the development of a simulation methodology for coupled time-accurate solutions of the two flowpaths. The secondary flowstream is treated using SCISEAL, an unstructured adaptive Cartesian grid code developed for secondary flows and seals, while the mainpath flow is solved using TURBO, a density based code with capability of resolving rotor-stator interaction in multi-stage machines. An interface is being tested that links the two codes at the rim seal to allow data exchange between the two codes for parallel, coupled execution. A description of the coupling methodology and the current status of the interface development is presented. Representative steady-state solutions of the secondary flow in the UTRC HP Rig disc cavity are also presented.
Differential-equation-based representation of truncation errors for accurate numerical simulation
NASA Astrophysics Data System (ADS)
MacKinnon, Robert J.; Johnson, Richard W.
1991-09-01
High-order compact finite difference schemes for 2D convection-diffusion-type differential equations with constant and variable convection coefficients are derived. The governing equations are employed to represent leading truncation terms, including cross-derivatives, making the overall O(h^4) schemes conform to a 3 x 3 stencil. It is shown that the two-dimensional constant coefficient scheme collapses to the optimal scheme for the one-dimensional case wherein the finite difference equation yields nodally exact results. The two-dimensional schemes are tested against standard model problems, including a Navier-Stokes application. Results show that the two schemes are generally more accurate, on comparable grids, than O(h^2) centered differencing and commonly used O(h) and O(h^3) upwinding schemes.
Towards more accurate numerical modeling of impedance based high frequency harmonic vibration
NASA Astrophysics Data System (ADS)
Lim, Yee Yan; Kiong Soh, Chee
2014-03-01
The application of smart materials in various fields of engineering has recently become increasingly popular. For instance, the high frequency based electromechanical impedance (EMI) technique employing smart piezoelectric materials is found to be versatile in structural health monitoring (SHM). Thus far, considerable efforts have been made to study and improve the technique. Various theoretical models of the EMI technique have been proposed in an attempt to better understand its behavior. So far, the three-dimensional (3D) coupled field finite element (FE) model has proved to be the most accurate. However, large discrepancies between the results of the FE model and experimental tests, especially in terms of the slope and magnitude of the admittance signatures, continue to exist and are yet to be resolved. This paper presents a series of parametric studies using the 3D coupled field finite element method (FEM) on all properties of materials involved in the lead zirconate titanate (PZT) structure interaction of the EMI technique, to investigate their effect on the admittance signatures acquired. FE model updating is then performed by adjusting the parameters to match the experimental results. One of the main reasons for the lower accuracy, especially in terms of magnitude and slope, of previous FE models is the difficulty in determining the damping related coefficients and the stiffness of the bonding layer. In this study, using the hysteretic damping model in place of Rayleigh damping, which is used by most researchers in this field, and updated bonding stiffness, an improved and more accurate FE model is achieved. The results of this paper are expected to be useful for future study of the subject area in terms of research and application, such as modeling, design and optimization.
Efficient and Accurate Explicit Integration Algorithms with Application to Viscoplastic Models
NASA Technical Reports Server (NTRS)
Arya, Vinod K.
1994-01-01
Several explicit integration algorithms with self-adaptive time integration strategies are developed and investigated for efficiency and accuracy. These algorithms involve the second-order Runge-Kutta method, a lower-order Runge-Kutta pair of orders one and two, and the exponential integration method. The algorithms are applied to viscoplastic models put forth by Freed and Verrilli and by Bodner and Partom for thermal/mechanical loadings (including tensile, relaxation, and cyclic loadings). The large number of computations performed showed that, for comparable accuracy, the efficiency of an integration algorithm depends significantly on the type of application (loading). In general, however, for the aforementioned loadings and viscoplastic models, the exponential integration algorithm with the proposed self-adaptive time integration strategy performed as efficiently and accurately as, or better than, the other integration algorithms. Using this strategy for integrating viscoplastic models may lead to considerable savings in computer time (better efficiency) without adversely affecting the accuracy of the results. This conclusion should encourage the utilization of viscoplastic models in the stress analysis and design of structural components.
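The flavor of a self-adaptive time integration strategy can be conveyed with a minimal sketch: an embedded Euler/Heun pair whose difference estimates the local error and drives the step size. This is a generic illustration under simple assumptions, not the paper's algorithms or viscoplastic models:

```python
import math

def adaptive_heun(f, t0, y0, t_end, tol=1e-6, h=0.01):
    """Explicit integration with a simple self-adaptive time-step
    strategy: an embedded Euler (order 1) / Heun (order 2) pair gives a
    local error estimate; steps are accepted when it is below tol, and
    the step size is grown or shrunk by a standard safety-factor rule."""
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_low = y + h * k1                    # Euler predictor
        y_high = y + 0.5 * h * (k1 + k2)      # Heun corrector
        err = abs(y_high - y_low)
        if err <= tol or h <= 1e-12:
            t, y = t + h, y_high              # accept the step
        # grow/shrink h, limited to the range [0.2x, 2x]
        h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-30)) ** 0.5))
    return y

# moderately stiff relaxation problem: y' = -50 (y - cos t), y(0) = 0;
# the controller shrinks h through the fast transient, then grows it
y1 = adaptive_heun(lambda t, y: -50.0 * (y - math.cos(t)), 0.0, 0.0, 1.0)
```

The same accept/reject logic carries over to the higher-order pairs and exponential integrators discussed in the paper, with only the error estimate changing.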
TOPLHA: an accurate and efficient numerical tool for analysis and design of LH antennas
NASA Astrophysics Data System (ADS)
Milanesio, D.; Meneghini, O.; Maggiora, R.; Guadamuz, S.; Hillairet, J.; Lancellotti, V.; Vecchi, G.
2012-01-01
This paper presents a self-consistent, integral-equation approach for the analysis of plasma-facing lower hybrid (LH) launchers; the geometry of the waveguide grill structure can be completely arbitrary, including the non-planar mouth of the grill. This work is based on the theoretical approach and code implementation of the TOPICA code, of which it shares the modular structure and constitutes the extension into the LH range. Code results are validated against the literature results and simulations from similar codes.
Kottmann, Jakob S; Höfener, Sebastian; Bischoff, Florian A
2015-12-21
In the present work, we report an efficient implementation of configuration interaction singles (CIS) excitation energies and oscillator strengths using the multi-resolution analysis (MRA) framework to address the basis-set convergence of excited state computations. In MRA (ground-state) orbitals, excited states are constructed adaptively guaranteeing an overall precision. Thus not only valence but also, in particular, low-lying Rydberg states can be computed with consistent quality at the basis set limit a priori, or without special treatments, which is demonstrated using a small test set of organic molecules, basis sets, and states. We find that the new implementation of MRA-CIS excitation energy calculations is competitive with conventional LCAO calculations when the basis-set limit of medium-sized molecules is sought, which requires large, diffuse basis sets. This becomes particularly important if accurate calculations of molecular electronic absorption spectra with respect to basis-set incompleteness are required, in which both valence as well as Rydberg excitations can contribute to the molecule's UV/VIS fingerprint. PMID:25913482
NASA Astrophysics Data System (ADS)
Zheng, Chang-Jun; Gao, Hai-Feng; Du, Lei; Chen, Hai-Bo; Zhang, Chuanzeng
2016-01-01
An accurate numerical solver is developed in this paper for eigenproblems governed by the Helmholtz equation and formulated through the boundary element method. A contour integral method is used to convert the nonlinear eigenproblem into an ordinary eigenproblem, so that eigenvalues can be extracted accurately by solving a set of standard boundary element systems of equations. In order to accelerate the solution procedure, the parameters affecting the accuracy and efficiency of the method are studied and two contour paths are compared. Moreover, a wideband fast multipole method is implemented with a block IDR(s) solver to reduce the overall solution cost of the boundary element systems of equations with multiple right-hand sides. The Burton-Miller formulation is employed to identify the fictitious eigenfrequencies of the interior acoustic problems with multiply connected domains. The actual effect of the Burton-Miller formulation on tackling the fictitious eigenfrequency problem is investigated and the optimal choice of the coupling parameter as α = i / k is confirmed through exterior sphere examples. Furthermore, the numerical eigenvalues obtained by the developed method are compared with the results obtained by the finite element method to show the accuracy and efficiency of the developed method.
Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes
NASA Technical Reports Server (NTRS)
Abrams, D.; Williams, C.
1999-01-01
We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo methods.
Integrated product definition representation for agile numerical control applications
Simons, W.R. Jr.; Brooks, S.L.; Kirk, W.J. III; Brown, C.W.
1994-11-01
Realization of agile manufacturing capabilities for a virtual enterprise requires the integration of technology, management, and work force into a coordinated, interdependent system. This paper is focused on technology enabling tools for agile manufacturing within a virtual enterprise specifically relating to Numerical Control (N/C) manufacturing activities and product definition requirements for these activities.
Monograph - The Numerical Integration of Ordinary Differential Equations.
ERIC Educational Resources Information Center
Hull, T. E.
The materials presented in this monograph are intended to be included in a course on ordinary differential equations at the upper division level in a college mathematics program. These materials provide an introduction to the numerical integration of ordinary differential equations, and they can be used to supplement a regular text on this…
The use of experimental bending tests to more accurate numerical description of TBC damage process
NASA Astrophysics Data System (ADS)
Sadowski, T.; Golewski, P.
2016-04-01
Thermal barrier coatings (TBCs) have been extensively used in aircraft engines to protect critical engine parts such as blades and combustion chambers, which are exposed to high temperatures and a corrosive environment. The blades of turbine engines are additionally exposed to high mechanical loads, created by the high rotational speed of the rotor (30 000 rot/min) and causing tensile and bending stresses. Therefore, experimental testing of coated samples is necessary in order to determine the strength properties of TBCs. Beam samples with dimensions 50×10×2 mm were used in these studies. The TBC system consisted of a 150 μm thick bond coat (NiCoCrAlY) and a 300 μm thick top coat (YSZ) made by the APS (air plasma spray) process. Samples were tested by a three-point bending test with various loads. After the bending tests, the samples were subjected to microscopic observation to determine the number of cracks and their depth. The above-mentioned results were used to build a numerical model and calibrate material data in the Abaqus program. A brittle cracking damage model was applied to the TBC layer, which allows elements to be removed once the failure criterion is reached. Surface-based cohesive behavior was used to model the delamination which may occur at the boundary between the bond coat and the top coat.
Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration
Masalma, Yahya; Jiao, Yu
2010-10-01
We implemented a scalable parallel quasi-Monte Carlo numerical high-dimensional integration for tera-scale data points. The implemented algorithm uses Sobol's quasi-random sequences to generate random samples. Sobol's sequence was used to avoid clustering effects in the generated random samples and to produce low-discrepancy samples which cover the entire integration domain. The performance of the algorithm was tested. The results obtained demonstrate the scalability and accuracy of the implemented algorithm. The implemented algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using the hybrid MPI/OpenMP programming model to improve the performance of the algorithm. If the mixed model is used, attention should be paid to scalability and accuracy.
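A full Sobol generator is too long to reproduce here, so the sketch below uses the simpler Halton low-discrepancy sequence as a stand-in to show how quasi-Monte Carlo integration covers the domain without the clustering of pseudo-random samples. All names are illustrative and the test integrand is arbitrary:

```python
def van_der_corput(n, base=2):
    """n-th element of the van der Corput low-discrepancy sequence:
    the digits of n in the given base, mirrored about the radix point."""
    q, denom = 0.0, 1.0
    while n:
        n, r = divmod(n, base)
        denom *= base
        q += r / denom
    return q

def halton(n, dim):
    """n-th point of the Halton sequence: a van der Corput sequence per
    dimension with distinct prime bases. Used here as a simple stand-in
    for the Sobol sequence named in the abstract."""
    primes = [2, 3, 5, 7, 11, 13]
    return [van_der_corput(n, primes[d]) for d in range(dim)]

def qmc_integrate(f, dim, n_samples):
    """Quasi-Monte Carlo estimate of the integral of f over [0,1]^dim."""
    return sum(f(halton(i + 1, dim)) for i in range(n_samples)) / n_samples

# integral of (x + y + z) over the unit cube is exactly 1.5
est = qmc_integrate(lambda x: sum(x), 3, 2000)
```

Because each sample is generated independently from its index, the outer loop parallelizes trivially, which is what makes the MPI/OpenMP decomposition described in the report natural.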
Numerical integration of ordinary differential equations on manifolds
NASA Astrophysics Data System (ADS)
Crouch, P. E.; Grossman, R.
1993-12-01
This paper is concerned with the problem of developing numerical integration algorithms for differential equations that, when viewed as equations in some Euclidean space, naturally evolve on some embedded submanifold. It is desired to construct algorithms whose iterates also evolve on the same manifold. These algorithms can therefore be viewed as integrating ordinary differential equations on manifolds. The basic method “decouples” the computation of flows on the submanifold from the numerical integration process. It is shown that two classes of single-step and multistep algorithms can be posed and analyzed theoretically, using the concept of “freezing” the coefficients of differential operators obtained from the defining vector field. Explicit third-order algorithms are derived, with additional equations augmenting those of their classical counterparts, obtained from “obstructions” defined by nonvanishing Lie brackets.
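A much simpler device than the paper's frozen-coefficient, Lie-bracket construction, but one that conveys the goal of keeping iterates on the manifold, is to follow each classical integration step with a retraction back onto the manifold. In the sketch below (all choices illustrative) the manifold is the unit circle and the retraction is plain normalization:

```python
import math

def rk4_step(f, y, h):
    """Standard fourth-order Runge-Kutta step for autonomous y' = f(y),
    with y given as a list of floats."""
    k1 = f(y)
    k2 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k1)])
    k3 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k2)])
    k4 = f([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h * (a + 2 * b + 2 * c + d) / 6
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def project_to_circle(y):
    """Retraction onto the unit circle S^1: a simple way (not the
    paper's construction) to keep iterates on the manifold."""
    r = math.hypot(y[0], y[1])
    return [y[0] / r, y[1] / r]

# rotation field f(y) = (-y2, y1): the exact flow stays on the circle,
# but plain RK4 iterates drift off it slightly at every step
f = lambda y: [-y[1], y[0]]
y = [1.0, 0.0]
for _ in range(1000):
    y = project_to_circle(rk4_step(f, y, 0.01))
# after t = 10 the iterate still has unit norm by construction
norm = math.hypot(y[0], y[1])
```

The algorithms in the paper are more ambitious: instead of projecting after the fact, they build the update out of flows that live on the submanifold, with correction terms coming from nonvanishing Lie brackets.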
Wakeful rest promotes the integration of spatial memories into accurate cognitive maps.
Craig, Michael; Dewar, Michaela; Harris, Mathew A; Della Sala, Sergio; Wolbers, Thomas
2016-02-01
Flexible spatial navigation, e.g. the ability to take novel shortcuts, is contingent upon accurate mental representations of environments-cognitive maps. These cognitive maps critically depend on hippocampal place cells. In rodents, place cells replay recently travelled routes, especially during periods of behavioural inactivity (sleep/wakeful rest). This neural replay is hypothesised to promote not only the consolidation of specific experiences, but also their wider integration, e.g. into accurate cognitive maps. In humans, rest promotes the consolidation of specific experiences, but the effect of rest on the wider integration of memories remained unknown. In the present study, we examined the hypothesis that cognitive map formation is supported by rest-related integration of new spatial memories. We predicted that if wakeful rest supports cognitive map formation, then rest should enhance knowledge of overarching spatial relations that were never experienced directly during recent navigation. Forty young participants learned a route through a virtual environment before either resting wakefully or engaging in an unrelated perceptual task for 10 min. Participants in the wakeful rest condition performed more accurately in a delayed cognitive map test, requiring the pointing to landmarks from a range of locations. Importantly, the benefit of rest could not be explained by active rehearsal, but can be attributed to the promotion of consolidation-related activity. These findings (i) resonate with the demonstration of hippocampal replay in rodents, and (ii) provide the first evidence that wakeful rest can improve the integration of new spatial memories in humans, a function that has, hitherto, been associated with sleep. PMID:26235141
Ensemble-type numerical uncertainty information from single model integrations
Rauser, Florian; Marotzke, Jochem; Korn, Peter
2015-07-01
We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are comparable in size to those of a stochastic physics ensemble.
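The multi-resolution ingredient above, estimating discretization error by integrating the same model briefly at two resolutions, can be sketched for a toy scalar problem and a first-order scheme (this is only the error-estimation idea, not the paper's goal-sensitivity or random-process machinery):

```python
import math

def euler(f, y0, t_end, n):
    # Integrate dy/dt = f(y) to t_end with n forward-Euler steps.
    y, h = y0, t_end / n
    for _ in range(n):
        y = y + h * f(y)
    return y

f = lambda y: -y                      # toy "model": dy/dt = -y, exact y(1) = exp(-1)
coarse = euler(f, 1.0, 1.0, 50)       # coarse resolution
fine = euler(f, 1.0, 1.0, 100)        # same model at double resolution
# For a first-order scheme, halving the step roughly halves the error, so
# the coarse-run discretization error is approximately 2 * (fine - coarse).
estimate = 2.0 * (fine - coarse)
true_error = math.exp(-1.0) - coarse
print(estimate, true_error)
```

For this linear problem the Richardson-type estimate matches the true coarse-grid error to within a few percent.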
Erguel, Ozguer; Guerel, Levent
2008-12-01
We present a novel stabilization procedure for accurate surface formulations of electromagnetic scattering problems involving three-dimensional dielectric objects with arbitrarily low contrasts. Conventional surface integral equations provide inaccurate results for the scattered fields when the contrast of the object is low, i.e., when the electromagnetic material parameters of the scatterer and the host medium are close to each other. We propose a stabilization procedure involving the extraction of nonradiating currents and rearrangement of the right-hand side of the equations using fictitious incident fields. Then, only the radiating currents are solved to calculate the scattered fields accurately. This technique can easily be applied to the existing implementations of conventional formulations, it requires negligible extra computational cost, and it is also appropriate for the solution of large problems with the multilevel fast multipole algorithm. We show that the stabilization leads to robust formulations that are valid even for the solutions of extremely low-contrast objects.
Stability of numerical integration techniques for transient rotor dynamics
NASA Technical Reports Server (NTRS)
Kascak, A. F.
1977-01-01
A finite element model of a rotor bearing system was analyzed to determine the stability limits of the forward, backward, and centered Euler; Runge-Kutta; Milne; and Adams numerical integration techniques. The analysis concludes that the highest frequency mode determines the maximum time step for a stable solution. Thus, the number of mass elements should be minimized. Increasing the damping can sometimes cause numerical instability. For a uniform shaft, with 10 mass elements, operating at approximately the first critical speed, the maximum time step for the Runge-Kutta, Milne, and Adams methods is that which corresponds to approximately 1 degree of shaft movement. This is independent of rotor dimensions.
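The conclusion that the highest-frequency mode sets the maximum stable time step can be demonstrated on a toy problem (a single undamped oscillator standing in for the rotor's highest mode; the frequency value is an assumption, and this is RK4 rather than the full method comparison of the paper):

```python
import numpy as np

def rk4_step(A, y, h):
    # One classical RK4 step for the linear system y' = A y.
    k1 = A @ y
    k2 = A @ (y + 0.5*h*k1)
    k3 = A @ (y + 0.5*h*k2)
    k4 = A @ (y + h*k3)
    return y + (h/6.0)*(k1 + 2.0*k2 + 2.0*k3 + k4)

omega = 100.0                                   # assumed highest-frequency mode (rad/s)
A = np.array([[0.0, 1.0], [-omega**2, 0.0]])    # undamped oscillator y'' = -omega^2 * y

norms = {}
for h in (2.8 / omega, 3.0 / omega):            # RK4 imaginary-axis stability limit: omega*h < 2*sqrt(2)
    y = np.array([1.0, 0.0])
    for _ in range(200):
        y = rk4_step(A, y, h)
    norms[h] = float(np.linalg.norm(y))
print(norms)  # bounded for omega*h = 2.8, divergent for omega*h = 3.0
```

Crossing the stability boundary by less than ten percent turns a decaying numerical solution into one that grows without bound, which is why minimizing the number of mass elements (and hence the highest mode) pays off.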
Bangalore, Sai Santosh; Wang, Jelai; Allison, David B.
2009-01-01
In the fields of genomics and high dimensional biology (HDB), massive multiple testing prompts the use of extremely small significance levels. Because tail areas of statistical distributions are needed for hypothesis testing, the accuracy of these areas is important to confidently make scientific judgments. Previous work on accuracy was primarily focused on evaluating professionally written statistical software, like SAS, on the Statistical Reference Datasets (StRD) provided by National Institute of Standards and Technology (NIST) and on the accuracy of tail areas in statistical distributions. The goal of this paper is to provide guidance to investigators, who are developing their own custom scientific software built upon numerical libraries written by others. Specifically, we evaluate the accuracy of small tail areas from cumulative distribution functions (CDF) of the Chi-square and t-distribution by comparing several open-source, free, or commercially licensed numerical libraries in Java, C, and R to widely accepted standards of comparison like ELV and DCDFLIB. In our evaluation, the C libraries and R functions are consistently accurate up to six significant digits. Amongst the evaluated Java libraries, Colt is most accurate. These languages and libraries are popular choices among programmers developing scientific software, so the results herein can be useful to programmers in choosing libraries for CDF accuracy. PMID:20161126
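Why direct tail-area (survival function) routines matter for the tiny significance levels discussed above can be seen with a stdlib-only sketch. For a chi-square variable with 1 degree of freedom, P(X > x) = erfc(sqrt(x/2)) exactly, so the naive "1 - CDF" route can be compared with a direct evaluation:

```python
import math

# Chi-square(1) upper-tail area: P(X > x) = erfc(sqrt(x/2)), since X = Z^2
# for a standard normal Z. Compare the naive route through the CDF with a
# direct tail evaluation.
x = 300.0
z = math.sqrt(x / 2.0)
naive = 1.0 - math.erf(z)   # "1 - CDF": erf(z) rounds to 1.0, so the tail is lost
direct = math.erfc(z)       # direct evaluation keeps the ~1e-67 tail area
print(naive, direct)
```

The naive route returns exactly zero while the direct route retains the full tail, which is the cancellation problem that makes library-level survival functions essential in HDB-scale multiple testing.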
Path Integrals and Exotic Options:. Methods and Numerical Results
NASA Astrophysics Data System (ADS)
Bormetti, G.; Montagna, G.; Moreni, N.; Nicrosini, O.
2005-09-01
In the framework of the Black-Scholes-Merton model of financial derivatives, a path integral approach to option pricing is presented. A general formula to price path dependent options on multidimensional and correlated underlying assets is obtained and implemented by means of various flexible and efficient algorithms. As an example, we detail the case of Asian call options. The numerical results are compared with those obtained with other procedures used in quantitative finance and found to be in good agreement. In particular, when pricing at the money (ATM) and out of the money (OTM) options, the path integral approach exhibits competitive performance.
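One of the standard benchmarks such path-integral results are compared against is a plain Monte Carlo price for the arithmetic-average Asian call under GBM dynamics. A minimal sketch (not the paper's algorithm; all parameter values are illustrative):

```python
import numpy as np

def asian_call_mc(s0, k, r, sigma, t, n_steps, n_paths, seed=0):
    """Plain Monte Carlo price of an arithmetic-average Asian call under
    Black-Scholes GBM dynamics. A benchmark sketch, not the path-integral
    algorithm of the paper."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    # Exact GBM log-price increments on each monitoring interval.
    log_s = np.log(s0) + np.cumsum((r - 0.5*sigma**2)*dt + sigma*np.sqrt(dt)*z, axis=1)
    avg = np.exp(log_s).mean(axis=1)          # arithmetic average over monitoring dates
    payoff = np.maximum(avg - k, 0.0)
    return float(np.exp(-r*t) * payoff.mean())

price = asian_call_mc(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0,
                      n_steps=252, n_paths=20000)
print(price)   # roughly 5.7 for these ATM parameters (MC estimate)
```

The statistical error of such an estimate shrinks only as the inverse square root of the number of paths, which is precisely where deterministic path-integral evaluations can become competitive.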
An accurate spline polynomial cubature formula for double integration with logarithmic singularity
NASA Astrophysics Data System (ADS)
Bichi, Sirajo Lawan; Eshkuvatov, Z. K.; Long, N. M. A. Nik; Bello, M. Y.
2016-06-01
This paper studies the logarithmic singularity problem J(ȳ) = ∬_∇ ζ(ȳ) log|ȳ − ȳ₀| dA, where ȳ = (α, β) and ȳ₀ = (α₀, β₀); the domain ∇ is the rectangle ∇ = [r1, r2] × [r3, r4], ȳ ∈ ∇ is an arbitrary point, and ȳ₀ ∈ ∇ is a fixed point. The given density function ζ(ȳ) is smooth on the rectangular domain ∇ and belongs to the function class C^{2,τ}(∇). A cubature formula (CF) for double integration with logarithmic singularities (LS) on the rectangle ∇ is constructed by applying the modified spline function DΓ(P) of type (0, 2). Tests with linear and absolute-value density functions ζ(ȳ) show that the constructed CF is highly accurate.
NASA Astrophysics Data System (ADS)
He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu
2014-11-01
Fast and precise 3D inspection systems are in great demand in modern manufacturing processes. At present, the available sensors each have their own pros and cons, and no single sensor can handle complex inspection tasks both accurately and efficiently. The prevailing solution is to integrate multiple sensors and exploit their complementary strengths. To obtain a holistic 3D profile, the data from the different sensors must be registered into a coherent coordinate system. However, some complex objects have thin-wall features, such as blades, for which the ICP registration method becomes unstable. It is therefore very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic-parameter calibration method for a blade measurement system that integrates different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated into a multi-axis motion platform, so the sensors can be moved optimally to any desired position on the object's surface. To simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation roughly aligns the artifact datasets obtained by the FPS and CHS without any manual operation or data pre-processing, and the Generalized Gauss-Markov model is then used to estimate the optimal transformation parameters. Experiments on a blade, in which several sampled patches are merged into one point cloud, verify the performance of the proposed method.
NASA Astrophysics Data System (ADS)
Stecca, Guglielmo; Siviglia, Annunziato; Blom, Astrid
2016-07-01
We present an accurate numerical approximation to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in one space dimension. Our solution procedure originates from the fully-unsteady matrix-vector formulation developed in [54]. The principal part of the problem is solved by an explicit Finite Volume upwind method of the path-conservative type, by which all the variables are updated simultaneously in a coupled fashion. The solution to the principal part is embedded into a splitting procedure for the treatment of frictional source terms. The numerical scheme is extended to second-order accuracy and includes a bookkeeping procedure for handling the evolution of size stratification in the substrate. We develop a concept of balancedness for the vertical mass flux between the substrate and active layer under bed degradation, which prevents the occurrence of non-physical oscillations in the grainsize distribution of the substrate. We suitably modify the numerical scheme to respect this principle. We finally verify the accuracy in our solution to the equations, and its ability to reproduce one-dimensional morphodynamics due to streamwise and vertical sorting, using three test cases. In detail, (i) we empirically assess the balancedness of vertical mass fluxes under degradation; (ii) we study the convergence to the analytical linearised solution for the propagation of infinitesimal-amplitude waves [54], which is here employed for the first time to assess a mixed-sediment model; (iii) we reproduce Ribberink's E8-E9 flume experiment [46].
Simple and Efficient Numerical Evaluation of Near-Hypersingular Integrals
NASA Technical Reports Server (NTRS)
Fink, Patrick W.; Wilton, Donald R.; Khayat, Michael A.
2007-01-01
Recently, significant progress has been made in the handling of singular and nearly-singular potential integrals that commonly arise in the Boundary Element Method (BEM). To facilitate object-oriented programming and handling of higher order basis functions, cancellation techniques are favored over techniques involving singularity subtraction. However, gradients of the Newton-type potentials, which produce hypersingular kernels, are also frequently required in BEM formulations. As is the case with the potentials, treatment of the near-hypersingular integrals has proven more challenging than treating the limiting case in which the observation point approaches the surface. Historically, numerical evaluation of these near-hypersingularities has often involved a two-step procedure: a singularity subtraction to reduce the order of the singularity, followed by a boundary contour integral evaluation of the extracted part. Since this evaluation necessarily links basis function, Green's function, and the integration domain (element shape), the approach fits poorly with object-oriented programming concepts. Thus, there is a need for cancellation-type techniques for efficient numerical evaluation of the gradient of the potential. Progress in the development of efficient cancellation-type procedures for the gradient potentials was recently presented. To the extent possible, a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. However, since the gradient kernel involves singularities of different orders, we also require that the transformation leaves remaining terms that are analytic. The terms "normal" and "tangential" are used herein with reference to the source element. Also, since computational formulations often involve the numerical evaluation of both potentials and their gradients, it is highly desirable that a single integration procedure efficiently handles both.
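The Jacobian-cancels-the-singularity idea has a classical member, the Duffy transformation, which can be shown on a toy weakly singular integral (a sketch of the cancellation principle only, not the gradient-kernel procedures of the paper):

```python
import math

def duffy_inv_r(n):
    """Integrate 1/r, r = sqrt(x^2 + y^2), over the triangle with vertices
    (0,0), (1,0), (1,1); the integrand is singular at the vertex (0,0).
    Under the Duffy map x = u, y = u*v the Jacobian u cancels the 1/r
    singularity, leaving the smooth integrand 1/sqrt(1 + v^2) on the unit
    square, which an ordinary midpoint rule handles easily."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):              # u direction (transformed integrand is constant in u)
        for j in range(n):          # v direction
            v = (j + 0.5) * h
            total += h * h / math.sqrt(1.0 + v * v)
    return total

exact = math.asinh(1.0)             # = ln(1 + sqrt(2)) ≈ 0.88137
print(duffy_inv_r(200), exact)
```

Because the transformed integrand is analytic, ordinary quadrature converges at its full rate; the challenge the paper addresses is doing the same when the kernel mixes singularities of different orders.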
A two-dimensional depth-integrated non-hydrostatic numerical model for nearshore wave propagation
NASA Astrophysics Data System (ADS)
Lu, Xinhua; Dong, Bingjiang; Mao, Bing; Zhang, Xiaofeng
2015-12-01
In this study, we develop a shallow-water depth-integrated non-hydrostatic numerical model (SNH model) using a hybrid finite-volume and finite-difference method. Numerical discretization is performed using the non-incremental pressure-correction method on a collocated grid. We demonstrate that an extension can easily be made from an existing finite-volume method and collocated-grid based hydrostatic shallow-water equations (SWE) model to a non-hydrostatic model. A series of benchmark tests are used to validate the proposed numerical model. Our results demonstrate that the proposed model is robust and well-balanced, and it captures the wet-dry fronts accurately. A comparison between the SNH and SWE models indicates the importance of considering the wave dispersion effect in simulations when the wave amplitude to water depth ratio is large.
Integrating Numerical Groundwater Modeling Results With Geographic Information Systems
NASA Astrophysics Data System (ADS)
Witkowski, M. S.; Robinson, B. A.; Linger, S. P.
2001-12-01
Many different types of data are used to create numerical models of flow and transport of groundwater in the vadose zone. Results from water balance studies, infiltration models, hydrologic properties, and digital elevation models (DEMs) are examples of such data. Because input data come in a variety of formats, for consistency the data need to be assembled in a coherent fashion on a single platform. Through the use of a geographic information system (GIS), all data sources can effectively be integrated on one platform to store, retrieve, query, and display data. In our vadose zone modeling studies in support of Los Alamos National Laboratory's Environmental Restoration Project, we employ a GIS composed of a RAID storage device, an Oracle database, ESRI's spatial database engine (SDE), ArcView GIS, and custom GIS tools for three-dimensional (3D) analysis. We store traditional GIS data, such as contours, historical building footprints, and study area locations, as points, lines, and polygons with attributes. Numerical flow and transport model results from the Finite Element Heat and Mass Transfer Code (FEHM) are stored as points with attributes, such as fluid saturation, or pressure, or contaminant concentration at a given location. We overlay traditional types of GIS data with numerical model results, thereby allowing us to better build conceptual models and perform spatial analyses. We have also developed specialized analysis tools to assist in the data and model analysis process. This approach provides an integrated framework for performing tasks such as comparing the model to data and understanding the relationship of model predictions to existing contaminant source locations and water supply wells. Our process of integrating GIS and numerical modeling results allows us to answer a wide variety of questions about our conceptual model design: - Which set of locations should be identified as contaminant sources based on known historical building operations
Accurate integral equation theory for the central force model of liquid water and ionic solutions
NASA Astrophysics Data System (ADS)
Ichiye, Toshiko; Haymet, A. D. J.
1988-10-01
The atom-atom pair correlation functions and thermodynamics of the central force model of water, introduced by Lemberg, Stillinger, and Rahman, have been calculated accurately by an integral equation method which incorporates two new developments. First, a rapid new scheme has been used to solve the Ornstein-Zernike equation. This scheme combines the renormalization methods of Allnatt, and Rossky and Friedman with an extension of the trigonometric basis-set solution of Labik and co-workers. Second, by adding approximate ``bridge'' functions to the hypernetted-chain (HNC) integral equation, we have obtained predictions for liquid water in which the hydrogen bond length and number are in good agreement with ``exact'' computer simulations of the same model force laws. In addition, for dilute ionic solutions, the ion-oxygen and ion-hydrogen coordination numbers display both the physically correct stoichiometry and good agreement with earlier simulations. These results represent a measurable improvement over both a previous HNC solution of the central force model and the ex-RISM integral equation solutions for the TIPS and other rigid molecule models of water.
Development of highly accurate approximate scheme for computing the charge transfer integral.
Pershin, Anton; Szalay, Péter G
2015-08-21
The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using high-level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both the energy-split-in-dimer and fragment-charge-difference methods are equivalent to the exact formulation for symmetrical displacements, they are less efficient when describing the transfer integral along the asymmetric alteration coordinate. Since the "exact" scheme was found computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the "exact" calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature. PMID:26298117
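The symmetric-versus-asymmetric behaviour of the energy-split-in-dimer (ESD) estimate can be illustrated with the matrix mechanics of a two-site model (illustrative site energies and coupling, not the paper's EOM-CC treatment):

```python
import numpy as np

# Two-site fragment-orbital model: site energies e1, e2 and a known transfer
# integral t (values illustrative, in eV).
e1, e2, t = -6.0, -6.0, 0.15
H_sym = np.array([[e1, t], [t, e2]])

# Symmetric dimer: half the adiabatic energy splitting recovers t exactly.
E = np.linalg.eigvalsh(H_sym)
t_esd = 0.5 * (E[1] - E[0])

# Asymmetric alteration (site energies pushed apart by 2*delta): the
# splitting now gives sqrt(t^2 + delta^2) > t, i.e., ESD overestimates.
delta = 0.1
H_asym = np.array([[e1 - delta, t], [t, e2 + delta]])
E2 = np.linalg.eigvalsh(H_asym)
t_esd_asym = 0.5 * (E2[1] - E2[0])
print(t_esd, t_esd_asym)   # 0.15 and sqrt(0.15^2 + 0.1^2) ≈ 0.18
```

This is exactly the failure mode the abstract describes: the splitting-based estimate is exact only when the site energies remain degenerate.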
Comparison of integrated numerical experiments with accelerator and FEL experiments
Thode, L.E.; Carlsten, B.E.; Chan, K.C.D.; Cooper, R.K.; Elliott, J.C.; Gitomer, S.J.; Goldstein, J.C.; Jones, M.E.; McVey, B.D.; Schmitt, M.J.; Takeda, H.; Tokar, R.L.; Wang, T.S.; Young, L.M.
1991-01-01
Even at the conceptual level the strong coupling between the laser subsystem elements, such as the accelerator, wiggler, optics, and control, greatly complicates the understanding and design of an FEL. Given the requirements for a high-performance FEL, the coupling between the laser subsystems must be included in the design approach. To address the subsystem coupling the concept of an integrated numerical experiment (INEX) has been implemented. Unique features of the INEX approach are consistency and numerical equivalents of experimental diagnostics. The equivalent numerical diagnostics mitigate the major problem of misinterpretation that often occurs when theoretical and experimental data are compared. A complete INEX model has been applied to the 10{mu}m high-extraction-efficiency experiment at Los Alamos and the 0.6-{mu}m Burst Mode experiment at Boeing Aerospace. In addition, various subsets of the INEX model have been compared with a number of other experiments. Overall, the agreement between INEX and the experiments is very good. With the INEX approach, it now appears possible to design high-performance FELs for numerous applications. The first full-scale test of the INEX approach is the Los Alamos HIBAF experiment. The INEX concept, implementation, and validation with experiments are discussed. 28 refs., 13 figs., 1 tab.
An Improved Numerical Integration Method for Springback Predictions
NASA Astrophysics Data System (ADS)
Ibrahim, R.; Smith, L. M.; Golovashchenko, Sergey F.
2011-08-01
In this investigation, the focus is on the springback of steel sheets in V-die air bending. A numerical integration algorithm presented rigorously in [1] for predicting springback in air bending was fully replicated and confirmed. Alterations and extensions to the algorithm are proposed here. The altered approach to solving the moment equation numerically yielded springback values much closer to the trend of the experimental data. Although the investigation was extended to use a more realistic work-hardening model, the differences in the springback values obtained with the two hardening models were almost negligible. The algorithm was also extended to thin sheets down to 0.8 mm. Results show that this extension is valid, as verified by FEA and other published experiments on TRIP steel sheets.
INEX (integrated numerical experiment) simulations of the Boeing FEL system
Tokar, R.L.; Young, L.M.; Lumpkin, A.H.; McVey, B.D.; Thode, L.E.; Bender, S.C.; Chan, K.C.D.; Yeremian, A.D.; Dowell, D.H.; Lowrey, A.R.
1989-01-01
The INEX (integrated numerical experiment) numerical model is applied to the 0.6 {mu}m FEL oscillator at Boeing Aerospace and Electronics Company in Seattle, WA. This system consists of a 110 MeV L-band rf linac, a beam transport line from the accelerator to the entrance of the wiggler, the 5.0 meter THUNDER variable taper wiggler, and a near concentric two mirror optical oscillator. Many aspects of the model for the electron beam accelerator and transport line agree with experimental measurements. Predictions for lasing performance are compared with data obtained in May and June 1989 using a mildly tapered wiggler. We obtain good agreement with the achieved extraction efficiency, while 1D pulse simulations reproduce the observed sideband instability. 15 refs., 11 figs.
Singularity Preserving Numerical Methods for Boundary Integral Equations
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki (Principal Investigator)
1996-01-01
In the past twelve months (May 8, 1995 - May 8, 1996), under the cooperative agreement with Division of Multidisciplinary Optimization at NASA Langley, we have accomplished the following five projects: a note on the finite element method with singular basis functions; numerical quadrature for weakly singular integrals; superconvergence of degenerate kernel method; superconvergence of the iterated collocation method for Hammerstein equations; and singularity preserving Galerkin method for Hammerstein equations with logarithmic kernel. This final report consists of five papers describing these projects. Each project is preceded by a brief abstract.
Numerical integration techniques for curved-element discretizations of molecule-solvent interfaces.
Bardhan, Jaydeep P; Altman, Michael D; Willis, David J; Lippow, Shaun M; Tidor, Bruce; White, Jacob K
2007-07-01
Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, here methods were developed to model several important surface formulations using exact surface discretizations. Following and refining Zauhar's work [J. Comput.-Aided Mol. Des. 9, 149 (1995)], two classes of curved elements were defined that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. Numerical integration techniques are presented that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, a set of calculations are presented that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. The extra accuracy is attributed to the exact representation of the solute-solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved
Trigonometrically fitted two step hybrid method for the numerical integration of second order IVPs
NASA Astrophysics Data System (ADS)
Monovasilis, Th.; Kalogiratou, Z.; Simos, T. E.
2016-06-01
In this work we consider the numerical integration of second-order ODEs in which the first derivative does not appear. We construct trigonometrically fitted two-step hybrid methods. We apply the new methods to the numerical integration of several test problems.
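A classical two-step method for exactly this class of problems (y'' = f(x, y), first derivative absent) is the Numerov scheme, shown here as a stand-in sketch; the trigonometrically fitted hybrid methods of the paper refine this idea for oscillatory solutions:

```python
import math

def numerov_linear(g, x0, y0, y1, h, n):
    """Two-step Numerov scheme for the linear problem y'' = g(x)*y, a
    classical fourth-order two-step method; an illustration, not the
    trigonometrically fitted scheme itself."""
    ys = [y0, y1]
    c = h * h / 12.0
    for i in range(1, n):
        gm, gc, gp = g(x0 + (i - 1)*h), g(x0 + i*h), g(x0 + (i + 1)*h)
        y_next = (2.0*ys[i]*(1.0 + 5.0*c*gc) - ys[i - 1]*(1.0 - c*gm)) / (1.0 - c*gp)
        ys.append(y_next)
    return ys

# Test problem y'' = -y (first derivative absent), exact solution cos(x).
h, n = 0.01, 100
ys = numerov_linear(lambda x: -1.0, 0.0, 1.0, math.cos(h), h, n)
err = abs(ys[-1] - math.cos(n * h))
print(err)   # high-order accuracy: error far below 1e-8 at x = 1
```

Like the hybrid methods of the paper, the scheme needs two starting values; trigonometric fitting additionally tunes the coefficients so that a chosen oscillation frequency is integrated exactly.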
Accurate quantification of diffusion and binding kinetics of non-integral membrane proteins by FRAP.
Berkovich, Ronen; Wolfenson, Haguy; Eisenberg, Sharon; Ehrlich, Marcelo; Weiss, Matthias; Klafter, Joseph; Henis, Yoav I; Urbakh, Michael
2011-11-01
Non-integral membrane proteins frequently act as transduction hubs in vital signaling pathways initiated at the plasma membrane (PM). Their biological activity depends on dynamic interactions with the PM, which are governed by their lateral and cytoplasmic diffusion and membrane binding/unbinding kinetics. Accurate quantification of the multiple kinetic parameters characterizing their membrane interaction dynamics has been challenging. Despite a fair number of approximate fitting functions for analyzing fluorescence recovery after photobleaching (FRAP) data, no approach was able to cope with the full diffusion-exchange problem. Here, we present an exact solution and matlab fitting programs for FRAP with a stationary Gaussian laser beam, allowing simultaneous determination of the membrane (un)binding rates and the diffusion coefficients. To reduce the number of fitting parameters, the cytoplasmic diffusion coefficient is determined separately. Notably, our equations include the dependence of the exchange kinetics on the distribution of the measured protein between the PM and the cytoplasm, enabling the derivation of both k(on) and k(off) without prior assumptions. After validating the fitting function by computer simulations, we confirm the applicability of our approach to live-cell data by monitoring the dynamics of GFP-N-Ras mutants under conditions with different contributions of lateral diffusion and exchange to the FRAP kinetics. PMID:21810156
Comparison of four stable numerical methods for Abel's integral equation
NASA Technical Reports Server (NTRS)
Murio, Diego A.; Mejia, Carlos E.
1991-01-01
The 3-D image reconstruction from cone-beam projections in computerized tomography leads naturally, in the case of radial symmetry, to the study of Abel-type integral equations. If the experimental information is obtained from measured data, on a discrete set of points, special methods are needed in order to restore continuity with respect to the data. A new combined Regularized-Adjoint-Conjugate Gradient algorithm, together with two different implementations of the Mollification Method (one based on a data filtering technique and the other on the mollification of the kernel function) and a regularization by truncation method (initially proposed for 2-D ray sample schemes and more recently extended to 3-D cone-beam image reconstruction) are extensively tested and compared for accuracy and numerical stability as functions of the level of noise in the data.
Quantitative evaluation of numerical integration schemes for Lagrangian particle dispersion models
NASA Astrophysics Data System (ADS)
Ramli, Huda Mohd.; Esler, J. Gavin
2016-07-01
A rigorous methodology for the evaluation of integration schemes for Lagrangian particle dispersion models (LPDMs) is presented. A series of one-dimensional test problems are introduced, for which the Fokker-Planck equation is solved numerically using a finite-difference discretisation in physical space and a Hermite function expansion in velocity space. Numerical convergence errors in the Fokker-Planck equation solutions are shown to be much less than the statistical error associated with a practical-sized ensemble (N = 106) of LPDM solutions; hence, the former can be used to validate the latter. The test problems are then used to evaluate commonly used LPDM integration schemes. The results allow for optimal time-step selection for each scheme, given a required level of accuracy. The following recommendations are made for use in operational models. First, if computational constraints require the use of moderate to long time steps, it is more accurate to solve the random displacement model approximation to the LPDM rather than use existing schemes designed for long time steps. Second, useful gains in numerical accuracy can be obtained, at moderate additional computational cost, by using the relatively simple "small-noise" scheme of Honeycutt.
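The random displacement model recommended above for long time steps is simply a stochastic differential equation for particle position; a minimal Euler-Maruyama sketch with a constant diffusivity (a simplifying assumption; operational LPDMs use spatially varying turbulence statistics) reproduces the expected dispersion law:

```python
import numpy as np

def random_displacement_model(kappa, t_end, dt, n_particles, seed=0):
    """Euler-Maruyama integration of the random displacement model
    dX = sqrt(2*kappa) dW: the diffusion-limit approximation to an LPDM,
    sketched with a constant diffusivity kappa for simplicity."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_particles)
    for _ in range(int(round(t_end / dt))):
        x += np.sqrt(2.0 * kappa * dt) * rng.standard_normal(n_particles)
    return x

x = random_displacement_model(kappa=1.0, t_end=1.0, dt=0.01, n_particles=100_000)
print(x.var())   # ensemble variance approaches 2*kappa*t = 2.0
```

With N = 10^5 particles the sampling error in the variance is below one percent, echoing the paper's point that statistical ensemble error, not time-stepping error, often dominates in practice.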
NASA Technical Reports Server (NTRS)
Hu, Fang Q.
1994-01-01
It is known that the exact analytic solutions of wave scattering by a circular cylinder, when they exist, are not in a closed form but in infinite series which converge slowly for high-frequency waves. In this paper, we present a fast numerical solution for the scattering problem in which the boundary integral equations, reformulated from the Helmholtz equation, are solved using a Fourier spectral method. It is shown that the special geometry considered here allows the implementation of the spectral method to be simple and very efficient. The present method differs from previous approaches in that the singularities of the integral kernels are removed and dealt with accurately. The proposed method preserves the spectral accuracy and is shown to have an exponential rate of convergence. Aspects of efficient implementation using FFT are discussed. Moreover, the boundary integral equations of combined single and double-layer representation are used in the present paper. This ensures the uniqueness of the numerical solution for the scattering problem at all frequencies. Although a strongly singular kernel is encountered for the Neumann boundary conditions, we show that the hypersingularity can be handled easily in the spectral method. Numerical examples that demonstrate the validity of the method are also presented.
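The paper's kernels and singularity treatment are specific to the cylinder problem and are not reproduced here; the toy sketch below only illustrates why a Fourier spectral method is natural on a circular boundary: a discretized second-kind integral equation with a convolutional kernel is a circulant system, diagonalized by the FFT and hence solvable in O(N log N).

```python
import numpy as np

def solve_periodic_second_kind(k, f):
    """Solve u_i + (1/N) * sum_j k[(i-j) mod N] * u_j = f_i on a periodic grid.
    The convolution operator is circulant, hence diagonal in Fourier space."""
    lam = np.fft.fft(k) / len(k)   # eigenvalues of the circulant operator
    return np.real(np.fft.ifft(np.fft.fft(f) / (1.0 + lam)))
```

For a smooth periodic kernel and right-hand side, the trapezoidal discretization underlying this sum converges exponentially, which is the source of the spectral accuracy claimed in the abstract.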
NASA Astrophysics Data System (ADS)
Wang, Shyh-Wei; Guo, Shuang-Fa
1998-01-01
New techniques for more accurate and efficient simulation of ion implantation by stepwise numerical integration of the Boltzmann transport equation (BTE) have been developed in this work. Instead of using a uniform energy grid, a non-uniform grid is employed to construct the momentum distribution matrix. A more accurate simulation result is obtained for heavy ions implanted into silicon. At the same time, rather than utilizing the conventional Lindhard, Nielsen and Schoitt (LNS) approximation, an exact evaluation of the integrals involving the nuclear differential scattering cross-section (dσn = 2πp dp) is proposed. The impact parameter p as a function of ion energy E and scattering angle φ is obtained by solving the magic formula iteratively, and an interpolation technique is devised for use during the simulation process. The simulation using the exact evaluation is about 3.5 times faster than that using the Littmark and Ziegler (LZ) spline-fitted cross-section function for phosphorus implantation into silicon.
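The abstract does not specify how the non-uniform grid is constructed; a common choice for this situation, shown as a hypothetical sketch, is geometric spacing, which concentrates grid points at low energies where the nuclear scattering cross-section varies most rapidly for heavy ions.

```python
import numpy as np

def geometric_energy_grid(e_min, e_max, n):
    """Non-uniform (geometrically spaced) energy grid: spacing grows with
    energy, so low energies are resolved more finely than high energies."""
    return e_min * (e_max / e_min) ** np.linspace(0.0, 1.0, n)
```

A uniform grid with the same low-energy resolution would need far more points, which is the efficiency argument for non-uniform grids in BTE integration.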
Carbon Dioxide Dispersion in the Combustion Integrated Rack Simulated Numerically
NASA Technical Reports Server (NTRS)
Wu, Ming-Shin; Ruff, Gary A.
2004-01-01
When discharged into an International Space Station (ISS) payload rack, a carbon dioxide (CO2) portable fire extinguisher (PFE) must extinguish a fire by decreasing the oxygen in the rack by 50 percent within 60 sec. The length of time needed for this oxygen reduction throughout the rack and the length of time that the CO2 concentration remains high enough to prevent the fire from reigniting are important when determining the effectiveness of the response and postfire procedures. Furthermore, in the absence of gravity, the local flow velocity can make the difference between a fire that spreads rapidly and one that self-extinguishes after ignition. A numerical simulation of the discharge of CO2 from the PFE into the Combustion Integrated Rack (CIR) in microgravity was performed to obtain the local velocity and CO2 concentration. The complicated flow field around the PFE nozzle exits was modeled by sources of equivalent mass and momentum flux at a location downstream of the nozzle. The time for the concentration of CO2 to reach a level that would extinguish a fire anywhere in the rack was determined using the Fire Dynamics Simulator (FDS), a computational fluid dynamics code developed by the National Institute of Standards and Technology specifically to evaluate fire development and smoke transport. The simulation shows that CO2, as well as any smoke and combustion gases produced by a fire, would be discharged into the ISS cabin through the resource utility panel at the bottom of the rack. These simulations will be validated by comparing the results with velocity and CO2 concentration measurements obtained during the fire suppression system verification tests conducted on the CIR in March 2003. Once these numerical simulations are validated, portions of the ISS labs and living areas will be modeled to determine the local flow conditions before, during, and after a fire event. These simulations can yield specific information about how long it takes for smoke and
Black shale weathering: An integrated field and numerical modeling study
NASA Astrophysics Data System (ADS)
Bolton, E. W.; Wildman, R. A., Jr.; Berner, R. A.; Eckert, J. O., Jr.; Petsch, S. T.; Mok, U.; Evans, B.
2003-04-01
We present an integrated study of black shale weathering in a near surface environment. Implications of this study contribute to our understanding of organic matter oxidation in uplifted sediments, along with erosion and reburial of ancient unoxidized organic matter, as major controls on atmospheric oxygen levels over geologic time. The field study used to launch the modeling effort is based on core samples from central-eastern Kentucky near Clay City (Late Devonian New Albany/Ohio Shale), where the strata are essentially horizontal. Samples from various depth intervals (up to 12 m depth) were analyzed for texture (SEM images), porosity fraction (0.02 to 0.1), and horizontal and vertical permeability (water and air permeabilities differ due to the fine-grained nature of the sediments, but are on the order of 0.01 and 1 millidarcy, respectively). Chemical analyses were also performed for percent C, N, and S, and basic mineralogy was determined (clays, quartz, and pyrite, in addition to organic matter). The samples contained from 2 to 15 percent ancient (non-modern soil) organic matter. These results were used in the creation of a numerical model for kinetically controlled oxidation of the organic matter within the shale (based on kinetics from Chang and Berner, 1999). The one-dimensional model includes erosion, oxygen diffusion in the partially saturated vadose zone, as well as water percolation and solute transport. This study extends the studies of Petsch (2000) and the weathering component of Lasaga and Ohmoto (2002) to include more reactions (e.g., pyrite oxidation to sulfuric acid and weathering of silicates due to low pH) and to resolve the near-surface boundary layer. The model provides a convenient means of exploring the influence of variable rates of erosion, oxygen level, and rainfall, as well as physical and chemical characteristics of the shale, on organic matter oxidation.
Tobing, L Y M; Tjahjana, L; Darmawan, S; Zhang, D H
2012-02-27
Coupling-induced effects are higher-order effects inherent in waveguide evanescent coupling that are known to spectrally distort the optical performance of integrated optics devices formed by coupled resonators. We present both numerical and experimental studies of coupling-induced phase shift in various basic integrated optics devices. Rigorous finite difference time domain simulations and systematic experimental characterizations of different basic structures were conducted for more accurate parameter extraction, where it can be observed that the coupling-induced wave vector may change sign with increasing gap separation. The devices characterized in this work were fabricated by CMOS-process 193 nm Deep UV (DUV) lithography in silicon-on-insulator (SOI) technology. PMID:22418385
Data Integrity: Why Aren't the Data Accurate? AIR 1989 Annual Forum Paper.
ERIC Educational Resources Information Center
Gose, Frank J.
The accuracy and reliability aspects of data integrity are discussed, with an emphasis on the need for consistency in responsibility and authority. A variety of ways in which data integrity can be compromised are discussed. The following sources of data corruption are described, and the ease or difficulty of identification and suggested actions…
Integrating Numerical Computation into the Modeling Instruction Curriculum
ERIC Educational Resources Information Center
Caballero, Marcos D.; Burk, John B.; Aiken, John M.; Thoms, Brian D.; Douglas, Scott S.; Scanlon, Erin M.; Schatz, Michael F.
2014-01-01
Numerical computation (the use of a computer to solve, simulate, or visualize a physical problem) has fundamentally changed the way scientific research is done. Systems that are too difficult to solve in closed form are probed using computation. Experiments that are impossible to perform in the laboratory are studied numerically. Consequently, in…
Applying integrals of motion to the numerical solution of differential equations
NASA Technical Reports Server (NTRS)
Jezewski, D. J.
1979-01-01
A method is developed for using the integrals of systems of nonlinear, ordinary differential equations in a numerical integration process to control the local errors in these integrals and reduce the global errors of the solution. The method is general and can be applied to either scalar or vector integrals. A number of example problems, with accompanying numerical results, are used to verify the analysis and support the conjecture of global error reduction.
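The paper's method is more general than this, but the core idea of using a known integral to control local error can be sketched for the harmonic oscillator x'' = -x, whose energy E = (x^2 + v^2)/2 is an integral of motion: after each step the numerical state is projected back onto the correct energy level set (the projection approach is an illustrative stand-in, not necessarily the paper's exact scheme).

```python
import numpy as np

def rk4_step(y, dt, f):
    """Classical fourth-order Runge-Kutta step."""
    k1 = f(y); k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2); k4 = f(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def f(y):                       # harmonic oscillator: x'' = -x
    return np.array([y[1], -y[0]])

def energy(y):                  # integral of motion E = (x^2 + v^2)/2
    return 0.5 * (y[0] ** 2 + y[1] ** 2)

def project_energy(y, e0):
    """Control the local error in the integral by rescaling the state
    back onto the E = e0 level set after each step."""
    return y * np.sqrt(e0 / energy(y))

y = np.array([1.0, 0.0])
e0 = energy(y)
for _ in range(10000):
    y = project_energy(rk4_step(y, 0.1, f), e0)
```

Without the projection, the energy of the RK4 solution drifts slowly; with it, the integral is conserved to rounding error over arbitrarily long integrations.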
Numerical Integration with GeoGebra in High School
ERIC Educational Resources Information Center
Herceg, Dorde; Herceg, Dragoslav
2010-01-01
The concept of definite integral is almost always introduced as the Riemann integral, which is defined in terms of the Riemann sum, and its geometric interpretation. This definition is hard to understand for high school students. With the aid of mathematical software for visualisation and computation of approximate integrals, the notion of…
A simple and accurate algorithm for path integral molecular dynamics with the Langevin thermostat.
Liu, Jian; Li, Dezhang; Liu, Xinzijian
2016-07-14
We introduce a novel simple algorithm for thermostatting path integral molecular dynamics (PIMD) with the Langevin equation. The staging transformation of path integral beads is employed for demonstration. The optimum friction coefficients for the staging modes in the free particle limit are used for all systems. In comparison to the path integral Langevin equation thermostat, the new algorithm exploits a different order of splitting for the phase space propagator associated to the Langevin equation. While the error analysis is made for both algorithms, they are also employed in the PIMD simulations of three realistic systems (the H2O molecule, liquid para-hydrogen, and liquid water) for comparison. It is shown that the new thermostat increases the time interval of PIMD by a factor of 4-6 or more for achieving the same accuracy. In addition, the supplementary material shows the error analysis made for the algorithms when the normal-mode transformation of path integral beads is used. PMID:27421393
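The paper's specific splitting is not reproduced in the abstract; as an illustration of how the order of splitting the Langevin phase-space propagator defines a thermostat, here is the widely used "BAOAB" ordering for a single 1-D degree of freedom with unit mass (a generic sketch, not the authors' algorithm).

```python
import numpy as np

def baoab_step(x, p, dt, gamma, beta, force, rng):
    """One Langevin step with the B-A-O-A-B splitting (unit mass):
    B = half momentum kick, A = half position drift, O = exact
    Ornstein-Uhlenbeck update of the momentum."""
    p = p + 0.5 * dt * force(x)                                   # B
    x = x + 0.5 * dt * p                                          # A
    c = np.exp(-gamma * dt)                                       # O
    p = c * p + np.sqrt((1.0 - c ** 2) / beta) * rng.normal(size=np.shape(p))
    x = x + 0.5 * dt * p                                          # A
    p = p + 0.5 * dt * force(x)                                   # B
    return x, p
```

Reordering the same B, A, and O sub-steps changes the sampling error at finite time step even though every ordering is exact in the limit dt → 0, which is why the choice of splitting matters for the usable time interval.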
NASA Astrophysics Data System (ADS)
Moore, Christopher; Hopkins, Matthew; Moore, Stan; Boerner, Jeremiah; Cartwright, Keith
2015-09-01
Simulation of breakdown is important for understanding and designing a variety of applications such as mitigating undesirable discharge events. Such simulations need to be accurate through early time arc initiation to late time stable arc behavior. Here we examine constraints on the timestep and mesh size required for arc simulations using the particle-in-cell (PIC) method with direct simulation Monte Carlo (DSMC) collisions. Accurate simulation of electron avalanche across a fixed voltage drop and constant neutral density (reduced field of 1000 Td) was found to require a timestep ~ 1/100 of the mean time between collisions and a mesh size ~ 1/25 the mean free path. These constraints are much smaller than the typical PIC-DSMC requirements for timestep and mesh size. Both constraints are related to the fact that charged particles are accelerated by the external field. Thus gradients in the electron energy distribution function can exist at scales smaller than the mean free path, and these must be resolved by the mesh size for accurate collision rates. Additionally, the timestep must be small enough that the particle energy change due to the fields is small, in order to capture gradients in the cross sections versus energy. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
Implicit numerical integration for periodic solutions of autonomous nonlinear systems
NASA Technical Reports Server (NTRS)
Thurston, G. A.
1982-01-01
A change of variables that stabilizes numerical computations for periodic solutions of autonomous systems is derived. Computation of the period is decoupled from the rest of the problem for conservative systems of any order and for any second-order system. Numerical results are included for a second-order conservative system under a suddenly applied constant load. Near the critical load for the system, a small increment in load amplitude results in a large increase in amplitude of the response.
Multidimensional Genome-wide Analyses Show Accurate FVIII Integration by ZFN in Primary Human Cells
Sivalingam, Jaichandran; Kenanov, Dimitar; Han, Hao; Nirmal, Ajit Johnson; Ng, Wai Har; Lee, Sze Sing; Masilamani, Jeyakumar; Phan, Toan Thang; Maurer-Stroh, Sebastian; Kon, Oi Lian
2016-01-01
Costly coagulation factor VIII (FVIII) replacement therapy is a barrier to optimal clinical management of hemophilia A. Therapy using FVIII-secreting autologous primary cells is potentially efficacious and more affordable. Zinc finger nucleases (ZFN) mediate transgene integration into the AAVS1 locus but comprehensive evaluation of off-target genome effects is currently lacking. In light of serious adverse effects in clinical trials which employed genome-integrating viral vectors, this study evaluated potential genotoxicity of ZFN-mediated transgenesis using different techniques. We employed deep sequencing of predicted off-target sites, copy number analysis, whole-genome sequencing, and RNA-seq in primary human umbilical cord-lining epithelial cells (CLECs) with AAVS1 ZFN-mediated FVIII transgene integration. We combined molecular features to enhance the accuracy and activity of ZFN-mediated transgenesis. Our data showed a low frequency of ZFN-associated indels, no detectable off-target transgene integrations or chromosomal rearrangements. ZFN-modified CLECs had very few dysregulated transcripts and no evidence of activated oncogenic pathways. We also showed AAVS1 ZFN activity and durable FVIII transgene secretion in primary human dermal fibroblasts, bone marrow- and adipose tissue-derived stromal cells. Our study suggests that, with close attention to the molecular design of genome-modifying constructs, AAVS1 ZFN-mediated FVIII integration in several primary human cell types may be safe and efficacious. PMID:26689265
Park, Seongchong; Hong, Kee-Suk; Kim, Wan-Seop
2016-03-20
This work introduces a switched integration amplifier (SIA)-based photocurrent meter for femtoampere (fA)-level current measurement, which enables us to measure a 10^{7} dynamic range of spectral responsivity of photometers even with a common lamp-based monochromatic light source. We described design considerations and practices about operational amplifiers (op-amps), switches, readout methods, etc., to compose a stable SIA of low offset current in terms of leakage current and gain peaking in detail. According to the design, we made six SIAs of different integration capacitance and different op-amps and evaluated their offset currents. They showed an offset current of (1.5-85) fA with a slow variation of (0.5-10) fA for an hour under opened input. Applying a detector to the SIA input, the offset current and its variation were increased and the SIA readout became noisier due to finite shunt resistance and nonzero shunt capacitance of the detector. One of the SIAs with 10 pF nominal capacitance was calibrated using a calibrated current source at the current level of 10 nA to 1 fA and at the integration time of 2 to 65,536 ms. As a result, we obtained a calibration formula for integration capacitance as a function of integration time rather than a single capacitance value because the SIA readout showed a distinct dependence on integration time at a given current level. Finally, we applied it to spectral responsivity measurement of a photometer. It is demonstrated that the home-made SIA of 10 pF was capable of measuring a 10^{7} dynamic range of spectral responsivity of a photometer. PMID:27140564
Faghih Shojaei, M; Mohammadi, V; Rajabi, H; Darvizeh, A
2012-12-01
In this paper, a new numerical technique is presented to accurately model the geometrical and mechanical features of mollusk shells as a three dimensional (3D) integrated volume. For this purpose, the Newton method is used to solve the nonlinear equations of shell surfaces. The points of intersection on the shell surface are identified and the extra interior parts are removed. Meshing process is accomplished with respect to the coordinate of each point of intersection. The final 3D generated mesh models perfectly describe the spatial configuration of the mollusk shells. Moreover, the computational model perfectly matches with the actual interior geometry of the shells as well as their exterior architecture. The direct generation technique is employed to generate a 3D finite element (FE) model in ANSYS 11. X-ray images are taken to show the close similarity of the interior geometry of the models and the actual samples. A scanning electron microscope (SEM) is used to provide information on the microstructure of the shells. In addition, a set of compression tests were performed on gastropod shell specimens to obtain their ultimate compressive strength. A close agreement between experimental data and the relevant numerical results is demonstrated. PMID:23137621
Numerical validation of MR-measurement-integrated simulation of blood flow in a cerebral aneurysm.
Funamoto, Kenichi; Suzuki, Yoshitsugu; Hayase, Toshiyuki; Kosugi, Takashi; Isoda, Haruo
2009-06-01
This study proposes magnetic resonance (MR)-measurement-integrated (MR-MI) simulation, in which the difference between the computed velocity field and the phase-contrast MRI measurement data is fed back to the numerical simulation. The computational accuracy and the fundamental characteristics, such as steady characteristics and transient characteristics, of the MR-MI simulation were investigated by a numerical experiment. We dealt with reproduction of three-dimensional steady and unsteady blood flow fields in a realistic cerebral aneurysm developed at a bifurcation. The MR-MI simulation reduced the error derived from the incorrect boundary conditions in the blood flow in the cerebral aneurysm. For the reproduction of steady and unsteady standard solutions, the error of velocity decreased to 13% and to 22% in one cardiac cycle, respectively, compared with the ordinary simulation without feedback. Moreover, the application of feedback shortened the computational convergence, and thus the convergent solution and periodic solution were obtained within less computational time in the MR-MI simulation than that in the ordinary simulation. The dividing flow ratio toward the two outlets after bifurcation was well estimated owing to the improvement of computational accuracy. Furthermore, the MR-MI simulation yielded wall shear stress distribution on the cerebral aneurysm of the standard solution accurately and in detail. PMID:19350390
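The feedback in MR-measurement-integrated simulation plays the role of a nudging-type observer; the scalar sketch below (a hypothetical linear model with a deliberately wrong forcing term, standing in for an incorrect boundary condition) shows how feeding back the measured state reduces the resulting error.

```python
import numpy as np

def nudged_step(u, u_meas, dt, f, gain):
    """Forward-Euler step of a measurement-integrated simulation:
    the model du/dt = f(u) is driven toward the measured state by
    the feedback term gain * (u_meas - u)."""
    return u + dt * (f(u) + gain * (u_meas - u))

# Hypothetical example: the true system has forcing 1.0; the model
# (wrong boundary condition) only 0.5. Feedback recovers most of the error.
f_true = lambda u: -u + 1.0
f_model = lambda u: -u + 0.5
u_true = u_fb = u_plain = 0.0
for _ in range(2000):
    u_fb = nudged_step(u_fb, u_true, 0.01, f_model, gain=10.0)
    u_plain = nudged_step(u_plain, u_true, 0.01, f_model, gain=0.0)
    u_true = u_true + 0.01 * f_true(u_true)
```

The plain model settles at 0.5 while the true state reaches 1.0; with feedback the simulated state settles within about 5% of the truth, mirroring the error reduction and faster convergence reported in the abstract.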
Towards more accurate life cycle risk management through integration of DDP and PRA
NASA Technical Reports Server (NTRS)
Cornford, Steven L.; Paulos, Todd; Meshkat, Leila; Feather, Martin
2003-01-01
The focus of this paper is on the integration of PRA and DDP. The intent is twofold: to extend risk-based decision making through more of the lifecycle, and to lead to improved risk modeling (hence better informed decision making) wherever it is applied, most especially in the early phases as designs begin to mature.
Multi-Sensor Data Integration for an Accurate 3D Model Generation
NASA Astrophysics Data System (ADS)
Chhatkuli, S.; Satoh, T.; Tachibana, K.
2015-05-01
The aim of this paper is to introduce a novel technique of data integration between two different data sets, i.e. a laser scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more detail and better accuracy. In general, aerial imagery is used to create a 3D city model. Aerial imagery produces an overall decent 3D city model and generally suits the generation of 3D models of building roofs and some non-complex terrain. However, the 3D model generated automatically from aerial imagery generally suffers from a lack of accuracy in deriving the 3D model of roads under bridges, details under tree canopy, isolated trees, etc. Moreover, in many cases it also suffers from undulated road surfaces, non-conforming building shapes, and loss of minute details like street furniture. On the other hand, laser scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, details under bridges, etc. However, laser scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. In our proposed approach to integrating the multi-sensor data, the two data sets compensate for each other's weaknesses, which helped to create a very detailed 3D model with better accuracy. Moreover, the additional details like isolated trees, street furniture, etc., which were missing in the original 3D model derived from aerial imagery, could also be integrated into the final model automatically. During the process, noise in the laser scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two data sets were acquired in different time periods, the integrated data set, i.e. the final 3D model, was generally noise free and without unnecessary details.
NASA Astrophysics Data System (ADS)
Hedrick, A. R.; Marks, D. G.; Winstral, A. H.; Marshall, H. P.
2014-12-01
The ability to forecast snow water equivalent (SWE) in mountain catchments would benefit many different communities, ranging from avalanche hazard mitigation to water resource management. Historical model runs of Isnobal, the physically based energy balance snow model, have been produced over the 2150 km² Boise River Basin for water years 2012-2014 at 100-meter resolution. Spatially distributed forcing parameters such as precipitation, wind, and relative humidity are generated from automated weather stations located throughout the watershed, and are supplied to Isnobal at hourly timesteps. Similarly, the Weather Research & Forecasting (WRF) Model provides hourly predictions of the same forcing parameters from an atmospheric physics perspective. This work aims to quantitatively compare WRF model output to the spatial meteorological fields developed to force Isnobal, with the hope of eventually using WRF predictions to create accurate hourly forecasts of SWE over a large mountainous basin.
The numerical integration and 3-D finite element formulation of a viscoelastic model of glass
Chambers, R.S.
1994-08-01
The use of glasses is widespread in making hermetic, insulating seals for many electronic components. Flat panel displays and fiber optic connectors are other products utilizing glass as a structural element. When glass is cooled from sealing temperatures, residual stresses are generated due to mismatches in thermal shrinkage created by the dissimilar material properties of the adjoining materials. Because glass is such a brittle material at room temperature, tensile residual stresses must be kept small to ensure durability and avoid cracking. Although production designs and the required manufacturing process development can be deduced empirically, this is an expensive and time-consuming process that does not necessarily lead to an optimal design. Agile manufacturing demands that analyses be used to reduce development costs and schedules by providing insight and guiding the design process through the development cycle. To make these gains, however, viscoelastic models of glass must be available along with the right tool to use them. A viscoelastic model of glass can be used to simulate the stress and volume relaxation that occurs at elevated temperatures as the molecular structure of the glass seeks to equilibrate to the state of the supercooled liquid. The substance of the numerical treatment needed to support the implementation of the model in a 3-D finite element program is presented herein. An accurate second-order, central difference integrator is proposed for the constitutive equations, and numerical solutions are compared to those obtained with other integrators. Inherent convergence problems are reviewed and fixes are described. The resulting algorithms are generally applicable to the broad class of viscoelastic material models. First-order error estimates are used as a basis for developing a scheme for automatic time step controls, and several demonstration problems are presented to illustrate the performance of the methodology.
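The paper's constitutive model for glass involves many relaxation modes and structural relaxation; a single-mode sketch conveys the flavor of a second-order central-difference (trapezoidal) integrator for the Maxwell element d(sigma)/dt = E d(eps)/dt - sigma/tau (the symbols and single-mode form here are generic illustrations, not the paper's model).

```python
import math

def maxwell_stress_update(sigma, d_eps, dt, E=1.0, tau=1.0):
    """Second-order central-difference (trapezoidal) update for a single
    Maxwell element:  d(sigma)/dt = E * d(eps)/dt - sigma / tau.
    Evaluating the relaxation term at the midpoint of the step gives
    the implicit, unconditionally stable update below."""
    a = dt / (2.0 * tau)
    return (sigma * (1.0 - a) + E * d_eps) / (1.0 + a)
```

For pure stress relaxation (d_eps = 0) the update reproduces the exponential decay sigma(t) = sigma_0 exp(-t/tau) with second-order accuracy in dt, the behavior against which such integrators are typically verified.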
Construction of the two-electron contribution to the Fock matrix by numerical integration
NASA Astrophysics Data System (ADS)
Losilla, Sergio A.; Mehine, Mooses M.; Sundholm, Dage
2012-10-01
A novel method to numerically calculate the Fock matrix is presented. The Coulomb operator is re-expressed as an integral identity, which is discretized. The discretization of the auxiliary t dimension separates the x, y, and z dependencies, transforming the two-electron Coulomb integrals of Gaussian-type orbitals (GTO) to a linear sum of products of two-dimensional integrals. The s-type integrals are calculated analytically and integrals of the higher angular-momentum functions are obtained using recursion formulae. The contributions to the two-body Coulomb integrals obtained for each discrete t value can be evaluated independently. The two-body Fock matrix elements can be integrated numerically, using common sets of quadrature points and weights. The aim is to calculate Fock matrices of enough accuracy for electronic structure calculations. Preliminary calculations indicate that it is possible to achieve an overall accuracy of at least 10^{-12} E_h using the numerical approach.
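The integral identity underlying such methods is 1/r = (2/sqrt(pi)) * int_0^inf exp(-r^2 t^2) dt, which is what separates the Cartesian directions once the Gaussian orbitals are inserted. The sketch below discretizes the auxiliary t axis with a plain trapezoidal rule (the paper's quadrature is more elaborate) and recovers 1/r numerically.

```python
import numpy as np

def coulomb_via_t_quadrature(r, t_max=60.0, n=2001):
    """Approximate 1/r from the identity
       1/r = (2/sqrt(pi)) * int_0^inf exp(-r^2 t^2) dt
    by trapezoidal quadrature on a truncated auxiliary t axis."""
    t, h = np.linspace(0.0, t_max, n, retstep=True)
    y = np.exp(-(r * t) ** 2)
    return 2.0 / np.sqrt(np.pi) * h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])
```

Because the integrand is a Gaussian in t, each discrete t value contributes a factorized term exp(-r_x^2 t^2) exp(-r_y^2 t^2) exp(-r_z^2 t^2), which is the separation of x, y, and z dependencies that the abstract describes.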
Enabling fast, stable and accurate peridynamic computations using multi-time-step integration
Lindsay, P.; Parks, M. L.; Prakash, A.
2016-04-13
Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined, and several numerical examples are presented to corroborate the findings.
NASA Astrophysics Data System (ADS)
Roig, Jaume; Stefanov, Evgueniy; Morancho, Frédéric
2007-07-01
The use of super-junction (SJ) techniques in PIN photodiodes is proposed in this letter for the first time, with the objective to assist the optoelectronic integrated circuits (OEICs) implementation in complementary metal oxide semiconductor (CMOS), bipolar CMOS (BiCMOS) and bipolar-CMOS-double diffused MOS (BCD) technologies. Its technological viability is also discussed to make it credible as an alternative to other OEICs approaches. Numerical simulation of realistic SJ-PIN devices, widely used in high power electronics, demonstrates the possibility to integrate high-performance CMOS-based OEICs in epitaxial layers with doping concentrations above 1×10^{15} cm^{-3}. The induced lateral depletion at low reverse biased voltage, assisted by the alternated N- and P-doped pillars, allows high-speed transient response in SJ-PIN detecting wavelengths between 400 and 800 nm. Moreover, other important parameters such as the responsivity and the dark current are not degraded with respect to the conventional PIN (C-PIN) structures.
Wang, Yi; Lu, James; Yu, Jin; Gibbs, Richard A; Yu, Fuli
2013-05-01
Next-generation sequencing is a powerful approach for discovering genetic variation. Sensitive variant calling and haplotype inference from population sequencing data remain challenging. We describe methods for high-quality discovery, genotyping, and phasing of SNPs for low-coverage (approximately 5×) sequencing of populations, implemented in a pipeline called SNPTools. Our pipeline contains several innovations that specifically address challenges caused by low-coverage population sequencing: (1) effective base depth (EBD), a nonparametric statistic that enables more accurate statistical modeling of sequencing data; (2) variance ratio scoring, a variance-based statistic that discovers polymorphic loci with high sensitivity and specificity; and (3) BAM-specific binomial mixture modeling (BBMM), a clustering algorithm that generates robust genotype likelihoods from heterogeneous sequencing data. Last, we develop an imputation engine that refines raw genotype likelihoods to produce high-quality phased genotypes/haplotypes. Designed for large population studies, SNPTools has an input/output (I/O)- and storage-aware design that leads to improved computing performance on large sequencing data sets. We apply SNPTools to the International 1000 Genomes Project (1000G) Phase 1 low-coverage data set and obtain genotyping accuracy comparable to that of SNP microarray. PMID:23296920
Zhao, Li; Chen, Yiyun; Bajaj, Amol Onkar; Eblimit, Aiden; Xu, Mingchu; Soens, Zachry T; Wang, Feng; Ge, Zhongqi; Jung, Sung Yun; He, Feng; Li, Yumei; Wensel, Theodore G; Qin, Jun; Chen, Rui
2016-05-01
Proteomic profiling on subcellular fractions provides invaluable information regarding both protein abundance and subcellular localization. When integrated with other data sets, it can greatly enhance our ability to predict gene function genome-wide. In this study, we performed a comprehensive proteomic analysis on the light-sensing compartment of photoreceptors called the outer segment (OS). By comparing with the protein profile obtained from the retina tissue depleted of OS, an enrichment score for each protein is calculated to quantify protein subcellular localization, and 84% accuracy is achieved compared with experimental data. By integrating the protein OS enrichment score, the protein abundance, and the retina transcriptome, the probability of a gene playing an essential function in photoreceptor cells is derived with high specificity and sensitivity. As a result, a list of genes that will likely result in human retinal disease when mutated was identified and validated by previous literature and/or animal model studies. Therefore, this new methodology demonstrates the synergy of combining subcellular fractionation proteomics with other omics data sets and is generally applicable to other tissues and diseases. PMID:26912414
Liu, Lili; Zhang, Zijun; Mei, Qian; Chen, Ming
2013-01-01
Predicting the subcellular localization of proteins overcomes the major drawbacks of high-throughput localization experiments, which are costly and time-consuming. However, current subcellular localization predictors are limited in scope and accuracy. In particular, most predictors perform well on certain locations or with certain data sets while poorly on others. Here, we present PSI, a novel high-accuracy web server for plant subcellular localization prediction. PSI derives the wisdom of multiple specialized predictors via a joint approach of a group decision making strategy and machine learning methods to give an integrated best result. The overall accuracy obtained (up to 93.4%) was higher than that of the best individual predictor (CELLO) by ~10.7%. The precision of each predictable subcellular location (more than 80%) far exceeds that of the individual predictors. It can also deal with multi-localization proteins. PSI is expected to be a powerful tool in protein location engineering as well as in plant sciences, while the strategy employed could be applied to other integrative problems. A user-friendly web server, PSI, has been developed for free access at http://bis.zju.edu.cn/psi/. PMID:24194827
Lippert, Ross A; Predescu, Cristian; Ierardi, Douglas J; Mackenzie, Kenneth M; Eastwood, Michael P; Dror, Ron O; Shaw, David E
2013-10-28
In molecular dynamics simulations, control over temperature and pressure is typically achieved by augmenting the original system with additional dynamical variables to create a thermostat and a barostat, respectively. These variables generally evolve on timescales much longer than those of particle motion, but typical integrator implementations update the additional variables along with the particle positions and momenta at each time step. We present a framework that replaces the traditional integration procedure with separate barostat, thermostat, and Newtonian particle motion updates, allowing thermostat and barostat updates to be applied infrequently. Such infrequent updates provide a particularly substantial performance advantage for simulations parallelized across many computer processors, because thermostat and barostat updates typically require communication among all processors. Infrequent updates can also improve accuracy by alleviating certain sources of error associated with limited-precision arithmetic. In addition, separating the barostat, thermostat, and particle motion update steps reduces certain truncation errors, bringing the time-average pressure closer to its target value. Finally, this framework, which we have implemented on both general-purpose and special-purpose hardware, reduces software complexity and improves software modularity. PMID:24182003
Numerical integration of population models satisfying conservation laws: NSFD methods.
Mickens, Ronald E
2007-10-01
Population models arising in ecology, epidemiology and mathematical biology may involve a conservation law, i.e. the total population is constant. In other situations, the total population may instead approach a constant value asymptotically in time. Since it is rarely possible to solve the equations of motion analytically to obtain exact solutions, numerical techniques are needed to provide solutions. However, numerical procedures are only valid if they can reproduce fundamental properties of the differential equations modeling the phenomena of interest. We show that for population models involving a dynamical conservation law, the use of nonstandard finite difference (NSFD) methods allows the construction of discretization schemes that are dynamically consistent (DC) with the original differential equations. The paper briefly discusses the NSFD methodology and the concept of DC, and illustrates their application to specific population models. PMID:22876826
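As a concrete illustration of the NSFD idea described in this abstract, the sketch below applies a Mickens-style nonlocal discretization to the SIS epidemic model, a simple population model with the conservation law S + I = N. The model choice, parameter values, and the particular discretization are assumptions for illustration, not taken from the paper itself.

```python
# Hedged sketch: a nonstandard finite difference (NSFD) scheme for the
# SIS epidemic model,
#     dS/dt = -beta*S*I + gamma*I
#     dI/dt =  beta*S*I - gamma*I     (so d(S+I)/dt = 0, i.e. S + I = N).
# Substituting S = N - I and using a nonlocal (Mickens-style)
# discretization gives an explicit, positivity-preserving update that is
# dynamically consistent with the fixed points I* = 0 and I* = N - gamma/beta.

def nsfd_sis_step(I, N, beta, gamma, h):
    """One NSFD step for the reduced equation I' = beta*I*(N - I) - gamma*I."""
    return (I + h * beta * N * I) / (1.0 + h * beta * I + h * gamma)

def simulate(I0, N, beta, gamma, h, steps):
    I = I0
    for _ in range(steps):
        I = nsfd_sis_step(I, N, beta, gamma, h)
    S = N - I  # the conservation law S + I = N holds by construction
    return S, I

# Even at the large step size h = 0.5, the iterates stay positive and
# converge to the endemic equilibrium I* = N - gamma/beta = 2/3.
S, I = simulate(I0=0.01, N=1.0, beta=0.3, gamma=0.1, h=0.5, steps=2000)
```

Note the design choice: the nonlinear term beta\*S\*I is evaluated nonlocally (mixing time levels), which is what makes the scheme solvable explicitly while preserving positivity and the correct fixed points.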
Impact of numerical integration on gas curtain simulations
Rider, W.; Kamm, J.
2000-11-01
In recent years, we have presented a less than glowing experimental comparison of hydrodynamic codes with the gas curtain experiment (e.g., Kamm et al. 1999a). Here, we discuss the manner in which the details of the hydrodynamic integration techniques may conspire to produce poor results. This also includes some progress in improving the results and agreement with experimental results. Because our comparison was conducted on the details of the experimental images (i.e., their detailed structural information), our results do not conflict with previously published results of good agreement with Richtmyer-Meshkov instabilities based on the integral scale of mixing. New experimental and analysis techniques are also discussed.
NASA Astrophysics Data System (ADS)
Toldo, R.; Fantini, F.; Giona, L.; Fantoni, S.; Fusiello, A.
2013-02-01
A novel multi-view stereo reconstruction method is presented. The algorithm is focused on accuracy and is highly engineered, with some parts taking advantage of the graphics processing unit. In addition, it is seamlessly integrated with the output of a structure and motion pipeline. In the first part of the algorithm a depth map is extracted independently for each image. The final depth map is generated from the depth hypotheses using a Markov random field optimization technique over the image grid. An octree data structure accumulates the votes coming from each depth map. A novel procedure to remove rogue points is proposed that takes into account the visibility information and the matching score of each point. Finally a texture map is built by wisely making use of both the visibility and the view angle information. Several results show the effectiveness of the algorithm under different working scenarios.
Numerical implications of stabilization by the use of integrals
NASA Technical Reports Server (NTRS)
Beaudet, P. R.
1975-01-01
Liapunov or energy restraint methods for dynamic stabilization in two body motion perturbation problems are considered. Results of computerized orbital stabilization estimates show that the application of energy restraint prevents the occurrence of consistent timing errors in the stepwise integration of equations of motion for a nearly circular orbit.
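The stabilization idea summarized in this abstract can be illustrated with a minimal sketch: after each raw integrator step for the Kepler problem, the velocity is rescaled so the state satisfies the known energy integral, which suppresses the secular drift that otherwise accumulates. This is a generic projection-onto-an-integral illustration under simplifying assumptions (normalized units, crude explicit Euler base integrator), not a reconstruction of the paper's specific Liapunov/energy-restraint formulation.

```python
import math

# Hedged sketch of "stabilization by the use of integrals" for the
# two-body problem with mu = 1: after each base step, project the
# velocity onto the level set of the energy integral
#     E = v^2/2 - 1/r.

def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

def step_projected(state, h, E0):
    x, y, vx, vy = state
    ax, ay = accel(x, y)
    x, y = x + h * vx, y + h * vy        # crude explicit Euler step...
    vx, vy = vx + h * ax, vy + h * ay
    r = math.hypot(x, y)
    v2_target = 2.0 * (E0 + 1.0 / r)     # ...then rescale v so E = E0 exactly
    scale = math.sqrt(v2_target / (vx * vx + vy * vy))
    return x, y, vx * scale, vy * scale

state = (1.0, 0.0, 0.0, 1.0)             # unit circular orbit, energy E0 = -1/2
E0 = -0.5
for _ in range(10000):
    state = step_projected(state, 1e-3, E0)
x, y, vx, vy = state
energy = 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y)
# energy remains pinned at E0 despite the low-order base integrator
```

Without the projection, explicit Euler on a circular orbit spirals outward as energy grows each step; the constraint removes that consistent error, in the spirit of the timing-error suppression reported above.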
Daeva, S.G.; Setukha, A.V.
2015-03-10
A numerical method is proposed for solving the problem of diffraction of acoustic waves by a system of solid and thin objects, based on reducing the problem to a boundary integral equation in which the integral is understood in the sense of the Hadamard finite part. To solve this equation we apply a numerical scheme based on piecewise constant approximations and the collocation method. The constructed scheme differs from earlier known ones in that approximate analytical expressions are obtained for the coefficients of the resulting system of linear equations by separating out the main part of the kernel of the integral operator. The proposed numerical scheme is tested on the model problem of diffraction of an acoustic wave by an inelastic sphere.
Advances in numerical solutions to integral equations in liquid state theory
NASA Astrophysics Data System (ADS)
Howard, Jesse J.
Solvent effects play a vital role in the accurate description of the free energy profile for solution-phase chemical and structural processes. The inclusion of solvent effects in any meaningful theoretical model, however, has proven to be a formidable task. Generally, methods involving Poisson-Boltzmann (PB) theory and molecular dynamics (MD) simulations are used, but they either fail to accurately describe the solvent effects or require an exhaustive computational effort to overcome sampling problems. An alternative to these methods is the integral equations (IEs) of liquid state theory, which have become more widely applicable due to recent advancements in the theory of interaction site fluids and in the numerical methods used to solve the equations. In this work a new numerical method is developed based on a Newton-type scheme coupled with Picard/MDIIS routines. To extend the range of these numerical methods to large-scale systems, the size of the Jacobian is reduced using basis functions, and the Newton steps are calculated using a GMRes solver. The method is then applied to calculate solutions to the 3D reference interaction site model (RISM) IEs of statistical mechanics, which are derived from first principles, for a solute model of a pair of parallel graphene plates at various separations in pure water. The 3D IEs are then extended to electrostatic models using an exact treatment of the long-range Coulomb interactions for negatively charged walls and DNA duplexes in aqueous electrolyte solutions to calculate the density profiles and solution thermodynamics. It is found that the 3D IEs provide a qualitative description of the density distributions of the solvent species when compared to MD results, but at a much reduced computational effort in comparison to MD simulations. The thermodynamics of the solvated systems are also qualitatively reproduced by the IE results. The findings of this work show the IEs to be a valuable tool for the study and prediction of
Integrated numerical methods for hypersonic aircraft cooling systems analysis
NASA Technical Reports Server (NTRS)
Petley, Dennis H.; Jones, Stuart C.; Dziedzic, William M.
1992-01-01
Numerical methods have been developed for the analysis of hypersonic aircraft cooling systems. A general purpose finite difference thermal analysis code is used to determine areas which must be cooled. Complex cooling networks of series and parallel flow can be analyzed using a finite difference computer program. Both internal fluid flow and heat transfer are analyzed, because increased heat flow causes a decrease in the flow of the coolant. The steady state solution is obtained by a successive point iterative method. The transient analysis uses implicit forward-backward differencing. Several examples of the use of the program in studies of hypersonic aircraft and rockets are provided.
EZ-Rhizo: integrated software for the fast and accurate measurement of root system architecture.
Armengaud, Patrick; Zambaux, Kevin; Hills, Adrian; Sulpice, Ronan; Pattison, Richard J; Blatt, Michael R; Amtmann, Anna
2009-03-01
The root system is essential for the growth and development of plants. In addition to anchoring the plant in the ground, it is the site of uptake of water and minerals from the soil. Plant root systems show an astonishing plasticity in their architecture, which allows for optimal exploitation of diverse soil structures and conditions. The signalling pathways that enable plants to sense and respond to changes in soil conditions, in particular nutrient supply, are a topic of intensive research, and root system architecture (RSA) is an important and obvious phenotypic output. At present, the quantitative description of RSA is labour intensive and time consuming, even using the currently available software, and the lack of a fast RSA measuring tool hampers forward and quantitative genetics studies. Here, we describe EZ-Rhizo: a Windows-integrated and semi-automated computer program designed to detect and quantify multiple RSA parameters from plants growing on a solid support medium. The method is non-invasive, enabling the user to follow RSA development over time. We have successfully applied EZ-Rhizo to evaluate natural variation in RSA across 23 Arabidopsis thaliana accessions, and have identified new RSA determinants as a basis for future quantitative trait locus (QTL) analysis. PMID:19000163
iCut: an Integrative Cut Algorithm Enables Accurate Segmentation of Touching Cells
He, Yong; Gong, Hui; Xiong, Benyi; Xu, Xiaofeng; Li, Anan; Jiang, Tao; Sun, Qingtao; Wang, Simin; Luo, Qingming; Chen, Shangbin
2015-01-01
Individual cells play essential roles in the biological processes of the brain. The number of neurons changes during both normal development and disease progression. High-resolution imaging has made it possible to directly count cells. However, the automatic and precise segmentation of touching cells continues to be a major challenge for massive and highly complex datasets. Thus, an integrative cut (iCut) algorithm, which combines information regarding spatial location and intervening and concave contours with the established normalized cut, has been developed. iCut involves two key steps: (1) a weighting matrix is first constructed with the abovementioned information regarding the touching cells and (2) a normalized cut algorithm that uses the weighting matrix is implemented to separate the touching cells into isolated cells. This novel algorithm was evaluated using two types of data: the open SIMCEP benchmark dataset and our micro-optical imaging dataset from a Nissl-stained mouse brain. It has achieved a promising recall/precision of 91.2 ± 2.1%/94.1 ± 1.8% and 86.8 ± 4.1%/87.5 ± 5.7%, respectively, for the two datasets. As quantified using the harmonic mean of recall and precision, the accuracy of iCut is higher than that of some state-of-the-art algorithms. The better performance of this fully automated algorithm can benefit studies of brain cytoarchitecture. PMID:26168908
Boriskina, Svetlana V; Sewell, Phillip; Benson, Trevor M; Nosich, Alexander I
2004-03-01
A fast and accurate method is developed to compute the natural frequencies and scattering characteristics of arbitrary-shape two-dimensional dielectric resonators. The problem is formulated in terms of a uniquely solvable set of second-kind boundary integral equations and discretized by the Galerkin method with angular exponents as global test and trial functions. The log-singular term is extracted from one of the kernels, and closed-form expressions are derived for the main parts of all the integral operators. The resulting discrete scheme has a very high convergence rate. The method is used in the simulation of several optical microcavities for modern dense wavelength-division-multiplexed systems. PMID:15005404
Minesaki, Yukitaka
2013-08-01
For the restricted three-body problem, we propose an accurate orbital integration scheme that retains all conserved quantities of the two-body problem with two primaries and approximately preserves the Jacobi integral. The scheme is obtained by taking the limit as mass approaches zero in the discrete-time general three-body problem. For a long time interval, the proposed scheme precisely reproduces various periodic orbits that cannot be accurately computed by other generic integrators.
Luo, Longqiang; Li, Dingfang; Zhang, Wen; Tu, Shikui; Zhu, Xiaopeng; Tian, Gang
2016-01-01
Background: Piwi-interacting RNA (piRNA) is the largest class of small non-coding RNA molecules. Transposon-derived piRNA prediction can enrich the research content of small ncRNAs and help to further understand the generation mechanism of gametes. Methods: In this paper, we attempt to differentiate transposon-derived piRNAs from non-piRNAs based on their sequential and physicochemical features by using machine learning methods. We explore six sequence-derived features, i.e. spectrum profile, mismatch profile, subsequence profile, position-specific scoring matrix, pseudo dinucleotide composition and local structure-sequence triplet elements, and systematically evaluate their performances for transposon-derived piRNA prediction. Finally, we consider two approaches, direct combination and ensemble learning, to integrate useful features and achieve high-accuracy prediction models. Results: We construct three datasets, covering three species: Human, Mouse and Drosophila, and evaluate the performances of prediction models by 10-fold cross validation. In the computational experiments, direct combination models achieve AUC of 0.917, 0.922 and 0.992 on Human, Mouse and Drosophila, respectively; ensemble learning models achieve AUC of 0.922, 0.926 and 0.994 on the three datasets. Conclusions: Compared with other state-of-the-art methods, our methods can lead to better performances. In conclusion, the proposed methods are promising for transposon-derived piRNA prediction. The source codes and datasets are available in S1 File. PMID:27074043
Integrating Data from Several Remotely Sensed Platforms to Accurately Map Wetlands
NASA Astrophysics Data System (ADS)
Corcoran, Jennifer Marie
Traditional wetland mapping methods are in need of modernization. They typically depend solely on a few dates of optical imagery and on cloud-free data acquisition, and therefore surface features of interest are often obstructed, inaccurately mapped, or not present during data acquisition. Despite being limited to cloud-free daylight acquisition, multi-temporal multi-spectral optical data are still highly valuable for mapping wetlands and classifying wetland type. Radar sensors, however, are insensitive to atmospheric and low-light conditions, and thus can offer more consistent multi-temporal image acquisition. Unique characteristics of surface scattering mechanisms, such as the saturated extent of wetlands, can be found by utilizing both the intensity and phase information from multiple polarizations and multiple wavelengths of radar data. In addition, information from lidar can reveal important details about the variability and structure of surface features, and the potential for water to collect in certain areas. The research presented in this dissertation will show important developments in wetland mapping by integrating several platforms of remotely sensed data: two sources of radar data, fully polarimetric RADARSAT-2 (C-band) and dual-pol PALSAR (L-band); two sources of optical data, Landsat TM imagery and aerial orthophotos; and lidar point cloud data with intensity and derived topographic indices. Decision tree classification using the random forest model will be utilized to take advantage of the unique differences in these data. Assessments of outputs from random forest will be used to identify the most significant data sources for two levels of land cover classification: discriminating between water, wetland, and upland areas, and sub-classifying wetland type. It is expected that results from this research will deliver a valuable, affordable, and practical wetland probability tool to aid
Zhao, Minghua; Liu, Yonghong; Feng, Yaning; Zhang, Ming; He, Lifeng; Suzuki, Kenji
2016-01-01
Accurate lung segmentation is an essential step in developing a computer-aided lung disease diagnosis system. However, because of the high variability of computed tomography (CT) images, it remains a difficult task to accurately segment lung tissue in CT slices using a simple strategy. Motivated by this, a novel CT lung segmentation method based on the integration of multiple strategies is proposed in this paper. First, to suppress noise, the input CT slice is smoothed using the guided filter. The smoothed slice is then transformed into a binary image using an optimized threshold, and a region growing strategy is employed to extract thorax regions. Lung regions are then segmented from the thorax regions using a seed-based random walk algorithm. The segmented lung contour is smoothed and corrected with a curvature-based correction method on each axial slice. Finally, with the lung masks, the lung region is automatically segmented from a CT slice. The proposed method was validated on a CT database consisting of 23 scans, comprising 883 2D slices (38 slices per scan), by comparing it to a commonly used lung segmentation method. Experimental results show that the proposed method accurately segments lung regions in CT slices.
An Integrated Numerical Hydrodynamic Shallow Flow-Solute Transport Model for Urban Area
NASA Astrophysics Data System (ADS)
Alias, N. A.; Mohd Sidek, L.
2016-03-01
Rapidly changing land profiles in some urban areas of Malaysia have led to increasing flood risk. Extensive development in densely populated areas and urbanization worsen the flood scenario. An early warning system is very important, and a popular approach is to numerically simulate the river and flood flows. There are many two-dimensional (2D) flood models that predict the flood level, but in some circumstances it is still difficult to resolve the river reach in a 2D manner. A systematic early warning system requires a precise prediction of flow depth; hence a reliable one-dimensional (1D) model that provides an accurate description of the flow is essential. This research also aims to resolve open issues such as the fate of pollutants in a river reach by developing an integrated hydrodynamic shallow flow-solute transport model. Presented in this paper are results on flow prediction for Sungai Penchala and on the convection-diffusion of solute transport simulated by the developed model.
Theoretical study of the partial derivatives produced by numerical integration of satellite orbits.
NASA Astrophysics Data System (ADS)
Hadjifotinou, K. G.; Ichtiaroglou, S.
1997-06-01
For the two-body system Saturn-Mimas and the theoretical three-body non-resonant system Saturn-Mimas-Tethys we present a theoretical analysis of the behaviour of the partial derivatives of the satellites' coordinates with respect to the parameters of the system, namely the satellites' initial conditions and their mass-ratios over Saturn. With the use of Floquet theory for the stability of periodic orbits we prove that all the partial derivatives have amplitudes that increase linearly with time. Their motion is a combination of periodic motions whose periods can also be accurately predicted by the theory. This theoretical model can be used for checking the accuracy of the results of the different numerical integration methods used on satellite systems with the purpose of fitting the results to observations or analytical theories. On this basis, in the last part of the paper we extend the investigation of Hadjifotinou & Harper (1995A&A...303..940H) on the stability and efficiency of the 10th-order Gauss-Jackson backward difference and the Runge-Kutta-Nystroem RKN12(10)17M methods by now applying them to the above mentioned three-body system.
Jung, Hee-Jung; Purvine, Samuel O.; Kim, Hokeun; Petyuk, Vladislav A.; Hyung, Seok-Won; Monroe, Matthew E.; Mun, Dong-Gi; Kim, Kyong-Chul; Park, Jong-Moon; Kim, Su-Jin; Tolic, Nikola; Slysz, Gordon W.; Moore, Ronald J.; Zhao, Rui; Adkins, Joshua N.; Anderson, Gordon A.; Lee, Hookeun; Camp, David G.; Yu, Myeong-Hee; Smith, Richard D.; Lee, Sang-Won
2010-01-01
Accurate assignment of monoisotopic precursor masses to tandem mass spectrometric (MS/MS) data is a fundamental and critically important step for successful peptide identifications in mass spectrometry based proteomics. Here we describe an integrated approach that combines three previously reported methods of treating MS/MS data for precursor mass refinement. This combined method, “integrated Post-Experiment Monoisotopic Mass Refinement” (iPE-MMR), integrates steps: 1) generation of refined MS/MS data by DeconMSn; 2) additional refinement of the resultant MS/MS data by a modified version of PE-MMR; 3) elimination of systematic errors of precursor masses using DtaRefinery. iPE-MMR is the first method that utilizes all MS information from multiple MS scans of a precursor ion including multiple charge states, in an MS scan, to determine precursor mass. By combining these methods, iPE-MMR increases sensitivity in peptide identification and provides increased accuracy when applied to complex high-throughput proteomics data. PMID:20863060
NASA Astrophysics Data System (ADS)
Zhang, Ningyu; Cheng, Chuanfu; Teng, Shuyun; Chen, Xiaoyi; Xu, Zhizhan
2007-09-01
A new approach based on the gated integration technique is proposed for the accurate measurement of the autocorrelation function of speckle intensities scattered from a random phase screen. The Boxcar used in this technique for the acquisition of the speckle intensity data integrates the photoelectric signal while its sampling gate is open, and it repeats the sampling a preset number of times, m. The averaged analog output of the m samplings from the Boxcar enhances the signal-to-noise ratio by a factor of √m, because the repeated sampling and averaging make the useful speckle signals stable, while the randomly varying photoelectric noise is suppressed by 1/√m. In the experiment, we use an analog-to-digital converter module to synchronize all the actions, such as the stepped movement of the phase screen, the repeated sampling, and the readout of the averaged output of the Boxcar. The experimental results show that speckle signals are better recovered from contaminated signals, and the autocorrelation function with the secondary maximum is obtained, indicating that the accuracy of the measurement of the autocorrelation function is greatly improved by the gated integration technique.
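The √m noise-suppression principle behind gated integration can be checked numerically: averaging m repeated samples of a fixed signal contaminated by independent Gaussian noise reduces the noise standard deviation by 1/√m. The values below are synthetic, chosen only to demonstrate the statistics.

```python
import random
import statistics

# Hedged numerical illustration of the averaging principle: m repeated
# samplings of the same signal, each with independent Gaussian noise,
# averaged together.  The signal level, noise sigma, and m are arbitrary.
random.seed(42)
signal, sigma, m, trials = 1.0, 0.5, 100, 4000

def measure(n_samples):
    """Average of n_samples noisy readings of the same fixed signal."""
    return sum(signal + random.gauss(0.0, sigma) for _ in range(n_samples)) / n_samples

single = [measure(1) for _ in range(trials)]    # one-shot measurements
averaged = [measure(m) for _ in range(trials)]  # gated-integration analogue

# Empirical noise reduction factor; theory predicts sqrt(m) = 10.
ratio = statistics.stdev(single) / statistics.stdev(averaged)
```

With m = 100 repeated samplings, the measured ratio comes out close to 10, matching the √m improvement quoted in the abstract.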
An efficient step-size control method in numerical integration for astrodynamical equations
NASA Astrophysics Data System (ADS)
Liu, C. Z.; Cui, D. X.
2002-11-01
Using the curvature of the integral curve, a step-size control method is introduced in this paper. This method proves to be an efficient scheme in the sense that it saves computation time and improves the accuracy of numerical integration.
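A minimal sketch of the idea: for a scalar ODE y' = f(t, y), the curvature of the integral curve (t, y(t)) is κ = |y''| / (1 + y'²)^{3/2}, and the step is shrunk where the curve bends sharply. The control law h = h_max / (1 + c·κ) and the constants below are illustrative assumptions, not the paper's actual scheme.

```python
import math

# Hedged sketch: curvature-controlled step size driving a classical RK4
# integrator.  The caller supplies f(t, y) = y' and dfdt(t, y) = y''
# (i.e. f_t + f*f_y along the solution).
def integrate_curvature_rk4(f, dfdt, t0, y0, t_end, h_max=0.1, c=5.0):
    t, y = t0, y0
    while t < t_end:
        yp = f(t, y)                                  # slope y'
        ypp = dfdt(t, y)                              # y'' along the curve
        kappa = abs(ypp) / (1.0 + yp * yp) ** 1.5     # local curvature
        h = min(h_max / (1.0 + c * kappa), t_end - t) # shrink step where curved
        k1 = f(t, y)                                  # classical RK4 stages
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += h
    return y

# Test problem: y' = y (so y'' = y), y(0) = 1, integrated to t = 1;
# the exact answer is e.
y1 = integrate_curvature_rk4(lambda t, y: y, lambda t, y: y, 0.0, 1.0, 1.0)
```

The payoff of such a control law is that steps stay large on nearly straight stretches of the solution and shrink automatically where the trajectory turns, trading a cheap curvature estimate for fewer rejected or wasted steps.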
Integrated numeric and symbolic signal processing using a heterogeneous design environment
NASA Astrophysics Data System (ADS)
Mani, Ramamurthy; Nawab, S. Hamid; Winograd, Joseph M.; Evans, Brian L.
1996-10-01
We present a solution to a complex multi-tone transient detection problem to illustrate the integrated use of symbolic and numeric processing techniques which are supported by well-established underlying models. Examples of such models include synchronous dataflow for numeric processing and the blackboard paradigm for symbolic heuristic search. Our transient detection solution serves to emphasize the importance of developing system design methods and tools which can support the integrated use of well-established symbolic and numerical models of computation. Recently, we incorporated a blackboard-based model of computation underlying the Integrated Processing and Understanding of Signals (IPUS) paradigm into a system-level design environment for numeric processing called Ptolemy. Using the IPUS/Ptolemy environment, we are implementing our solution to the multi-tone transient detection problem.
Numerical solution of a class of integral equations arising in two-dimensional aerodynamics
NASA Technical Reports Server (NTRS)
Fromme, J.; Golberg, M. A.
1978-01-01
We consider the numerical solution of a class of integral equations arising in the determination of the compressible flow about a thin airfoil in a ventilated wind tunnel. The integral equations are of the first kind with kernels having a Cauchy singularity. Using appropriately chosen Hilbert spaces, it is shown that the kernel gives rise to a mapping which is the sum of a unitary operator and a compact operator. This allows the problem to be studied in terms of an equivalent integral equation of the second kind. A convergent numerical algorithm for its solution is derived by using Galerkin's method. It is shown that this algorithm is numerically equivalent to Bland's collocation method, which is then used as the method of computation. Extensive numerical calculations are presented establishing the validity of the theory.
NASA Astrophysics Data System (ADS)
Imada, Masatoshi; Kashima, Tsuyoshi
2000-09-01
A numerical algorithm for studying strongly correlated electron systems is proposed. The ground-state wavefunction is projected out after a numerical renormalization procedure in the path integral formalism. The wavefunction is expressed as an optimized linear combination of retained states in the truncated Hilbert space with a numerically chosen basis. This algorithm does not suffer from the negative sign problem and can be applied to any type of Hamiltonian in any dimension. Its efficiency is tested on examples of the Hubbard model where the basis of Slater determinants is numerically optimized. We show results on the fast convergence and accuracy achieved with a small number of retained states.
Kim, Ellen S; Satter, Martin; Reed, Marilyn; Fadell, Ronald; Kardan, Arash
2016-06-01
Glioblastoma multiforme (GBM) is the most common and lethal malignant glioma in adults. Currently, the modality of choice for diagnosing brain tumor is high-resolution magnetic resonance imaging (MRI) with contrast, which provides anatomic detail and localization. Studies have demonstrated, however, that MRI may have limited utility in delineating the full tumor extent precisely. Studies suggest that MR spectroscopy (MRS) can also be used to distinguish high-grade from low-grade gliomas. However, due to operator dependent variables and the heterogeneous nature of gliomas, the potential for error in diagnostic accuracy with MRS is a concern. Positron emission tomography (PET) imaging with (11)C-methionine (MET) and (18)F-fluorodeoxyglucose (FDG) has been shown to add additional information with respect to tumor grade, extent, and prognosis based on the premise of biochemical changes preceding anatomic changes. Combined PET/MRS is a technique that integrates information from PET in guiding the location for the most accurate metabolic characterization of a lesion via MRS. We describe a case of glioblastoma multiforme in which MRS was initially non-diagnostic for malignancy, but when MRS was repeated with PET guidance, demonstrated elevated choline/N-acetylaspartate (Cho/NAA) ratio in the right parietal mass consistent with a high-grade malignancy. Stereotactic biopsy, followed by PET image-guided resection, confirmed the diagnosis of grade IV GBM. To our knowledge, this is the first reported case of an integrated PET/MRS technique for the voxel placement of MRS. Our findings suggest that integrated PET/MRS may potentially improve diagnostic accuracy in high-grade gliomas. PMID:27122050
A comparison of the efficiency of numerical methods for integrating chemical kinetic rate equations
NASA Technical Reports Server (NTRS)
Radhakrishnan, K.
1984-01-01
The efficiency of several algorithms used for numerical integration of stiff ordinary differential equations was compared. The methods examined included two general purpose codes, EPISODE and LSODE, and three codes (CHEMEQ, CREK1D and GCKP84) developed specifically to integrate chemical kinetic rate equations. The codes were applied to two test problems drawn from combustion kinetics. The comparisons show that LSODE is the fastest code available for the integration of combustion kinetic rate equations. It is shown that an iterative solution of the algebraic energy conservation equation to compute the temperature can be more efficient than evaluating the temperature by integrating its time-derivative.
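The reason stiffness dictates solver choice, implicit codes like LSODE winning on kinetic rate equations, can be shown with a minimal sketch: explicit and implicit Euler applied to the stiff linear test problem y' = -λy at a step size far above the explicit stability limit h < 2/λ. This toy problem and its parameters are illustrative assumptions, not drawn from the paper's combustion test cases.

```python
# Hedged illustration: explicit vs. implicit Euler on y' = -lam*y.
# The exact solution decays to 0, but with h*lam = 5 the explicit
# amplification factor (1 - h*lam) = -4 has magnitude > 1.
lam, h, steps, y0 = 500.0, 0.01, 100, 1.0

y_exp = y0
for _ in range(steps):
    y_exp += h * (-lam * y_exp)       # explicit Euler: multiplies by -4 each step

y_imp = y0
for _ in range(steps):
    y_imp = y_imp / (1.0 + h * lam)   # implicit Euler: multiplies by 1/6 each step

# y_exp has blown up (|y_exp| = 4**100), while y_imp has decayed toward
# the true limit 0 -- the implicit method is stable at any h > 0 here.
```

For nonlinear kinetic systems the implicit update requires a Newton solve per step, which is why the per-step cost is higher but the allowable step size, and hence overall efficiency, is dramatically better on stiff problems.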
Controlled time integration for the numerical simulation of meteor radar reflections
NASA Astrophysics Data System (ADS)
Räbinä, Jukka; Mönkölä, Sanna; Rossi, Tuomo; Markkanen, Johannes; Gritsevich, Maria; Muinonen, Karri
2016-07-01
We model meteoroids entering the Earth's atmosphere as objects surrounded by non-magnetized plasma, and consider efficient numerical simulation of radar reflections from meteors in the time domain. Instead of the widely used finite difference time domain method (FDTD), we use more generalized finite differences by applying the discrete exterior calculus (DEC) and non-uniform leapfrog-style time discretization. The computational domain is represented by convex polyhedral elements. The convergence of the time integration is accelerated by the exact controllability method. The numerical experiments show that our code is efficiently parallelized. The DEC approach is compared to the volume integral equation (VIE) method by numerical experiments. The result is that both methods are competitive in modelling non-magnetized plasma scattering. For demonstrating the simulation capabilities of the DEC approach, we present numerical experiments of radar reflections and vary parameters in a wide range.
Orbit determination based on meteor observations using numerical integration of equations of motion
NASA Astrophysics Data System (ADS)
Dmitriev, Vasily; Lupovka, Valery; Gritsevich, Maria
2015-11-01
Recently, there has been a worldwide proliferation of instruments and networks dedicated to observing meteors, including airborne and future space-based monitoring systems. There has been a corresponding rapid rise in the volume of high quality data accumulating annually. In this paper, we present a method embodied in the open-source software program "Meteor Toolkit", which can effectively and accurately process these data in an automated mode and discover the pre-impact orbit and possibly the origin or parent body of a meteoroid or asteroid. The required input parameters are the topocentric pre-atmospheric velocity vector and the coordinates of the atmospheric entry point of the meteoroid, i.e. the beginning point of the visual path of a meteor, in an Earth centered-Earth fixed coordinate system, the International Terrestrial Reference Frame (ITRF). Our method is based on strict coordinate transformation from the ITRF to an inertial reference frame and on numerical integration of the equations of motion for a perturbed two-body problem. Basic accelerations perturbing a meteoroid's orbit and their influence on the orbital elements are also studied and demonstrated. Our method is then compared with several published studies that utilized variations of a traditional analytical technique, the zenith attraction method, which corrects for the direction of the meteor's trajectory and its apparent velocity due to Earth's gravity. We then demonstrate the proposed technique on new observational data obtained from the Finnish Fireball Network (FFN) as well as on simulated data. In addition, we propose a method of analysis of error propagation, based on the general rule of covariance transformation.
Romá, Federico; Cugliandolo, Leticia F; Lozano, Gustavo S
2014-08-01
We introduce a numerical method to integrate the stochastic Landau-Lifshitz-Gilbert equation in spherical coordinates for generic discretization schemes. This method conserves the magnetization modulus and ensures the approach to equilibrium under the expected conditions. We test the algorithm on a benchmark problem: the dynamics of a uniformly magnetized ellipsoid. We investigate the influence of various parameters, and in particular, we analyze the efficiency of the numerical integration, in terms of the number of steps needed to reach a chosen long time with a given accuracy. PMID:25215839
Toyoda, Masayuki; Ozaki, Taisuke
2009-03-28
A numerical method to calculate the four-center electron-repulsion integrals for strictly localized pseudoatomic orbital basis sets has been developed. Compared to the conventional Gaussian expansion method, this method has an advantage in the ease of combination with O(N) density functional calculations. Additional mathematical derivations are also presented including the analytic derivatives of the integrals with respect to atomic positions and spatial damping of the Coulomb interaction due to the screening effect. In the numerical test for a simple molecule, the convergence up to 10^-5 hartree in energy is successfully obtained with a feasible cost of computation. PMID:19334815
Numerical solutions to ill-posed and well-posed impedance boundary condition integral equations
NASA Astrophysics Data System (ADS)
Rogers, J. R.
1983-11-01
Exterior scattering from a three-dimensional impedance body can be formulated in terms of various integral equations derived from the Leontovich impedance boundary condition (IBC). The electric and magnetic field integral equations are ill-posed because they theoretically admit spurious solutions at the frequencies of interior perfect conductor cavity resonances. A combined field formulation is well-posed because it does not allow the spurious solutions. This report outlines the derivation of IBC integral equations and describes a procedure for constructing moment-method solutions for bodies of revolution. Numerical results for scattering from impedance spheres are presented which contrast the stability and accuracy of solutions to the ill-posed equations with those of the well-posed equation. The results show that numerical solutions for exterior scattering to the electric and magnetic field integral equations can be severely contaminated by spurious resonant solutions regardless of whether the surface impedance of the body is lossy or lossless.
Accurate path integral molecular dynamics simulation of ab-initio water at near-zero added cost
NASA Astrophysics Data System (ADS)
Elton, Daniel; Fritz, Michelle; Soler, José; Fernandez-Serra, Marivi
It is now established that nuclear quantum motion plays an important role in determining water's structure and dynamics. These effects are important to consider when evaluating DFT functionals and attempting to develop better ones for water. The standard way of treating nuclear quantum effects, path integral molecular dynamics (PIMD), multiplies the number of energy/force calculations by the number of beads, which is typically 32. Here we introduce a method whereby PIMD can be incorporated into a DFT molecular dynamics simulation at virtually zero cost. The method is based on the cluster (many body) expansion of the energy. We first subtract the DFT monomer energies, using a custom DFT-based monomer potential energy surface. The evolution of the PIMD beads is then performed using only the more-accurate Partridge-Schwenke monomer energy surface. The DFT calculations are done using the centroid positions. Various bead thermostats can be employed to speed up the sampling of the quantum ensemble. The method bears some resemblance to multiple timestep algorithms and other schemes used to speed up PIMD with classical force fields. We show that our method correctly captures some of the key effects of nuclear quantum motion on both the structure and dynamics of water. We acknowledge support from DOE Award No. DE-FG02-09ER16052 (D.E.) and DOE Early Career Award No. DE-SC0003871 (M.V.F.S.).
Peskin, Michael E
2003-02-13
In upper-division undergraduate physics courses, it is desirable to give numerical problem-solving exercises integrated naturally into weekly problem sets. I explain a method for doing this that makes use of the built-in class structure of the Java programming language. I also supply a Java class library that can assist instructors in writing programs of this type.
NUMERICAL APPROXIMATION OF SEMI-INTEGRALS AND SEMIDERIVATIVES BY PRODUCT QUADRATURE RULES
This paper is concerned with the numerical calculation of the semi-integral and semiderivative of a function f, whose values f(x_j) are known on a discrete set of abscissas 0 = x_1 < x_2 < ... < x_n. A family of product quadrature rules is developed to approximate the semi-int...
Some numerical methods for integrating systems of first-order ordinary differential equations
NASA Technical Reports Server (NTRS)
Clark, N. W.
1969-01-01
Report on numerical methods of integration includes the extrapolation methods of Bulirsch-Stoer and Neville. A comparison is made with the Runge-Kutta and Adams-Moulton methods, and circumstances are discussed under which the extrapolation method may be preferred.
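The extrapolation idea underlying the Bulirsch-Stoer approach can be sketched with Gragg's modified midpoint rule, whose global error expands in even powers of the step size, so one Richardson combination of a coarse and a fine result gains two orders of accuracy. This is an illustrative sketch of the principle, not the code compared in the report:

```python
import math

def gragg(f, y0, t0, t1, n):
    """Gragg's modified midpoint method with n substeps on [t0, t1].
    Its global error has an expansion in even powers of h, which makes it
    a good base method for Richardson/Bulirsch-Stoer extrapolation."""
    h = (t1 - t0) / n
    z_prev, z = y0, y0 + h * f(t0, y0)
    for m in range(1, n):
        z_prev, z = z, z_prev + 2 * h * f(t0 + m * h, z)
    # Gragg's smoothing step at t1
    return 0.5 * (z + z_prev + h * f(t1, z))

f = lambda t, y: y                        # y' = y, exact solution e^t
coarse = gragg(f, 1.0, 0.0, 1.0, 8)       # O(h^2) accurate
fine = gragg(f, 1.0, 0.0, 1.0, 16)        # half the step size
extrap = (4 * fine - coarse) / 3          # one Richardson step: O(h^4)

err_coarse = abs(coarse - math.e)
err_fine = abs(fine - math.e)
err_extrap = abs(extrap - math.e)
```

Repeating this combination over a tableau of step sizes (Neville's algorithm) gives the full extrapolation method.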
On the stability of numerical integration routines for ordinary differential equations.
NASA Technical Reports Server (NTRS)
Glover, K.; Willems, J. C.
1973-01-01
Numerical integration methods for the solution of initial value problems for ordinary vector differential equations may be modelled as discrete time feedback systems. The stability criteria discovered in modern control theory are applied to these systems and criteria involving the routine, the step size and the differential equation are derived. Linear multistep, Runge-Kutta, and predictor-corrector methods are all investigated.
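The feedback-system viewpoint is easiest to see for explicit Euler on the scalar test equation y' = λy, where the discrete system y_{n+1} = (1 + hλ)y_n is stable exactly when |1 + hλ| ≤ 1. A minimal sketch (the step sizes and λ below are illustrative choices, not from the report):

```python
# Explicit Euler applied to y' = lam*y is the discrete feedback system
# y_{n+1} = (1 + h*lam) * y_n, stable iff |1 + h*lam| <= 1.
def euler_orbit(lam, h, steps, y0=1.0):
    y = y0
    for _ in range(steps):
        y = y + h * lam * y
    return y

lam = -1000.0                              # a stiff decay mode
stable = euler_orbit(lam, 0.0019, 200)     # |1 - 1.9| = 0.9 -> decays
unstable = euler_orbit(lam, 0.0021, 200)   # |1 - 2.1| = 1.1 -> blows up
```

The exact solution decays in both cases; only the step size, via the feedback gain 1 + hλ, decides whether the numerical routine does too.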
NASA Astrophysics Data System (ADS)
Zhang, Na; Yao, Jun; Huang, Zhaoqin; Wang, Yueying
2013-06-01
Numerical simulation in naturally fractured media is challenging because of the coexistence of porous media and fractures on multiple scales that need to be coupled. We present a new approach to reservoir simulation that gives accurate resolution of both large-scale and fine-scale flow patterns. Multiscale methods are suitable for this type of modeling because they capture the large-scale behavior of the solution without resolving all the small features. Dual-porosity models, in view of their strength and simplicity, are mainly used for the sugar-cube representation of fractured media. In such a representation, the transfer function between the fracture and the matrix block can be readily calculated for water-wet media. For a mixed-wet system, the evaluation of the transfer function becomes complicated due to the effect of gravity. In this work, we use a multiscale finite element method (MsFEM) for two-phase flow in fractured media using the discrete-fracture model. By combining MsFEM with the discrete-fracture model, we aim towards a numerical scheme that facilitates fractured reservoir simulation without upscaling. MsFEM uses a standard Darcy model to approximate the pressure and saturation on a coarse grid, whereas fine-scale effects are captured through basis functions constructed by solving local flow problems with the discrete-fracture model. The accuracy and robustness of MsFEM are shown through several examples. In the first example, we consider several small fractures in a matrix and compare the results with those obtained by the finite element method. Then, we apply the MsFEM to more complex models. The results indicate that the MsFEM is a promising path toward direct simulation of highly resolved geomodels.
Safouhi, Hassan . E-mail: hassan.safouhi@ualberta.ca; Berlu, Lilian
2006-07-20
Molecular overlap-like quantum similarity measurements imply the evaluation of overlap integrals of two molecular electronic densities related by the Dirac delta function. When the electronic densities are expanded over atomic orbitals using the usual LCAO-MO approach (linear combination of atomic orbitals), overlap-like quantum similarity integrals can be expressed in terms of four-center overlap integrals. It is shown that by introducing the Fourier transform of the Dirac delta function in the integrals and using the Fourier transform approach combined with the so-called B functions, one can obtain analytic expressions of the integrals under consideration. These analytic expressions involve highly oscillatory semi-infinite spherical Bessel functions, which are the principal source of severe numerical and computational difficulties. In this work, we present a highly efficient algorithm for a fast and accurate numerical evaluation of these multicenter overlap-like quantum similarity integrals over Slater type functions. This algorithm is based on the SD-bar approach due to Safouhi. Recurrence formulae are used for a better control of the degree of accuracy and for a better stability of the algorithm. The numerical result section shows the efficiency of our algorithm, compared with the alternatives using the one-center two-range expansion method, which led to very complicated analytic expressions, the epsilon algorithm and the nonlinear D-bar transformation.
NASA Astrophysics Data System (ADS)
Evans, W. A. B.; Torre, A.
2012-11-01
The paper focusses on the advantages of using high-order Gauss-Legendre quadratures for the precise evaluation of integrals with both smooth and rapidly changing integrands. Aspects of their precision are analysed in the light of Gauss' error formula. Some "test examples" are considered and evaluated in multiple precision to ≈ 200 significant decimal digits with David Bailey's multiprecision package to eliminate truncation/rounding errors. The increase of precision on doubling the number of subintervals is analysed, the relevant quadrature attribute being the precision increment. In order to exemplify the advantages that high-order quadratures afford, the technique is then used to evaluate several plots of the Rayleigh-Sommerfeld diffraction integral for axi-symmetric source fields defined on a planar aperture. A comparison of the high-order quadrature method against various FFT-based methods is finally given.
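The gain from raising the quadrature order is easy to reproduce in double precision (the paper works in multiprecision; this sketch uses an illustrative oscillatory integrand, not the paper's diffraction integrals):

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """n-point Gauss-Legendre quadrature of f over [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (b - a) * x + 0.5 * (b + a)   # map nodes from [-1, 1] to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))

# a rapidly changing integrand on [0, 1], with known exact value
f = lambda t: np.cos(50.0 * t)
exact = np.sin(50.0) / 50.0
err8 = abs(gauss_legendre(f, 0.0, 1.0, 8) - exact)
err64 = abs(gauss_legendre(f, 0.0, 1.0, 64) - exact)
```

With 8 nodes the rule cannot resolve the ~8 oscillations, while 64 nodes reach near machine precision; in multiprecision the same trend continues to far more digits.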
High-performance Integrated numerical methods for Two-phase Flow in Heterogeneous Porous Media
NASA Astrophysics Data System (ADS)
Chueh, Chih-Che; Djilali, Ned; Bangerth, Wolfgang
2010-11-01
Modelling of two-phase flow in heterogeneous porous media plays a decisive role in a variety of areas. However, how to efficiently and accurately solve the governing equations of flow in porous media remains a challenge. In order to ensure an accurate representation of the flow field and simultaneously increase the computational efficiency, we incorporate a number of state-of-the-art techniques into a numerical framework on which more complicated models in the field of multi-phase flow in porous media can be based. This numerical framework consists of an h-adaptive refinement method, an entropy-based artificial diffusive term, a new adaptive operator splitting method, and efficient preconditioners. In particular, we propose a new efficient adaptive operator splitting that avoids solving a time-consuming pressure-velocity part at every saturation time step and, most importantly, we also provide a theoretical numerical analysis as well as a proof. A few benchmarks will be demonstrated in the presentation.
NASA Technical Reports Server (NTRS)
Rosenbaum, J. S.
1976-01-01
If a system of ordinary differential equations represents a property conserving system that can be expressed linearly (e.g., conservation of mass), it is then desirable that the numerical integration method used conserve the same quantity. It is shown that both linear multistep methods and Runge-Kutta methods are 'conservative' and that Newton-type methods used to solve the implicit equations preserve the inherent conservation of the numerical method. It is further shown that a method used by several authors is not conservative.
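The claim that Runge-Kutta methods are conservative for linear invariants can be checked directly: for A → B kinetics, every RK stage adds exactly opposite increments to [A] and [B], so the total mass is preserved to rounding error. A minimal sketch with classical RK4 (the reaction and rate are illustrative, not from the report):

```python
def rk4_step(f, y, h):
    """One classical RK4 step for y' = f(y), y a list of floats."""
    k1 = f(y)
    k2 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k1)])
    k3 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k2)])
    k4 = f([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# A -> B with rate k: d[A]/dt = -k[A], d[B]/dt = +k[A].
# The total mass [A] + [B] is a linear first integral.
k = 2.5
f = lambda y: [-k * y[0], k * y[0]]
y = [1.0, 0.0]
for _ in range(1000):
    y = rk4_step(f, y, 0.01)
mass = y[0] + y[1]
```

After 1000 steps the individual concentrations carry the usual truncation error, but their sum stays at 1 to near machine precision.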
NASA Astrophysics Data System (ADS)
Tirupathi, S.; Schiemenz, A. R.; Liang, Y.; Parmentier, E.; Hesthaven, J.
2013-12-01
The style and mode of melt migration in the mantle are important to the interpretation of basalts erupted on the surface. Both grain-scale diffuse porous flow and channelized melt migration have been proposed. To better understand the mechanisms and consequences of melt migration in a heterogeneous mantle, we have undertaken a numerical study of reactive dissolution in an upwelling and viscously deformable mantle where the solubility of pyroxene increases upwards. Our setup is similar to that described in [1], except we use a larger domain size in 2D and 3D and a new numerical method. To enable efficient simulations in 3D through parallel computing, we developed a high-order accurate numerical method for the magma dynamics problem using discontinuous Galerkin methods and constructed the problem using the numerical library deal.II [2]. Linear stability analyses of the reactive dissolution problem reveal three dynamically distinct regimes [3], and the simulations reported in this study were run in the stable regime and in the unstable wave regime, where small perturbations in porosity grow periodically. The wave regime is more relevant to melt migration beneath the mid-ocean ridges but computationally more challenging. Extending the 2D simulations in the stable regime in [1] to 3D using various combinations of sustained perturbations in porosity at the base of the upwelling column (which may result from a veined mantle), we show that the geometry and distribution of dunite channels and high-porosity melt channels are highly correlated with the inflow perturbation through superposition. Strong nonlinear interactions among compaction, dissolution, and upwelling give rise to porosity waves and high-porosity melt channels in the wave regime. These compaction-dissolution waves have well organized but time-dependent structures in the lower part of the simulation domain. High-porosity melt channels nucleate along nodal lines of the porosity waves, growing downwards. The wavelength scales
A comparison of the efficiency of numerical methods for integrating chemical kinetic rate equations
NASA Technical Reports Server (NTRS)
Radhakrishnan, K.
1984-01-01
A comparison of the efficiency of several algorithms recently developed for the efficient numerical integration of stiff ordinary differential equations is presented. The methods examined include two general-purpose codes EPISODE and LSODE and three codes (CHEMEQ, CREK1D, and GCKP84) developed specifically to integrate chemical kinetic rate equations. The codes are applied to two test problems drawn from combustion kinetics. The comparisons show that LSODE is the fastest code currently available for the integration of combustion kinetic rate equations. An important finding is that an iterative solution of the algebraic energy conservation equation to compute the temperature can be more efficient than evaluating the temperature by integrating its time-derivative.
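The stiff-ODE machinery that LSODE pioneered is available today in SciPy's `solve_ivp` via the `LSODA` method, which switches automatically between nonstiff and stiff (BDF) integration. A minimal sketch on a contrived stiff problem with a known solution, not the report's combustion test cases:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Contrived stiff test problem with exact solution y(t) = sin(t):
# the fast mode relaxes at rate 1000 onto the slow manifold.
def rhs(t, y):
    return -1000.0 * (y - np.sin(t)) + np.cos(t)

sol = solve_ivp(rhs, (0.0, 1.0), [0.0], method="LSODA",
                rtol=1e-8, atol=1e-10)
err = abs(sol.y[0, -1] - np.sin(1.0))
```

An explicit method would need step sizes of order 1/1000 throughout; the stiff solver takes large steps once the fast transient has died away, which is precisely why LSODE-type codes win on kinetics problems.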
Comparison of symbolic and numerical integration methods for an assumed-stress hybrid shell element
NASA Technical Reports Server (NTRS)
Rengarajan, Govind; Knight, Norman F., Jr.; Aminpour, Mohammad A.
1993-01-01
Hybrid shell elements have long been regarded with reserve by the commercial finite element developers despite the high degree of reliability and accuracy associated with such formulations. The fundamental reason is the inherent higher computational cost of the hybrid approach as compared to the displacement-based formulations. However, a noteworthy factor in favor of hybrid elements is that numerical integration to generate element matrices can be entirely avoided by the use of symbolic integration. In this paper, the use of the symbolic computational approach is presented for an assumed-stress hybrid shell element with drilling degrees of freedom and the significant time savings achieved is demonstrated through an example.
NLOS UV channel modeling using numerical integration and an approximate closed-form path loss model
NASA Astrophysics Data System (ADS)
Gupta, Ankit; Noshad, Mohammad; Brandt-Pearce, Maïté
2012-10-01
In this paper we propose a simulation method using numerical integration, and develop a closed-form link loss model for physical layer channel characterization for non-line of sight (NLOS) ultraviolet (UV) communication systems. The impulse response of the channel is calculated by assuming both uniform and Gaussian profiles for transmitted beams and different geometries. The results are compared with previously published results. The accuracy of the integration approach is compared to the Monte Carlo simulation. Then the path loss using the simulation method and the suggested closed-form expression are presented for different link geometries. The accuracies are evaluated and compared to the results obtained using other methods.
Numerical evaluation of two-center integrals over Slater type orbitals
NASA Astrophysics Data System (ADS)
Kurt, S. A.; Yükçü, N.
2016-03-01
Slater Type Orbitals (STOs), which are one of the types of exponential type orbitals (ETOs), are usually used as basis functions in multicenter molecular integrals to better understand the physical and chemical properties of matter. In this work, we develop algorithms for two-center overlap and two-center two-electron hybrid and Coulomb integrals, which are calculated with the help of the translation method for STOs and some auxiliary functions introduced by V. Magnasco's group. We use the Mathematica programming language to produce algorithms for these calculations. Numerical results for some quantum numbers are presented in tables. Finally, we compare our numerical results with known literature results; other details of the evaluation method are also discussed.
Extended RKN-type methods for numerical integration of perturbed oscillators
NASA Astrophysics Data System (ADS)
Yang, Hongli; Wu, Xinyuan; You, Xiong; Fang, Yonglei
2009-10-01
In this paper, extended Runge-Kutta-Nyström-type methods for the numerical integration of perturbed oscillators with low frequencies are presented, which inherit the framework of RKN methods and make full use of the special feature of the true flows for both the internal stages and the updates. Following the approach of J. Butcher, E. Hairer and G. Wanner, we develop a new kind of tree set to derive order conditions for the extended Runge-Kutta-Nyström-type methods. The numerical stability and phase properties of the new methods are analyzed. Numerical experiments are presented to show the applicability and efficiency of our new methods in comparison with some well-known high-quality methods proposed in the scientific literature.
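For context, the classical baseline such extended RKN methods are measured against can be sketched with the Störmer-Verlet scheme on a weakly perturbed oscillator x'' = -x + εx² (an illustrative test problem, not one from the paper, and not the authors' extended methods):

```python
import math

def verlet(accel, x, v, h, steps):
    """Stoermer-Verlet for the special second-order ODE x'' = accel(x):
    a simple second-order baseline when the first derivative is absent
    from the right-hand side."""
    a = accel(x)
    for _ in range(steps):
        v_half = v + 0.5 * h * a
        x = x + h * v_half
        a = accel(x)
        v = v_half + 0.5 * h * a
    return x, v

eps = 1e-3
accel = lambda x: -x + eps * x**2            # weakly perturbed oscillator
x0, v0 = 1.0, 0.0
x, v = verlet(accel, x0, v0, 0.01, int(200 * math.pi / 0.01))

# energy E = v^2/2 + x^2/2 - eps*x^3/3 should stay near its initial value
E0 = 0.5 * v0**2 + 0.5 * x0**2 - eps * x0**3 / 3
E1 = 0.5 * v**2 + 0.5 * x**2 - eps * x**3 / 3
```

Over ~100 periods the energy error stays bounded at O(h²); extended RKN methods aim to do much better per function evaluation by building the unperturbed flow into the stages.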
Two step hybrid methods of 7th and 8th order for the numerical integration of second order IVPs
NASA Astrophysics Data System (ADS)
Kalogiratou, Z.; Monovasilis, Th.; Simos, T. E.
2016-06-01
In this work we consider the numerical integration of second order ODEs where the first derivative is missing. We construct two step hybrid methods with six and seven stages and seventh and eighth algebraic order. We apply the new methods on the numerical integration of several test problems.
Some remarks on the numerical computation of integrals on an unbounded interval
NASA Astrophysics Data System (ADS)
Capobianco, M.; Criscuolo, G.
2007-08-01
An account of the error and the convergence theory is given for Gauss-Laguerre and Gauss-Radau-Laguerre quadrature formulae. We also develop truncated models of the original Gauss rules to compute integrals extended over the positive real axis. Numerical examples confirming the theoretical results are given, comparing these rules among themselves and with different quadrature formulae proposed by other authors (Evans, Int. J. Comput. Math. 82:721-730, 2005; Gautschi, BIT 31:438-446, 1991).
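The basic Gauss-Laguerre rule the paper builds on integrates against the weight e^{-x} on [0, ∞). A minimal sketch with NumPy's node generator (illustrative integrands, not the paper's examples):

```python
import numpy as np

# Gauss-Laguerre: sum w_i f(x_i) ~ \int_0^inf e^{-x} f(x) dx
x, w = np.polynomial.laguerre.laggauss(10)

# \int_0^inf e^{-x} x^2 dx = Gamma(3) = 2   (polynomial: exact for n >= 2)
approx_poly = np.sum(w * x**2)

# \int_0^inf e^{-x} sin(x) dx = 1/2         (not a polynomial: approximate)
approx_sin = np.sum(w * np.sin(x))
```

An n-point rule is exact for polynomials up to degree 2n-1; for non-polynomial integrands the error depends on how well the integrand is approximated in the Laguerre basis, which is where the paper's convergence theory and truncated variants come in.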
The Use of Phase-Lag Derivatives in the Numerical Integration of ODEs with Oscillating Solutions
Anastassi, Z. A.; Vlachos, D. S.; Simos, T. E.
2008-09-01
In this paper we consider fitting the coefficients of a numerical method not only to nullify the phase-lag, but also its derivatives. We show that the method gains efficiency with each derivative of the phase-lag nullified for various problems with oscillating solutions. The local truncation error analysis and the stability analysis of the methods show the importance of zero phase-lag derivatives when integrating oscillatory differential equations.
Analytical Solutions Using Integral Formulations and Their Coupling with Numerical Approaches.
Morel-Seytoux, Hubert J
2015-01-01
Analytical and numerical approaches have their own distinct domains of merit and application. Unfortunately there has been a tendency to use either one or the other even when their domains overlap. Yet there is definite advantage in combining the two approaches. Being relatively new, this emerging technique of combining the approaches is, at this stage, more of an art than a science. In this article we suggest approaches for the combination through simple examples. We also suggest that the integral formulation of the analytical problems may have some advantages over the differential formulation. The differential formulation limits somewhat the range of linear system descriptions that can be applied to a variety of practical problems. On the other hand the integral approach tends to focus attention on the overall integrated behavior and properties of the system rather than on minute details. This is particularly useful in the coupling with a numerical model, as in practice such a model also generally deals only with the integrated behavior of the system. The thesis of this article is illustrated with some simple stream-aquifer flow exchange examples. PMID:25213772
DE 102 - A numerically integrated ephemeris of the moon and planets spanning forty-four centuries
NASA Technical Reports Server (NTRS)
Newhall, X. X.; Standish, E. M.; Williams, J. G.
1983-01-01
It is pointed out that the 1960's were the turning point for the generation of lunar and planetary ephemerides. All previous measurements of the positions of solar system bodies were optical angular measurements. New technological improvements leading to immense changes in observational accuracy are related to developments concerning radar, Viking landers on Mars, and laser ranges to lunar corner cube retroreflectors. Suitable numerical integration techniques and more comprehensive physical models were developed to match the accuracy of the modern data types. The present investigation is concerned with the first integrated ephemeris, DE 102, which covers the entire span of the historical astronomical observations of usable accuracy which are known. The fit is made to modern data. The integration spans the time period from 1411 BC to 3002 AD.
NASA Technical Reports Server (NTRS)
Pratt, D. T.; Radhakrishnan, K.
1986-01-01
The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODE's) and its detection, choice of appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponential-fitted integration algorithms are inherently more accurate for kinetics modeling than classical, polynomial-interpolant methods for the same computational work. But current codes using the exponential-fitted method lack the sophisticated stepsize-control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.
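The case for exponential-fitted integration can be made on the scalar test equation y' = λy, which such schemes solve exactly at any step size, while a polynomial-interpolant method like explicit Euler is confined to tiny steps by stability. An illustrative sketch (the values of λ and h are assumptions for demonstration, not from the report):

```python
import math

lam, h, steps = -1000.0, 0.01, 50   # stiff decay; h is 10x beyond Euler's stability limit
y_exp, y_euler = 1.0, 1.0
for _ in range(steps):
    y_exp *= math.exp(lam * h)      # exponential-fitted step: exact for y' = lam*y
    y_euler += h * lam * y_euler    # explicit (polynomial-based) Euler step

exact = math.exp(lam * h * steps)   # e^{-500}
rel_err = abs(y_exp - exact) / exact
```

The exponential step tracks the decay to rounding error at a step size where Euler oscillates and diverges; what the report notes is missing from such codes is the step-size-control logic that makes EPISODE and LSODE robust black boxes.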
NASA Astrophysics Data System (ADS)
Macomber, B.; Woollands, R. M.; Probe, A.; Younes, A.; Bai, X.; Junkins, J.
2013-09-01
Modified Chebyshev Picard Iteration (MCPI) is an iterative numerical method for approximating solutions of linear or non-linear Ordinary Differential Equations (ODEs) to obtain time histories of system state trajectories. Unlike other step-by-step differential equation solvers, the Runge-Kutta family of numerical integrators for example, MCPI approximates long arcs of the state trajectory with an iterative path approximation approach, and is ideally suited to parallel computation. Orthogonal Chebyshev Polynomials are used as basis functions during each path iteration; the integrations of the Picard iteration are then done analytically. Due to the orthogonality of the Chebyshev basis functions, the least square approximations are computed without matrix inversion; the coefficients are computed robustly from discrete inner products. As a consequence of discrete sampling and weighting adopted for the inner product definition, Runge phenomena errors are minimized near the ends of the approximation intervals. The MCPI algorithm utilizes a vector-matrix framework for computational efficiency. Additionally, all Chebyshev coefficients and integrand function evaluations are independent, meaning they can be simultaneously computed in parallel for further decreased computational cost. Over an order of magnitude speedup from traditional methods is achieved in serial processing, and an additional order of magnitude is achievable in parallel architectures. This paper presents a new MCPI library, a modular toolset designed to allow MCPI to be easily applied to a wide variety of ODE systems. Library users will not have to concern themselves with the underlying mathematics behind the MCPI method. Inputs are the boundary conditions of the dynamical system, the integrand function governing system behavior, and the desired time interval of integration, and the output is a time history of the system states over the interval of interest. Examples from the field of astrodynamics are
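The core MCPI idea, iterating Picard's integral equation on Chebyshev nodes and integrating the fitted polynomial analytically, can be sketched in a few lines. This is a minimal scalar illustration, not the vector-matrix library the paper describes, and the node count and iteration cap are assumed values:

```python
import math
import numpy as np
from numpy.polynomial import chebyshev as C

def picard_chebyshev(f, y0, t0, t1, n=32, iters=40):
    """Picard iteration sampled at Chebyshev-Lobatto nodes; each sweep fits
    a Chebyshev series to the integrand and integrates it analytically."""
    k = np.arange(n + 1)
    x = -np.cos(np.pi * k / n)                # Lobatto nodes on [-1, 1], ascending
    t = t0 + (t1 - t0) * (x + 1.0) / 2.0      # mapped to [t0, t1]
    y = np.full_like(t, float(y0))            # initial path guess: constant
    for _ in range(iters):
        c = C.Chebyshev.fit(t, f(t, y), deg=n, domain=[t0, t1])
        y = y0 + c.integ(lbnd=t0)(t)          # y0 + \int_{t0}^{t} f(tau, y(tau)) dtau
    return t, y

# y' = y, y(0) = 1 over [0, 1]: the iterates converge to e^t on the whole arc
t, y = picard_chebyshev(lambda t, y: y, 1.0, 0.0, 1.0)
err = abs(y[-1] - math.e)
```

Each sweep updates the whole arc at once, which is what makes the real method so amenable to parallel evaluation of the integrand samples.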
NASA Astrophysics Data System (ADS)
Min, Xiaoyi
This thesis first presents the study of the interaction of electromagnetic waves with three-dimensional heterogeneous, dielectric, magnetic, and lossy bodies by surface integral equation modeling. Based on the equivalence principle, a set of coupled surface integral equations is formulated and then solved numerically by the method of moments. Triangular elements are used to model the interfaces of the heterogeneous body, and vector basis functions are defined to expand the unknown current in the formulation. The validity of this formulation is verified by applying it to concentric spheres for which an exact solution exists. The potential applications of this formulation to a partially coated sphere and a homogeneous human body are discussed. Next, this thesis also introduces an efficient new set of integral equations for treating the scattering problem of a perfectly conducting body coated with a thin magnetically lossy layer. These electric field integral equations and magnetic field integral equations are numerically solved by the method of moments (MoM). To validate the derived integral equations, an alternative method to solve the scattering problem of an infinite circular cylinder coated with a thin magnetic lossy layer has also been developed, based on the eigenmode expansion. Results for the radar cross section and current densities via the MoM and the eigenmode expansion method are compared. The agreement is excellent. The finite difference time domain method is subsequently implemented to solve a metallic object coated with a magnetic thin layer and numerical results are compared with that by the MoM. Finally, this thesis presents an application of the finite-difference time-domain approach to the problem of electromagnetic receiving and scattering by a cavity-backed antenna situated on an infinite conducting plane. This application involves modifications of Yee's model, which applies the difference approximations of field derivatives to differential
CALL FOR PAPERS: Special Issue on `Geometric Numerical Integration of Differential Equations'
NASA Astrophysics Data System (ADS)
Quispel, G. R. W.; McLachlan, R. I.
2005-02-01
This is a call for contributions to a special issue of Journal of Physics A: Mathematical and General entitled `Geometric Numerical Integration of Differential Equations'. This issue should be a repository for high quality original work. We are interested in having the topic interpreted broadly, that is, to include contributions dealing with symplectic or multisymplectic integration; volume-preserving integration; symmetry-preserving integration; integrators that preserve first integrals, Lyapunov functions, or dissipation; exponential integrators; integrators for highly oscillatory systems; Lie-group integrators, etc. Papers on geometric integration of both ODEs and PDEs will be considered, as well as application to molecular-scale integration, celestial mechanics, particle accelerators, fluid flows, population models, epidemiological models and/or any other areas of science. We believe that this issue is timely, and hope that it will stimulate further development of this new and exciting field. The Editorial Board has invited G R W Quispel and R I McLachlan to serve as Guest Editors for the special issue. Their criteria for acceptance of contributions are the following: • The subject of the paper should relate to geometric numerical integration in the sense described above. • Contributions will be refereed and processed according to the usual procedure of the journal. • Papers should be original; reviews of a work published elsewhere will not be accepted. The guidelines for the preparation of contributions are as follows: • The DEADLINE for submission of contributions is 1 September 2005. This deadline will allow the special issue to appear in late 2005 or early 2006. • There is a strict page limit of 16 printed pages (approximately 9600 words) per contribution. For papers exceeding this limit, the Guest Editors reserve the right to request a reduction in length. Further advice on publishing your work in Journal of Physics A: Mathematical and General
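The flavor of geometric integration the call describes is easy to demonstrate: on the harmonic oscillator H = (p² + q²)/2, explicit Euler injects energy every step, while the symplectic Euler variant keeps the energy error bounded for all time. A minimal illustrative sketch:

```python
# Harmonic oscillator H = (p^2 + q^2)/2: explicit Euler vs symplectic Euler.
def explicit_euler(q, p, h, steps):
    for _ in range(steps):
        q, p = q + h * p, p - h * q    # both updates use the old state
    return q, p

def symplectic_euler(q, p, h, steps):
    for _ in range(steps):
        p = p - h * q                  # kick with the *old* position
        q = q + h * p                  # drift with the *new* momentum
    return q, p

h, steps = 0.1, 5000
qe, pe = explicit_euler(1.0, 0.0, h, steps)
qs, ps = symplectic_euler(1.0, 0.0, h, steps)
E_euler = 0.5 * (qe**2 + pe**2)        # grows by (1 + h^2) per step
E_sympl = 0.5 * (qs**2 + ps**2)        # oscillates near the true value 0.5
```

The only difference is the order of the two updates, yet one method preserves a nearby "modified" Hamiltonian exactly, which is the structure-preservation theme of the special issue.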
Sensitivity of inelastic response to numerical integration of strain energy. [for cantilever beam]
NASA Technical Reports Server (NTRS)
Kamat, M. P.
1976-01-01
The exact solution to the quasi-static, inelastic response of a cantilever beam of rectangular cross section subjected to a bending moment at the tip is obtained. The material of the beam is assumed to be linearly elastic-linearly strain-hardening. This solution is then compared with three different numerical solutions of the same problem obtained by minimizing the total potential energy using Gaussian quadratures of two different orders and a Newton-Cotes scheme for integrating the strain energy of deformation. Significant differences between the exact dissipative strain energy and its numerical counterpart are emphasized. The consequences of this for the nonlinear transient response of a beam with solid cross section and of a thin-walled beam on elastic supports under impulsive loads are examined.
NASA Astrophysics Data System (ADS)
Rein, Hanno; Spiegel, David S.
2015-01-01
We present IAS15, a 15th-order integrator to simulate gravitational dynamics. The integrator is based on a Gauß-Radau quadrature and can handle conservative as well as non-conservative forces. We develop a step-size control that can automatically choose an optimal timestep. The algorithm can handle close encounters and high-eccentricity orbits. The systematic errors are kept well below machine precision, and long-term orbit integrations over 10⁹ orbits show that IAS15 is optimal in the sense that it follows Brouwer's law, i.e. the energy error behaves like a random walk. Our tests show that IAS15 is superior to a mixed-variable symplectic integrator and other popular integrators, including high-order ones, in both speed and accuracy. In fact, IAS15 preserves the symplecticity of Hamiltonian systems better than the commonly used nominally symplectic integrators to which we compared it. We provide an open-source implementation of IAS15. The package comes with several easy-to-extend examples involving resonant planetary systems, Kozai-Lidov cycles, close encounters, radiation pressure, quadrupole moment and generic damping functions that can, among other things, be used to simulate planet-disc interactions. Other non-conservative forces can be added easily.
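A tiny illustration of the Gauss-Radau quadrature IAS15 is built on (this is a generic 3-point Radau rule, not the 8-node spacing IAS15 itself uses): with one node pinned at an endpoint, an n-point Radau rule integrates polynomials up to degree 2n − 2 exactly.

```python
import math

# 3-point Radau IIA rule on [0, 1], right endpoint included as a node.
# Exact for polynomials of degree <= 2n - 2 = 4.
s6 = math.sqrt(6.0)
nodes = [(4 - s6) / 10, (4 + s6) / 10, 1.0]
weights = [(16 - s6) / 36, (16 + s6) / 36, 1.0 / 9.0]

def radau(f):
    return sum(w * f(x) for x, w in zip(nodes, weights))

val = radau(lambda x: x**4)   # exact integral of x^4 on [0, 1] is 1/5
```

Three function evaluations reproduce a degree-4 integral exactly, which is why Radau-based integrators reach very high order with few substeps.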
Numerical and analytical tests of quasi-integrability in modified sine-Gordon models
NASA Astrophysics Data System (ADS)
Ferreira, L. A.; Zakrzewski, Wojtek J.
2014-01-01
Following our attempts to define quasi-integrability, in which we related this concept to a particular symmetry of the two-soliton function, we check this condition in three classes of modified sine-Gordon models in (1 + 1) dimensions. We find that the numerical results seen in various scatterings of two solitons and in the time evolution of breather-like structures support our ideas about the symmetry of the field configurations and its effects on the anomalies of the conservation laws of the charges.
Time transformations and Cowell's method. [for numerical integration of satellite motion equations]
NASA Technical Reports Server (NTRS)
Velez, C. E.; Hilinski, S.
1978-01-01
The precise numerical integration of Cowell's equations of satellite motion is frequently performed with an independent variable s defined by an equation of the form dt = c r^n ds, where t represents time, r the radial distance from the center of attraction, c is a constant, and n is a parameter. This has been primarily motivated by the 'uniformizing' effects of such a transformation resulting in desirable 'analytic' stepsize control for elliptical orbits. This report discusses the 'proper' choice of the parameter n defining the independent variable s for various types of orbits and perturbation models, and develops a criterion for its selection.
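The 'analytic' stepsize control can be sketched in a few lines (names are illustrative): stepping uniformly in s with dt = c rⁿ ds makes the physical timestep shrink near perigee, where the motion is fast, and grow near apogee.

```python
# Sundman-type time transformation dt = c * r**n * ds:
# a fixed step ds in the new independent variable s corresponds
# to a physical timestep dt proportional to r**n.
def physical_step(r, c=1.0, n=1.0, ds=0.01):
    return c * r**n * ds

r_perigee, r_apogee = 0.5, 1.5   # example radii on an elliptical orbit
dt_p = physical_step(r_perigee)  # small dt where the satellite moves fast
dt_a = physical_step(r_apogee)   # large dt where it moves slowly
```

For n = 1 the timestep here is three times larger at apogee than at perigee, concentrating effort where the dynamics demand it.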
NASA Technical Reports Server (NTRS)
Majumdar, Alok K.; LeClair, Andre C.; Hedayat, Ali
2016-01-01
This paper presents a numerical model of pressurization of a cryogenic propellant tank for the Integrated Vehicle Fluid (IVF) system using the Generalized Fluid System Simulation Program (GFSSP). The IVF propulsion system, being developed by United Launch Alliance, uses boiloff propellants to drive thrusters for the reaction control system as well as to run internal combustion engines to develop power and drive compressors to pressurize propellant tanks. NASA Marshall Space Flight Center (MSFC) has been running tests to verify the functioning of the IVF system using a flight tank. GFSSP, a finite volume based flow network analysis software developed at MSFC, has been used to develop an integrated model of the tank and the pressurization system. This paper presents an iterative algorithm for converging the interface boundary conditions between different component models of a large system model. The model results have been compared with test data.
Three-dimensional numerical modeling of photonic integration with dielectric-loaded SPP waveguides
NASA Astrophysics Data System (ADS)
Krasavin, A. V.; Zayats, A. V.
2008-07-01
Using full three-dimensional numerical modeling, we demonstrate highly efficient passive and active photonic circuit elements based on dielectric-loaded surface plasmon polariton waveguides (DLSPPWs). The highly confined surface plasmon polariton (SPP) mode, with its subwavelength cross section, allows a high level of integration of DLSPPW circuitry. We demonstrate very efficient guiding and routing of SPP signals with passive waveguide elements such as bends, splitters, and Bragg reflectors, which have a functional size of just a few microns at telecommunication wavelengths. By introducing gain in the dielectric, we have found the requirement for lossless waveguiding and estimated the performance of DLSPPW lossless and active elements. DLSPPW-based components have prospective implementations in photonic integrated chips, hybrid optical-electronic circuits, and lab-on-a-chip applications.
NASA Technical Reports Server (NTRS)
Radhakrishnan, K.
1984-01-01
The efficiency and accuracy of several algorithms recently developed for the efficient numerical integration of stiff ordinary differential equations are compared. The methods examined include two general-purpose codes, EPISODE and LSODE, and three codes (CHEMEQ, CREK1D, and GCKP84) developed specifically to integrate chemical kinetic rate equations. The codes are applied to two test problems drawn from combustion kinetics. The comparisons show that LSODE is the fastest code currently available for the integration of combustion kinetic rate equations. An important finding is that an iterative solution of the algebraic energy conservation equation to compute the temperature does not result in significant errors. In addition, this method is more efficient than evaluating the temperature by integrating its time derivative. Significant reductions in computational work are realized by updating the rate constants (k = A T^N exp(-E/RT)) only when the temperature change exceeds an amount delta T that is problem dependent. An approximate expression for the automatic evaluation of delta T is derived and is shown to result in increased efficiency.
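The rate-constant caching idea can be sketched as follows (class and parameter names are hypothetical, not from the paper): recompute the Arrhenius expression k = A T^N exp(−E/RT) only when the temperature has drifted more than a threshold delta T since the last evaluation.

```python
import math

R = 8.314  # gas constant, J/(mol K)

class RateCache:
    """Recompute k = A*T**N*exp(-E/(R*T)) only when |T - T_last| > delta_T."""
    def __init__(self, A, N, E, delta_T):
        self.A, self.N, self.E, self.delta_T = A, N, E, delta_T
        self.T_last = None
        self.k = None

    def rate(self, T):
        if self.T_last is None or abs(T - self.T_last) > self.delta_T:
            self.k = self.A * T**self.N * math.exp(-self.E / (R * T))
            self.T_last = T
        return self.k

rc = RateCache(A=1e10, N=0.5, E=5e4, delta_T=2.0)
k1 = rc.rate(1000.0)
k2 = rc.rate(1001.0)   # within delta_T: cached value reused, no exp() call
k3 = rc.rate(1005.0)   # exceeds delta_T: rate constant recomputed
```

Since the exponential is by far the costliest part of a rate evaluation, skipping it whenever T has barely moved is where the reported savings come from.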
NASA Astrophysics Data System (ADS)
Banaś, Krzysztof; Krużel, Filip; Bielański, Jan
2016-06-01
The paper presents investigations on the implementation and performance of the finite element numerical integration algorithm for first order approximations and three processor architectures, popular in scientific computing, classical CPU, Intel Xeon Phi and NVIDIA Kepler GPU. A unifying programming model and portable OpenCL implementation is considered for all architectures. Variations of the algorithm due to different problems solved and different element types are investigated, and several optimizations aimed at properly mapping the algorithm to the computer architectures are demonstrated. Performance models of execution are developed for different processors and tested in practical experiments. The results show varying levels of performance for different architectures, but indicate that the algorithm can be effectively ported to all of them. The general conclusion is that finite element numerical integration can achieve sufficient performance on different multi- and many-core architectures and should not become a performance bottleneck for finite element simulation codes. Specific observations lead to practical advice on how to optimize the kernels and on what performance can be expected for the tested architectures.
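The kernel being ported is, at its core, a Gauss-quadrature loop accumulating a local element matrix. A minimal sketch for a 1D linear element (the paper's kernels do the analogous loop for 2D/3D elements, in OpenCL rather than Python):

```python
# Two-point Gauss-Legendre rule on [-1, 1]: nodes +-1/sqrt(3), weights 1.
GAUSS_1D = [(-0.5773502691896257, 1.0), (0.5773502691896257, 1.0)]

def element_stiffness(x0, x1):
    """Local stiffness matrix of a 1D linear element via numerical integration."""
    h = x1 - x0
    dN = [-1.0 / h, 1.0 / h]          # shape-function derivatives (constant here)
    K = [[0.0, 0.0], [0.0, 0.0]]
    for xi, w in GAUSS_1D:            # quadrature loop = the hot kernel
        jac = h / 2.0                 # Jacobian of the map [-1,1] -> [x0,x1]
        for a in range(2):
            for b in range(2):
                K[a][b] += w * dN[a] * dN[b] * jac
    return K

K = element_stiffness(0.0, 0.5)       # expect [[2, -2], [-2, 2]] for h = 0.5
```

In production codes this per-element loop nest is executed millions of times, which is why its mapping onto CPU vector units, Xeon Phi cores, or GPU threads dominates performance.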
NASA Astrophysics Data System (ADS)
Ding, Ye; Zhu, Limin; Zhang, Xiaojian; Ding, Han
2012-09-01
As one of the bases of gradient-based optimization algorithms, sensitivity analysis is usually required to calculate the derivatives of the system response with respect to the machining parameters. The most widely used approaches for sensitivity analysis are based on time-consuming numerical methods, such as finite difference methods. This paper presents a semi-analytical method for calculation of the sensitivity of the stability boundary in milling. After transforming the delay-differential equation with time-periodic coefficients governing the dynamic milling process into the integral form, the Floquet transition matrix is constructed by using the numerical integration method. Then, the analytical expressions of derivatives of the Floquet transition matrix with respect to the machining parameters are obtained. Thereafter, the classical analytical expression of the sensitivity of matrix eigenvalues is employed to calculate the sensitivity of the stability lobe diagram. The two-degree-of-freedom milling example illustrates the accuracy and efficiency of the proposed method. Compared with the existing methods, the unique merit of the proposed method is that it can be used for analytically computing the sensitivity of the stability boundary in milling, without employing any finite difference methods. Therefore, the high accuracy and high efficiency are both achieved. The proposed method can serve as an effective tool for machining parameter optimization and uncertainty analysis in high-speed milling.
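The classical eigenvalue-sensitivity formula the paper applies to the Floquet transition matrix, dλ/dp = (yᵀ(∂A/∂p)x)/(yᵀx) with x, y the right and left eigenvectors, can be checked on a toy matrix with a closed-form eigenpair (the 2×2 example below is illustrative, not the milling model):

```python
import math

# A(p) = [[p, 1], [1, 2]] is symmetric, so left and right eigenvectors
# coincide; the largest eigenpair is known in closed form.
def eig_max(p):
    lam = 0.5 * ((p + 2) + math.sqrt((p - 2) ** 2 + 4))
    v = [1.0, lam - p]                 # from (p - lam) v1 + v2 = 0
    return lam, v

p = 1.0
lam, v = eig_max(p)
dA = [[1.0, 0.0], [0.0, 0.0]]          # dA/dp

# Analytical sensitivity: (v^T dA v) / (v^T v)
num = sum(v[i] * sum(dA[i][j] * v[j] for j in range(2)) for i in range(2))
den = v[0] * v[0] + v[1] * v[1]
dlam = num / den

# Finite-difference check -- the expensive route the paper avoids
eps = 1e-6
fd = (eig_max(p + eps)[0] - eig_max(p - eps)[0]) / (2 * eps)
```

The analytical derivative matches the finite-difference value while needing only one eigen-decomposition, which is the efficiency argument made in the abstract.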
Application of Numerical Integration and Data Fusion in Unit Vector Method
NASA Astrophysics Data System (ADS)
Zhang, J.
2012-01-01
The Unit Vector Method (UVM) is a family of orbit determination methods designed by the Purple Mountain Observatory (PMO) that has been applied extensively. It obtains the conditional equations for different kinds of data by projecting the basic equation onto different unit vectors, and it is well suited to weighting different kinds of data. High-precision data can thus play a major role in orbit determination, and the accuracy of orbit determination is improved markedly. The improved UVM (PUVM2) extended the UVM from initial orbit determination to orbit improvement, unifying the two dynamically; precision and efficiency were improved further. In this thesis, further research has been done based on the UVM. Firstly, improvements in observation methods and techniques have substantially raised the types and precision of the observational data, which in turn demands more precise orbit determination. Analytical perturbation theory cannot meet this requirement, so numerical integration of the perturbations has been introduced into the UVM. The accuracy of the dynamical model now matches the accuracy of the real data, and the condition equations of the UVM are modified accordingly; the accuracy of orbit determination is improved further. Secondly, a data fusion method has been introduced into the UVM. The convergence mechanism and the defect of the weighting strategy in the original UVM have been clarified. With this method the problem has been solved, the calculation of the approximate state transition matrix is simplified, and the weighting strategy has been improved for data of different dimension and different precision. Results of orbit determination with simulated and real data show that the work of this thesis is effective: (1) After numerical integration was introduced into the UVM, the accuracy of orbit determination improved markedly, and it suits the high-accuracy data of
Integrated Numerical Experiments (INEX) and the Free-Electron Laser Physical Process Code (FELPPC)
Thode, L.E.; Chan, K.C.D.; Schmitt, M.J.; McKee, J.; Ostic, J.; Elliott, C.J.; McVey, B.D.
1990-01-01
The strong coupling of subsystem elements, such as the accelerator, wiggler, and optics, greatly complicates the understanding and design of a free electron laser (FEL), even at the conceptual level. Given the requirements for high-performance FELs, the strong coupling between the laser subsystems must be included to obtain a realistic picture of the potential operational capability. To address the strong coupling character of the FEL the concept of an Integrated Numerical Experiment (INEX) was proposed. Unique features of the INEX approach are consistency and numerical equivalence of experimental diagnostics. The equivalent numerical diagnostics mitigates the major problem of misinterpretation that often occurs when theoretical and experimental data are compared. The INEX approach has been applied to a large number of accelerator and FEL experiments. Overall, the agreement between INEX and the experiments is very good. Despite the success of INEX, the approach is difficult to apply to trade-off and initial design studies because of the significant manpower and computational requirements. On the other hand, INEX provides a base from which realistic accelerator, wiggler, and optics models can be developed. The Free Electron Laser Physical Process Code (FELPPC) includes models developed from INEX, provides coupling between the subsystems models and incorporates application models relevant to a specific trade-off or design study.
Ianakiev, A.; Esat, I.I.
1995-09-01
Numerical solution of dynamical systems with widely varying motion characteristics, such as relatively slow motion coupled with high-frequency oscillations, as in flexible mechanisms, is likely to pose problems. In this paper the mathematical model of a flexible mechanism is solved using a mixed integration method that addresses the complexity of the coupled differential equations of rigid-body and elastic motion. The mixed integration method combines two integration methods (the Rosenbrock-Wanner and Hilber-Hughes-Taylor methods) so as to minimize the computational effort required to approximate the real system. The Hilber-Hughes-Taylor method incorporates numerical damping that selectively affects only the higher modes of vibration. The improvement in stability and accuracy of the solution due to the numerical damping is demonstrated via a numerical example representing a stiff system. The example system was selected to contain a physically important low frequency together with spurious high-frequency oscillations; the solution method filtered the high-frequency numerical oscillations from the response results. The Rosenbrock-Wanner integration technique is also presented, and it is shown that fine adjustment of its integration parameters can affect the degree of numerical damping. The mixed integration method, a combination of the two, was found to give the best performance and accuracy for stiff problems.
NASA Technical Reports Server (NTRS)
Yaros, S. F.; Carlson, J. R.; Chandrasekaran, B.
1986-01-01
An effort has been undertaken at the NASA Langley Research Center to assess the capabilities of available computational methods for use in propulsion integration design studies of transonic transport aircraft, particularly of pylon/nacelle combinations which exhibit essentially no interference drag. The three computer codes selected represent state-of-the-art computational methods for analyzing complex configurations at subsonic and transonic flight conditions. These are: EULER, a finite volume solution of the Euler equations; VSAERO, a panel solution of the Laplace equation; and PPW, a finite difference solution of the small disturbance transonic equations. In general, all three codes have certain capabilities that allow them to be of some value in predicting the flows about transport configurations, but all have limitations. Until more accurate methods are available, careful application and interpretation of the results of these codes are needed.
NASA Astrophysics Data System (ADS)
Bhattacharya, Amitabh
2013-11-01
An efficient algorithm for simulating Stokes flow around particles is presented here, in which a second order Finite Difference method (FDM) is coupled to a Boundary Integral method (BIM). This method utilizes the strong points of FDM (i.e. localized stencil) and BIM (i.e. accurate representation of the particle surface). Specifically, in each iteration, the flow field away from the particles is solved on a Cartesian FDM grid, while the traction on the particle surface (given the velocity of the particle) is solved using BIM. The two schemes are coupled by matching the solution in an intermediate region between the particle and the surrounding fluid. We validate this method by solving for flow around an array of cylinders, and find good agreement with Hasimoto's (J. Fluid Mech. 1959) analytical results.
Numerical analysis of composite steel-concrete sections using the integral equation of Volterra
NASA Astrophysics Data System (ADS)
Partov, Doncho; Kantchev, Vesselin
2011-09-01
The paper presents an analysis of the stress and deflection changes due to creep in a statically determinate composite steel-concrete beam. The mathematical model involves the equations of equilibrium, compatibility and the constitutive relationship, i.e. an elastic law for the steel part and an integral-type creep law of Boltzmann — Volterra for the concrete part. On the basis of the theory of the viscoelastic body of Arutyunian-Trost-Bažant, for determining the redistribution of stresses in the beam section between the concrete plate and the steel beam with respect to time "t", two independent Volterra integral equations of the second kind have been derived. A numerical method based on linear approximation of the singular kernel function in the integral equation is presented. An example with the proposed model is investigated. The creep functions suggested by the CEB MC90-99 model and the ACI 209R-92 model are used. The elastic modulus of concrete Ec(t) is assumed to be constant in time "t". The results obtained from the two models are compared.
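The numerical approach — a piecewise-linear approximation of the kernel term in a Volterra equation of the second kind — can be sketched generically with the trapezoidal rule (this is a schematic solver with a textbook test kernel, not the creep model of the paper):

```python
import math

def volterra2(f, K, T, n):
    """March u(t) = f(t) + int_0^t K(t, s) u(s) ds forward with the
    trapezoidal rule; the implicit K(t_i, t_i) u_i term is moved to
    the left-hand side and solved for at each step."""
    h = T / n
    t = [i * h for i in range(n + 1)]
    u = [f(t[0])]
    for i in range(1, n + 1):
        s = 0.5 * K(t[i], t[0]) * u[0]
        for j in range(1, i):
            s += K(t[i], t[j]) * u[j]
        u.append((f(t[i]) + h * s) / (1.0 - 0.5 * h * K(t[i], t[i])))
    return t, u

# Check: u(t) = 1 + int_0^t u(s) ds has the exact solution u(t) = exp(t)
t, u = volterra2(f=lambda t: 1.0, K=lambda t, s: 1.0, T=1.0, n=200)
err = abs(u[-1] - math.e)
```

Because the integration is only over past values, the scheme marches forward explicitly except for the single diagonal term, mirroring how creep histories are accumulated step by step.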
NASA Astrophysics Data System (ADS)
Ferreira, N.; Krah, T.; Jeong, D. C.; Metz, D.; Kniel, K.; Dietzel, A.; Büttgenbach, S.; Härtig, F.
2014-06-01
The integration of silicon micro probing systems into conventional gear measuring instruments (GMIs) allows fully automated measurements of external involute micro spur gears of normal modules smaller than 1 mm. This system, based on a silicon microprobe, has been developed and manufactured at the Institute for Microtechnology of the Technische Universität Braunschweig. The microprobe consists of a silicon sensor element and a stylus which is oriented perpendicularly to the sensor. The sensor is fabricated by means of silicon bulk micromachining. Its small dimensions of 6.5 mm × 6.5 mm allow compact mounting in a cartridge to facilitate the integration into a GMI. In this way, tactile measurements of 3D microstructures can be realized. To enable three-dimensional measurements with marginal forces, four Wheatstone bridges are built with diffused piezoresistors on the membrane of the sensor. On the reverse of the membrane, the stylus is glued perpendicularly to the sensor on a boss to transmit the probing forces to the sensor element during measurements. Sphere diameters smaller than 300 µm and shaft lengths of 5 mm as well as measurement forces from 10 µN enable the measurements of 3D microstructures. Such micro probing systems can be integrated into universal coordinate measuring machines and also into GMIs to extend their field of application. Practical measurements were carried out at the Physikalisch-Technische Bundesanstalt by qualifying the microprobes on a calibrated reference sphere to determine their sensitivity and their physical dimensions in volume. Following that, profile and helix measurements were carried out on a gear measurement standard with a module of 1 mm. The comparison of the measurements shows good agreement between the measurement values and the calibrated values. This result is a promising basis for the realization of smaller probe diameters for the tactile measurement of micro gears with smaller modules.
NASA Astrophysics Data System (ADS)
Civitani, M.; Ghigo, M.; Basso, S.; Proserpio, L.; Spiga, D.; Salmaso, B.; Pareschi, G.; Tagliaferri, G.; Burwitz, V.; Hartner, G.; Menz, B.; Bavdaz, M.; Wille, E.
2013-09-01
X-ray telescopes with very large collecting area, like the proposed International X-ray Observatory (IXO, with around 3 m2 at 1 keV), need to be composed of a large number of high quality mirror segments, aiming at achieving an angular resolution better than 5 arcsec HEW (Half-Energy-Width). A possible technology to manufacture the modular elements that will compose the entire optical module, named X-ray Optical Units (XOUs), consists of stacking in Wolter-I configuration several layers of thin foils of borosilicate glass, previously formed by hot slumping. The XOUs are subsequently assembled to form complete multi-shell optics with Wolter-I geometry. The achievable global angular resolution of the optic relies on the required surface shape accuracy of the slumped foils, on the smoothness of the mirror surfaces and on the correct integration and co-alignment of the mirror segments. The Brera Astronomical Observatory (INAF-OAB) is leading a study, supported by ESA, concerning the implementation of the IXO telescopes based on thin slumped glass foils. In addition to the opto-mechanical design, the study foresees the development of a direct hot slumping thin glass foils production technology. Moreover, an innovative assembly concept making use of Wolter-I counter-form moulds and glass reinforcing ribs is under development. The ribs connect pairs of consecutive foils in an XOU stack, playing a structural and a functional role. In fact, as the ribs constrain the foil profile to the correct shape during the bonding, they damp the low-frequency profile errors still present on the foil after slumping. A dedicated semirobotic Integration MAchine (IMA) has been realized to this scope and used to build a few integrated prototypes made of several layers of slumped plates. In this paper we provide an overview of the project, we report the results achieved so far, including full illumination intra-focus X-ray tests of the last integrated prototype that are compliant with a HEW of
Johnson, B M; Guan, X; Gammie, C F
2008-06-24
The descriptions of some of the numerical tests in our original paper are incomplete, making reproduction of the results difficult. We provide the missing details here. The relevant tests are described in section 4 of the original paper (Figures 8-11).
NASA Astrophysics Data System (ADS)
Baiardi, Alberto; Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien
2014-06-01
Two parallel theories including Franck-Condon, Herzberg-Teller and Duschinsky (i.e., mode mixing) effects, allowing different approximations for the description of the excited state PES, have been developed in order to simulate realistic, asymmetric, electronic spectra line-shapes taking into account the vibrational structure: the so-called sum-over-states or time-independent (TI) method and the alternative time-dependent (TD) approach, which exploits the properties of the Fourier transform. The integrated TI-TD procedure, included within a general purpose QM code [1,2], allows the computation of one-photon absorption, fluorescence, phosphorescence, electronic circular dichroism, circularly polarized luminescence and resonance Raman spectra. Combining both approaches, which use a single set of starting data, makes it possible to profit from their respective advantages and minimize their respective limits: the time-dependent route automatically includes all vibrational states and, possibly, temperature effects, while the time-independent route allows the identification and assignment of single vibronic transitions. Interpretation, analysis and assignment of experimental spectra based on integrated TI-TD vibronic computations will be illustrated for challenging cases of medium-sized open-shell systems in the gas and condensed phases with inclusion of leading anharmonic effects. 1. V. Barone, A. Baiardi, M. Biczysko, J. Bloino, C. Cappelli, F. Lipparini, Phys. Chem. Chem. Phys., 14, 12404 (2012) 2. A. Baiardi, V. Barone, J. Bloino, J. Chem. Theory Comput., 9, 4097-4115 (2013)
Numerical evaluation of the Feynman integral-over-paths in real and imaginary-time
NASA Astrophysics Data System (ADS)
Register, L. F.; Stroscio, M. A.; Littlejohn, M. A.
New techniques are described for Monte Carlo evaluation of the propagation of quantum mechanical systems in both real and imaginary-time using the Feynman integral-over-paths formulation of quantum mechanics. For imaginary-time calculations path translation is used to augment the technique of Lawande et al. This simple-yet-powerful technique allows the equilibrium probability density to be accurately evaluated in the presence of multiple potential wells. It is shown that path translation permits the calculation of the unknown ground-state energy of one confining potential by comparison with the known ground-state energy of another. A double finite-square-well potential and a finite-square-well/parabolic-well pair are presented as examples. For real-time calculations, a weighted analytical averaging of the exponential in the classical action is performed over a region of paths. This "windowed action" has both real and imaginary components. The imaginary component yields an exponentially decaying probability for selecting paths, thereby providing a basis for the Monte Carlo evaluation of the real-time integral-over-paths. Examples of a wave-packet in a parabolic well and a wave-packet impinging upon a potential barrier are considered.
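The principle behind all imaginary-time methods — propagation by exp(−τH) filters out every state except the ground state — can be illustrated without Monte Carlo. The deterministic finite-difference relaxation below (a stand-in for the paper's path-integral sampling, with illustrative grid parameters) recovers the harmonic-oscillator ground-state energy E₀ = 1/2 in units ħ = m = ω = 1.

```python
import math

# Grid for a 1D harmonic oscillator, V(x) = x^2 / 2.
n, L = 101, 10.0
dx = L / (n - 1)
x = [-L / 2 + i * dx for i in range(n)]
V = [0.5 * xi * xi for xi in x]
psi = [math.exp(-xi * xi) for xi in x]   # rough (non-eigenstate) initial guess
tau = 0.2 * dx * dx                       # small imaginary-time step for stability

for _ in range(8000):
    lap = [0.0] * n
    for i in range(1, n - 1):
        lap[i] = (psi[i - 1] - 2 * psi[i] + psi[i + 1]) / (dx * dx)
    # one explicit step of psi -> psi - tau * H psi, then renormalize
    psi = [p + tau * (0.5 * l - v * p) for p, l, v in zip(psi, lap, V)]
    norm = math.sqrt(sum(p * p for p in psi) * dx)
    psi = [p / norm for p in psi]

# Rayleigh quotient <psi|H|psi> estimates the ground-state energy
lap = [0.0] * n
for i in range(1, n - 1):
    lap[i] = (psi[i - 1] - 2 * psi[i] + psi[i + 1]) / (dx * dx)
E0 = sum(p * (-0.5 * l + v * p) for p, l, v in zip(psi, lap, V)) * dx
```

Excited-state contamination decays like exp(−(E₁ − E₀)τ) per step, which is the same mechanism that makes imaginary-time Monte Carlo converge to equilibrium densities.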
Koh, Kyung; Kwon, Hyun Joon; Park, Yang Sun; Kiemel, Tim; Miller, Ross H.; Kim, Yoon Hyuk; Shin, Joon-Ho; Shim, Jae Kun
2016-01-01
Humans detect changes in air pressure and understand their surroundings through the auditory system. The sound humans perceive is composed of two distinct physical properties, frequency and intensity. However, our knowledge of how the brain perceives and combines these two properties simultaneously (i.e., intra-auditory integration) is limited, especially in relation to motor behaviors. Here, we investigated the effect of intra-auditory integration between the frequency and intensity components of auditory feedback on motor outputs in a constant finger-force production task. The previously developed hierarchical variability decomposition model was used to decompose motor performance into mathematically independent components, each of which quantifies a distinct motor behavior such as consistency, repeatability, systematic error, within-trial synergy, or between-trial synergy. We hypothesized that feedback on two components of sound as a function of motor performance (frequency and intensity) would improve motor performance and multi-finger synergy compared to feedback on just one component (frequency or intensity). Subjects were instructed to match the reference force of 18 N with the sum of all finger forces (virtual finger or VF force) while listening to auditory feedback of their accuracy. Three experimental conditions were used: (i) condition F, where frequency changed; (ii) condition I, where intensity changed; (iii) condition FI, where both frequency and intensity changed. Motor performance was enhanced for the FI condition as compared to either the F or I condition alone. The enhancement of motor performance was achieved mainly by improved consistency and repeatability. However, the systematic error remained unchanged across conditions. Within- and between-trial synergies were also improved for the FI condition as compared to either the F or I condition alone. However, variability of individual finger forces for the FI condition was not significantly
NASA Astrophysics Data System (ADS)
Akamatsu, T.; Matsushita, M.; Murata, S.
1985-11-01
A two-parameter integral method is presented which is applicable even to separated boundary layers. The governing equation system, which consists of three moment equations of the boundary layer equation, is shown to be classifiable as a quasi-linear hyperbolic system under the assumed velocity profile function. The governing system is numerically solved by a dissipative finite difference scheme in order to capture the discontinuous solution associated with the singularity of unsteady separation. The spontaneous generation of this singularity is confirmed as the focusing of characteristics. The starting flows around a circular and an elliptic cylinder are considered as representative examples. The method is found to give excellent results in comparison with exact methods, not only for practically important boundary layer quantities such as the displacement thickness and skin friction coefficient, but also for the generation of the separation singularity.
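The dissipative capture of a discontinuity can be illustrated on a toy scalar problem. The sketch below is not the authors' three-moment system; it applies the classic Lax-Friedrichs scheme, whose built-in numerical dissipation lets a finite-difference method march through a spontaneously forming discontinuity, to the inviscid Burgers equation:

```python
import numpy as np

# Toy sketch of a dissipative shock-capturing scheme (not the authors'
# three-moment system): Lax-Friedrichs applied to the inviscid Burgers
# equation u_t + (u^2/2)_x = 0, whose smooth initial data spontaneously
# steepen into a discontinuity through the focusing of characteristics.
def lax_friedrichs(u0, dx, dt, steps):
    u = u0.copy()
    for _ in range(steps):
        f = 0.5 * u ** 2                   # Burgers flux
        # neighbour averaging supplies the dissipation; the flux difference is centred
        u[1:-1] = 0.5 * (u[2:] + u[:-2]) - dt / (2.0 * dx) * (f[2:] - f[:-2])
        u[0], u[-1] = u[1], u[-2]          # simple outflow boundaries
    return u

x = np.linspace(0.0, 1.0, 201)
u0 = np.sin(2.0 * np.pi * x) + 1.5         # steepens into a shock near t ~ 0.16
u = lax_friedrichs(u0, dx=x[1] - x[0], dt=1e-3, steps=300)
print(u.min(), u.max())
```

The time step is chosen to respect the CFL condition dt ≤ dx/max|u| (here 0.001 ≤ 0.005/2.5), under which the scheme remains stable and oscillation-free across the shock.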
McCammon, R.B.; Finch, W.I.; Kork, J.O.; Bridges, N.J.
1994-01-01
An integrated data-directed numerical method has been developed to estimate the undiscovered mineral endowment within a given area. The method has been used to estimate the undiscovered uranium endowment in the San Juan Basin, New Mexico, U.S.A. The favorability of uranium concentration was evaluated in each of 2,068 cells defined within the Basin. Favorability was based on the correlated similarity of the geologic characteristics of each cell to the geologic characteristics of five area-related deposit models. Estimates of the undiscovered endowment for each cell were categorized according to deposit type, depth, and cutoff grade. The method can be applied to any mineral or energy commodity provided that the data collected reflect discovered endowment. © 1994 Oxford University Press.
NASA Technical Reports Server (NTRS)
Womble, M. E.; Potter, J. E.
1975-01-01
A prefiltering version of the Kalman filter is derived for both discrete and continuous measurements. The derivation consists of determining a single discrete measurement that is equivalent to either a time segment of continuous measurements or a set of discrete measurements. This prefiltering version of the Kalman filter easily handles numerical problems associated with rapid transients and ill-conditioned Riccati matrices. In addition, the derived technique for extrapolating the Riccati matrix from one time to the next constitutes a new set of integration formulas that alleviate ill-conditioning problems associated with continuous Riccati equations. Furthermore, since a time segment of continuous measurements is converted into a single discrete measurement, Potter's square root formulas can be used to update the state estimate and its error covariance matrix. Therefore, if having the state estimate and its error covariance matrix at discrete times is acceptable, the prefilter extends square root filtering, with all its advantages, to continuous measurement problems.
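Potter's square-root update itself is compact. The sketch below shows the standard scalar-measurement form of Potter's algorithm, the step the prefilter enables once a segment of continuous measurements has been condensed into a single equivalent discrete measurement; the toy state and noise values are hypothetical:

```python
import numpy as np

# Potter's square-root measurement update for a scalar measurement
# z = h @ x + v with noise variance r; the covariance is carried as a
# square-root factor S with P = S @ S.T, which preserves positive
# definiteness even in ill-conditioned problems.
def potter_update(x, S, h, z, r):
    phi = S.T @ h
    a = 1.0 / (phi @ phi + r)                   # inverse innovation variance
    gamma = a / (1.0 + np.sqrt(a * r))
    K = a * (S @ phi)                           # Kalman gain
    x_new = x + K * (z - h @ x)
    S_new = S - gamma * np.outer(S @ phi, phi)  # square-root covariance update
    return x_new, S_new

# toy check against the conventional update P' = P - P h h^T P / (h^T P h + r)
x, S = np.zeros(2), np.eye(2)
h, r = np.array([1.0, 0.0]), 0.5
x1, S1 = potter_update(x, S, h, z=1.0, r=r)
print(S1 @ S1.T)
```

Reconstructing S1 @ S1.T reproduces the covariance given by the conventional Kalman formula, which is the identity Potter's factorized form is built on.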
Inelastic, nonlinear analysis of stiffened shells of revolution by numerical integration
NASA Technical Reports Server (NTRS)
Levine, H. S.; Svalbonas, V.
1974-01-01
This paper describes the latest addition to the STARS system of computer programs, STARS-2P, for the plastic, large deflection analysis of axisymmetrically loaded shells of revolution. The STARS system uses a numerical integration scheme to solve the governing differential equations. Several unique features for shell of revolution programs that are included in the STARS-2P program are described. These include orthotropic nonlinear kinematic hardening theory, a variety of shell wall cross sections and discrete ring stiffeners, cyclic and nonproportional mechanical and thermal loading capability, the coupled axisymmetric large deflection elasto-plastic torsion problem, an extensive restart option, arbitrary branching capability, and the provision for the inelastic treatment of smeared stiffeners, isogrid, and waffle wall constructions. To affirm the validity of the results, comparisons with available theoretical and experimental data are presented.
Chocholousová, Jana; Feig, Michael
2006-04-30
Different integrator time steps in NVT and NVE simulations of protein and nucleic acid systems are tested with the GBMV (Generalized Born using Molecular Volume) and GBSW (Generalized Born with simple SWitching) methods. The simulation stability and energy conservation are investigated in relation to the agreement with the Poisson theory. It is found that very close agreement between generalized Born methods and the Poisson theory based on the commonly used sharp molecular surface definition results in energy drift and simulation artifacts in molecular dynamics simulation protocols with standard 2-fs time steps. New parameters are proposed for the GBMV method, which maintain very good agreement with the Poisson theory while providing energy conservation and stable simulations at time steps of 1 to 1.5 fs. PMID:16518883
Orbit determination based on meteor observations using numerical integration of equations of motion
NASA Astrophysics Data System (ADS)
Dmitriev, V.; Lupovka, V.; Gritsevich, M.
2014-07-01
We review the definitions and approaches to orbital-characteristics analysis applied to photographic or video ground-based observations of meteors. A number of camera networks dedicated to meteor registration have been established all over the world, including in the USA, Canada, Central Europe, Australia, Spain, Finland and Poland. Many of these networks are currently operational. The meteor observations are conducted from different locations hosting the network stations. Each station is equipped with at least one camera for continuous monitoring of the firmament (except during possible weather restrictions). For registered multi-station meteors, it is possible to accurately determine the direction and absolute value of the meteor velocity and thus obtain the topocentric radiant. Based on the topocentric radiant, one further determines the heliocentric meteor orbit. We aim to reduce the total uncertainty in our orbit-determination technique, keeping it smaller than the accuracy of the observations. Additional corrections for zenith attraction are widely in use and are implemented, for example, in [1]. We propose a technique for meteor-orbit determination with higher accuracy. We transform the topocentric radiant into the inertial (J2000) coordinate system using the model recommended by the IAU [2]. The main difference compared with existing orbit-determination techniques is the integration of the ordinary differential equations of motion instead of applying a zenith-attraction correction to the apparent velocity. The attraction of the central body (the Sun), the perturbations by the Earth, Moon and other planets of the Solar System, the Earth's flattening (important at the initial moment of integration, i.e., at the moment when a meteoroid enters the atmosphere), and atmospheric drag may optionally be included in the equations. In addition, reverse integration of the same equations can be performed to analyze the orbital evolution preceding the meteoroid's collision with Earth. To demonstrate the developed
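The core idea can be sketched in a few lines. The example below is a hedged illustration, not the authors' implementation: the initial state is hypothetical and all perturbations are omitted, leaving only the central-body term, but it shows how a numerical integration of the equations of motion, run with a negative time span, traces the pre-impact orbit in place of an analytic zenith-attraction correction:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Heliocentric two-body integration in AU and days (Gaussian units).
GM_SUN = 2.9591220828559115e-4          # AU^3/day^2

def equations_of_motion(t, y):
    r, v = y[:3], y[3:]
    a = -GM_SUN * r / np.linalg.norm(r) ** 3   # central body (Sun); planetary
    return np.concatenate([v, a])              # perturbations would be added here

# hypothetical meteoroid state at atmosphere entry, AU and AU/day
y0 = np.array([1.0, 0.0, 0.0, 0.0, 1.72e-2, 2.0e-3])
# reverse integration (negative time span) traces the orbit one year back
sol = solve_ivp(equations_of_motion, (0.0, -365.25), y0, rtol=1e-10, atol=1e-12)
print(sol.y[:3, -1])                     # heliocentric position a year earlier
```

With tight tolerances the specific orbital energy is conserved to high precision over the integration, a basic sanity check before adding perturbation terms.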
NASA Astrophysics Data System (ADS)
Pilz, Tobias; Francke, Till; Bronstert, Axel
2016-04-01
To date, a large number of competing computer models have been developed to understand hydrological processes and to simulate and predict the streamflow dynamics of rivers. This is primarily the result of the lack of a unified theory in catchment hydrology due to insufficient process understanding and uncertainties related to model development and application. Therefore, the goal of this study is to analyze the uncertainty structure of a process-based hydrological catchment model employing a multiple-hypotheses approach. The study focuses on three major problems that have received only little attention in previous investigations. First, it estimates the impact of model structural uncertainty by employing several alternative representations for each simulated process. Second, it explores the influence of landscape discretization and parameterization from multiple datasets and user decisions. Third, it employs several numerical solvers for the integration of the governing ordinary differential equations to study their effect on the simulation results. The generated ensemble of model hypotheses is then analyzed and the three sources of uncertainty compared against each other. To ensure consistency and comparability, all model structures and numerical solvers are implemented within a single simulation environment. First results suggest that the selection of a sophisticated numerical solver for the differential equations positively affects simulation outcomes. However, some simple and easy-to-implement explicit methods already perform surprisingly well and need less computational effort than more advanced but time-consuming implicit techniques. There is general evidence that ambiguous and subjective user decisions form a major source of uncertainty and can greatly influence model development and application at all stages.
Ratcliff, Laura E; Grisanti, Luca; Genovese, Luigi; Deutsch, Thierry; Neumann, Tobias; Danilov, Denis; Wenzel, Wolfgang; Beljonne, David; Cornil, Jérôme
2015-05-12
A fast and accurate scheme has been developed to evaluate two key molecular parameters (on-site energies and transfer integrals) that govern charge transport in organic supramolecular architecture devices. The scheme is based on a constrained density functional theory (CDFT) approach implemented in the linear-scaling BigDFT code that exploits a wavelet basis set. The method has been applied to model disordered structures generated by force-field simulations. The role of the environment on the transport parameters has been taken into account by building large clusters around the active molecules involved in the charge transfer. PMID:26574411
Glenn, Jason; Chattopadhyay, Goutam; Edgington, Samantha F; Lange, Andrew E; Bock, James J; Mauskopf, Philip D; Lee, Adrian T
2002-01-01
Far-infrared to millimeter-wave bolometers designed to make astronomical observations are typically encased in integrating cavities at the termination of feedhorns or Winston cones. This photometer combination maximizes absorption of radiation, enables the absorber area to be minimized, and controls the directivity of absorption, thereby reducing susceptibility to stray light. In the next decade, arrays of hundreds of silicon nitride micromesh bolometers with planar architectures will be used in ground-based, suborbital, and orbital platforms for astronomy. The optimization of integrating cavity designs is required for achieving the highest possible sensitivity for these arrays. We report numerical simulations of the electromagnetic fields in integrating cavities with an infinite plane-parallel geometry formed by a solid reflecting backshort and the back surface of a feedhorn array block. Performance of this architecture for the bolometer array camera (Bolocam) for cosmology at a frequency of 214 GHz is investigated. We explore the sensitivity of absorption efficiency to absorber impedance and backshort location and the magnitude of leakage from cavities. The simulations are compared with experimental data from a room-temperature scale model and with the performance of Bolocam at a temperature of 300 mK. The main results of the simulations for Bolocam-type cavities are that (1) monochromatic absorptions as high as 95% are achievable with <1% cross talk between neighboring cavities, (2) the optimum absorber impedance is 400 ohms/sq, but with a broad maximum from approximately 150 to 700 ohms/sq, and (3) maximum absorption is achieved with absorber diameters ≥1.5λ. Good general agreement between the simulations and the experiments was found. PMID:11900429
NASA Technical Reports Server (NTRS)
LeClair, Andre
2011-01-01
An important first step in cryogenic propellant loading is the chilldown of transfer lines. During the chilldown of the transfer line, the flow is two-phase and unsteady, with solid to fluid heat transfer and therefore a coupled thermo-fluid analysis is necessary to model the system. This paper describes a numerical model of pipe chilldown that utilizes the Sinda/GFSSP Conjugate Integrator (SGCI). SGCI is a new analysis tool developed at NASA's Marshall Space Flight Center (MSFC). SGCI facilitates the solution of thermofluid problems in interconnected solid-fluid systems. The solid component of the system is modeled in MSC Patran and translated into an MSC Sinda thermal network model. The fluid component is modeled in GFSSP, the Generalized Fluid System Simulation Program. GFSSP is a general network flow solver developed at NASA/MSFC. GFSSP uses a finite-volume approach to model fluid systems that can include phase change, multiple species, fluid transients, and heat transfer to simple solid networks. SGCI combines the GFSSP Fortran code with the Sinda input file and compiles the integrated model. Sinda solves for the temperatures of the solid network, while GFSSP simultaneously solves the fluid network for pressure, temperature, and flow rate. The two networks are coupled by convection heat transfer from the solid wall to the cryogenic fluid. The model presented here is based on a series of experiments conducted in 1966 by the National Bureau of Standards (NBS). A vacuum-jacketed, 200 ft copper transfer line was chilled by liquid nitrogen and liquid hydrogen. The predictions of transient temperature profiles and chilldown time of the integrated Sinda/GFSSP model will be compared to the experimental measurements.
vom Saal, Frederick S.; Welshons, Wade V.
2016-01-01
There is extensive evidence that bisphenol A (BPA) is related to a wide range of adverse health effects based on both human and experimental animal studies. However, a number of regulatory agencies have ignored all hazard findings. Reports of high levels of unconjugated (bioactive) serum BPA in dozens of human biomonitoring studies have also been rejected based on the prediction that the findings are due to assay contamination and that virtually all ingested BPA is rapidly converted to inactive metabolites. NIH and industry-sponsored round robin studies have demonstrated that serum BPA can be accurately assayed without contamination, while the FDA lab has acknowledged uncontrolled assay contamination. In reviewing the published BPA biomonitoring data, we find that assay contamination is, in fact, well controlled in most labs, and cannot be used as the basis for discounting evidence that significant and virtually continuous exposure to BPA must be occurring from multiple sources. PMID:25304273
NASA Astrophysics Data System (ADS)
Hrubý, Jan
2012-04-01
Mathematical modeling of the non-equilibrium condensing transonic steam flow in the complex 3D geometry of a steam turbine is a demanding problem, both in terms of the physical concepts involved and the required computational power. The available accurate formulations of steam properties, IAPWS-95 and IAPWS-IF97, require substantial computation time. For this reason, modelers often accept the unrealistic ideal-gas approximation. Here we present a computation scheme based on a piecewise, thermodynamically consistent representation of the IAPWS-95 formulation. Density and internal energy are chosen as independent variables to avoid variable transformations and iterations. In contrast to the previous Tabular Taylor Series Expansion Method, the pressure and temperature are continuous functions of the independent variables, which is a desirable property for the solution of the differential equations of mass, energy, and momentum conservation for both phases.
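The consistency mechanism is worth spelling out. If each piece of the representation stores a single entropy potential s(ρ, e) in the chosen independent variables, then temperature and pressure recovered from its derivatives via ds = de/T - p/(Tρ²) dρ are automatically thermodynamically consistent. The sketch below checks this on a toy ideal-gas potential, not the piecewise IAPWS-95 representation itself:

```python
import sympy as sp

# Toy entropy potential in (rho, e); for an ideal gas e = c_v * T.
rho, e = sp.symbols('rho e', positive=True)
cv, R = sp.symbols('c_v R', positive=True)

s = cv * sp.log(e) - R * sp.log(rho)
T = 1 / sp.diff(s, e)                  # 1/T = (ds/de) at constant rho
p = -T * rho**2 * sp.diff(s, rho)      # p = -T*rho^2*(ds/drho) at constant e
print(sp.simplify(T), sp.simplify(p))  # e/c_v and R*rho*e/c_v, i.e. p = rho*R*T
```

Both outputs reduce to the correct ideal-gas relations, confirming that deriving p and T from one potential rules out the internal inconsistencies that independent fits of p(ρ, e) and T(ρ, e) can introduce.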
Kim, Seungill; Kim, Myung-Shin; Kim, Yong-Min; Yeom, Seon-In; Cheong, Kyeongchae; Kim, Ki-Tae; Jeon, Jongbum; Kim, Sunggil; Kim, Do-Sun; Sohn, Seong-Han; Lee, Yong-Hwan; Choi, Doil
2015-01-01
The onion (Allium cepa L.) is one of the most widely cultivated and consumed vegetable crops in the world. Although a considerable amount of onion transcriptome data has been deposited into public databases, the sequences of the protein-coding genes are not accurate enough to be used, owing to non-coding sequences intermixed with the coding sequences. We generated a high-quality, annotated onion transcriptome from de novo sequence assembly and intensive structural annotation using the integrated structural gene annotation pipeline (ISGAP), which identified 54,165 protein-coding genes among 165,179 assembled transcripts totalling 203.0 Mb by eliminating the intron sequences. ISGAP performed reliable annotation, recognizing accurate gene structures based on reference proteins, and ab initio gene models of the assembled transcripts. Integrative functional annotation and gene-based SNP analysis revealed a whole biological repertoire of genes and transcriptomic variation in the onion. The method developed in this study provides a powerful tool for the construction of reference gene sets for organisms based solely on de novo transcriptome data. Furthermore, the reference genes and their variation described here for the onion represent essential tools for molecular breeding and gene cloning in Allium spp. PMID:25362073
Nilsson, Annica M.; Jonsson, Andreas; Jonsson, Jacob C.; Roos, Arne
2011-03-01
For most integrating sphere measurements, the difference in light distribution between a specular reference beam and a diffused sample beam can result in significant errors. The problem becomes especially pronounced in integrating spheres that include a port for reflectance or diffuse transmittance measurements. The port is included in many standard spectrophotometers to make the instrument multipurpose; however, absorption around the port edge can result in a detected signal that is too low. The absorption effect is especially apparent for low-angle scattering samples, because a significant portion of the light is scattered directly onto that edge. In this paper, a method for more accurate transmittance measurements of low-angle light-scattering samples is presented. The method uses a standard integrating sphere spectrophotometer, and the problem of increased absorption around the port edge is addressed by introducing a diffuser between the sample and the integrating sphere during both the reference and sample scans. This reduces the discrepancy between the two scans and spreads the scattered light over a greater portion of the sphere wall. The problem of multiple reflections between the sample and diffuser is successfully addressed using a correction factor. The method is tested on two patterned glass samples with low-angle scattering, and in both cases the transmittance accuracy is significantly improved.
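A first-order sketch of the inter-reflection correction (notation hypothetical, not the paper's derived factor): light bouncing between the sample back surface (reflectance Rs) and the diffuser front (reflectance Rd) contributes a geometric series 1 + Rs·Rd + (Rs·Rd)² + ..., so the raw sample/reference ratio overestimates the true transmittance by 1/(1 - Rs·Rd) and can be corrected by multiplying with (1 - Rs·Rd):

```python
# Remove the inter-reflection series from a raw transmittance ratio.
def corrected_transmittance(raw_ratio, Rs, Rd):
    return raw_ratio * (1.0 - Rs * Rd)

# with 5% reflectances on both faces the raw signal is ~0.25% too high
print(corrected_transmittance(0.9023, Rs=0.05, Rd=0.05))
```

Even small reflectances matter at the accuracy level targeted here, which is why the paper handles the sample-diffuser reflections explicitly rather than ignoring them.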
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Tsvetova, Elena; Penenko, Alexey
2015-04-01
The proposed method is illustrated on an example of hydrothermodynamics and atmospheric chemistry models [1,2]. Building on existing methods for constructing numerical schemes that possess the property of total approximation for the operators of multiscale process models, we have developed a new variational technique which uses the concept of adjoint integrating factors. The technique is as follows. First, a basic functional of the variational principle (the integral identity that unites the model equations and the initial and boundary conditions) is transformed using Lagrange's identity and the second Green's formula. As a result, the action of the operators of the main problem in the space of state functions is transferred to the adjoint operators defined in the space of sufficiently smooth adjoint functions. By the choice of adjoint functions, the order of the derivatives becomes lower by one than in the original equations. We obtain a set of new balance relationships that take into account the sources and boundary conditions. Next, we introduce a decomposition of the model domain into a set of finite volumes. For multi-dimensional non-stationary problems, this technique is applied within the variational principle and schemes of decomposition and splitting on the set of physical processes, for each coordinate direction successively at each time step. For each direction within a finite volume, analytical solutions of the one-dimensional homogeneous adjoint equations are constructed. In this case, the solutions of the adjoint equations serve as integrating factors. The result is a set of hybrid discrete-analytical schemes. They have the properties of stability, approximation and unconditional monotonicity for convection-diffusion operators. These schemes are discrete in time and analytic in the spatial variables. They are exact in the case of piecewise-constant coefficients within the finite volume and along the coordinate lines of the grid area in each
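The exactness property for piecewise-constant coefficients can be illustrated with a classical relative of such discrete-analytical schemes. The sketch below (not the authors' scheme itself) uses the exponentially fitted flux built from the Bernoulli function B(x) = x/(eˣ - 1), which reproduces the exact exponential solutions of the constant-coefficient 1D convection-diffusion equation nodally on any grid:

```python
import numpy as np

# Exponentially fitted scheme for -D u'' + a u' = 0 on [0,1], u(0)=0, u(1)=1.
def B(x):
    return x / np.expm1(x)

def solve_conv_diff(n, a, D):
    h = 1.0 / n
    pe = a * h / D                        # cell Peclet number
    sub, sup = B(-pe), B(pe)              # fitted flux weights
    A = np.diag([sub + sup] * (n - 1)) \
        - np.diag([sub] * (n - 2), -1) - np.diag([sup] * (n - 2), 1)
    b = np.zeros(n - 1)
    b[-1] = sup                           # u(1) = 1 enters the last row
    return np.linalg.solve(A, b)

u = solve_conv_diff(10, a=50.0, D=1.0)    # strongly convection dominated
x = np.linspace(0.1, 0.9, 9)
exact = np.expm1(50 * x) / np.expm1(50)
print(np.max(np.abs(u - exact)))          # nodally exact up to round-off
```

Even at a cell Peclet number of 5, where a centred scheme would oscillate, the fitted scheme is monotone and hits the exact nodal values, which is the behavior the adjoint-integrating-factor construction secures volume by volume.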
Integrating laboratory creep compaction data with numerical fault models: A Bayesian framework
Fitzenz, D.D.; Jalobeanu, A.; Hickman, S.H.
2007-01-01
We developed a robust Bayesian inversion scheme to plan and analyze laboratory creep compaction experiments. We chose a simple creep law that features the main parameters of interest when trying to identify rate-controlling mechanisms from experimental data. By integrating the chosen creep law or an approximation thereof, one can use all the data, either simultaneously or in overlapping subsets, thus making more complete use of the experimental data and propagating statistical variations in the data through to the final rate constants. Despite the nonlinearity of the problem, with this technique one can retrieve accurate estimates of both the stress exponent and the activation energy, even when the porosity time series data are noisy. Whereas adding observation points and/or experiments reduces the uncertainty on all parameters, enlarging the range of temperature or effective stress significantly reduces the covariance between stress exponent and activation energy. We apply this methodology to hydrothermal creep compaction data on quartz to obtain a quantitative, semiempirical law for fault zone compaction in the interseismic period. Incorporating this law into a simple direct rupture model, we find marginal distributions of the time to failure that are robust with respect to errors in the initial fault zone porosity. Copyright 2007 by the American Geophysical Union.
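The integrate-then-fit idea can be sketched with synthetic data. The example below is a simplified stand-in (the creep law and every constant are hypothetical, and a least-squares fit replaces the full Bayesian machinery): integrating a rate law dφ/dt = -A·σⁿ·exp(-Q/RT) makes porosity linear in time, so observations from several experiments at different stresses and temperatures can be fitted jointly, which is what decorrelates the stress exponent n from the activation energy Q:

```python
import numpy as np
from scipy.optimize import curve_fit

R_GAS = 8.314
rng = np.random.default_rng(0)

def phi_model(X, A, n, Q):
    t, sigma, T = X
    return 0.30 - A * sigma ** n * np.exp(-Q / (R_GAS * T)) * t

# four synthetic experiments spanning two stresses and two temperatures
t = np.tile(np.linspace(0.0, 30.0, 25), 4)
sigma = np.repeat([50.0, 100.0, 50.0, 100.0], 25)     # stress levels
T = np.repeat([600.0, 600.0, 700.0, 700.0], 25)       # temperatures, K
phi = phi_model((t, sigma, T), 5e-4, 2.0, 40e3) + rng.normal(0.0, 1e-4, t.size)

popt, pcov = curve_fit(phi_model, (t, sigma, T), phi, p0=(4e-4, 1.8, 38e3))
print(popt)   # recovered (A, n, Q)
```

As the abstract notes, it is the spread in temperature and stress, not just the number of points, that shrinks the n-Q covariance; fitting a single experiment leaves the two parameters strongly correlated.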
NASA Astrophysics Data System (ADS)
Hsu, C. Y.
2014-12-01
In Taiwan, groundwater resources play a vital role in regional supply management. Because groundwater resources have been used without proper management for decades, several kinds of natural hazards, such as land subsidence, have occurred. The Choshui alluvial fan is one of the hot spots in Taiwan. For sustainable management, accurate estimation of recharge is the most important information. This accuracy is highly related to the uncertainty of the specific yield (Sy). Moreover, because the value of Sy must be determined via a multi-well pumping test, the installation cost of the multi-well system limits the number of field tests. Therefore, the low spatial density of field tests for Sy makes the estimate of recharge highly uncertain. The proposed method combines MODFLOW with a numerical integration procedure that calculates the gravity variations. Heterogeneous parameters (Sy) can be assigned to MODFLOW cells. An inverse procedure is then applied to interpret and identify the Sy value around the gravity station. The proposed methodology is applied to the Choshui alluvial fan, one of the most important groundwater basins in Taiwan. Three gravity measurement stations, "GS01", "GS02" and "GS03", were established. The location of GS01 is in the neighborhood of a groundwater observation well where pumping test data are available. The Sy value estimated from the gravity measurements collected at GS01 compares favorably with that obtained from the traditional pumping test. The comparison verifies the correctness and accuracy of the proposed method. We then use the gravity measurements collected at GS02 and GS03 to estimate the Sy values in areas where no pumping test data exist. Using the estimated values obtained from gravity measurements, the spatial distribution of the values of specific yield for the aquifer can be further refined. The proposed method is a cost-saving and accurate alternative for the estimation of specific yield in
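The gravity-storage link that makes Sy observable can be sketched in its simplest form. The example below uses the idealized infinite-slab (Bouguer-plate) approximation rather than the paper's full MODFLOW-coupled numerical integration: a water-table rise dh stores a sheet of water producing a surface gravity change dg = 2πG·ρw·Sy·dh, so Sy follows directly from a measured dg:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
RHO_W = 1000.0       # density of water, kg/m^3
UGAL_PER_MS2 = 1e8   # 1 m/s^2 = 1e8 microGal

def delta_g_ugal(sy, dh_m):
    """Gravity change (microGal) from a water-table rise dh_m at specific yield sy."""
    return 2.0 * math.pi * G * RHO_W * sy * dh_m * UGAL_PER_MS2

def invert_sy(dg_ugal, dh_m):
    """Recover specific yield from an observed gravity change."""
    return dg_ugal / (2.0 * math.pi * G * RHO_W * dh_m * UGAL_PER_MS2)

# a 2 m water-table rise with Sy = 0.15 yields roughly 12.6 microGal
print(delta_g_ugal(0.15, 2.0), invert_sy(delta_g_ugal(0.15, 2.0), 2.0))
```

Signals of order ten microGal are within reach of modern relative gravimeters, which is what makes a gravity station a cheap proxy for a multi-well pumping test.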
Campbell, Kyle K.; Braile, Thomas; Winker, Kevin
2016-01-01
The Philippine Islands are one of the most biologically diverse archipelagoes in the world. Current taxonomy, however, may underestimate levels of avian diversity and endemism in these islands. Although species limits can be difficult to determine among allopatric populations, quantitative methods for comparing phenotypic and genotypic data can provide useful metrics of divergence among populations and identify those that merit consideration for elevation to full species status. Using a conceptual approach that integrates genetic and phenotypic data, we compared populations among 48 species, estimating genetic divergence (p-distance) using the mtDNA marker ND2 and comparing plumage and morphometrics of museum study skins. Using conservative speciation thresholds, pairwise comparisons of genetic and phenotypic divergence suggested possible species-level divergences in more than half of the species studied (25 out of 48). In speciation process space, divergence routes were heterogeneous among taxa. Nearly all populations that surpassed high genotypic divergence thresholds were Passeriformes, and non-Passeriformes populations surpassed high phenotypic divergence thresholds more commonly than expected by chance. Overall, there was an apparent logarithmic increase in phenotypic divergence with respect to genetic divergence, suggesting the possibility that divergence among these lineages may initially be driven by divergent selection in this allopatric system. Also, genetic endemism was high among sampled islands. Higher taxonomy affected divergence in genotype and phenotype. Although broader lineage, genetic, phenotypic, and numeric sampling is needed to further explore heterogeneity among divergence processes and to accurately assess species-level diversity in these taxa, our results support the need for substantial taxonomic revisions among Philippine birds. The conservation implications are profound. PMID:27442510
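The uncorrected genetic distance (p-distance) used for these comparisons is simple to state: the fraction of differing sites between two aligned sequences, skipping positions with missing or ambiguous data. A minimal sketch (the sequences are hypothetical, not actual ND2 data):

```python
# Proportion of differing nucleotide sites between two aligned sequences,
# counting only positions where both sequences have an unambiguous base.
def p_distance(seq1, seq2):
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a in "ACGT" and b in "ACGT"]
    if not pairs:
        raise ValueError("no comparable sites")
    return sum(a != b for a, b in pairs) / len(pairs)

# two hypothetical fragments differing at 2 of 20 comparable sites
print(p_distance("ACGTACGTACGTACGTACGT", "ACGTACGTACGTTCGTACGA"))
```

Because it applies no substitution-model correction, the p-distance slightly underestimates divergence at deeper splits, which is one reason conservative thresholds are used when flagging candidate species-level splits.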
NASA Astrophysics Data System (ADS)
Sotiropoulos, F.; Kang, S.; Chamorro, L. P.; Hill, C.
2011-12-01
The field of MHK energy is still in its infancy, lagging approximately a decade or more behind the technology and development progress made in wind energy engineering. Marine environments are characterized by complex topography and three-dimensional (3D) turbulent flows, which can greatly affect the performance and structural integrity of MHK devices and impact the Levelized Cost of Energy (LCoE). Since the deployment of multi-turbine arrays is envisioned for field applications, turbine-to-turbine interactions and turbine-bathymetry interactions need to be understood and properly modeled so that MHK arrays can be optimized on a site-specific basis. Furthermore, turbulence induced by MHK turbines alters and interacts with the nearby ecosystem and could potentially impact aquatic habitats. Increased turbulence in the wake of MHK devices can also change the shear stress imposed on the bed, ultimately affecting the sediment transport and suspension processes in the wake of these structures. Such effects, however, remain largely unexplored today. In this work, a science-based approach integrating state-of-the-art experimentation with high-resolution computational fluid dynamics is proposed as a powerful strategy for optimizing the performance of MHK devices and assessing environmental impacts. A novel numerical framework is developed for carrying out Large-Eddy Simulation (LES) in arbitrarily complex domains with embedded MHK devices. The model is able to resolve the geometrical complexity of real-life MHK devices using the Curvilinear Immersed Boundary (CURVIB) method along with a wall model for handling the flow near solid surfaces. Calculations are carried out for an axial-flow hydrokinetic turbine mounted on the bed of a rectangular open channel on a grid with nearly 200 million grid nodes. The approach flow corresponds to fully developed turbulent open channel flow and is obtained from a separate LES calculation. The specific case corresponds to that studied
NASA Astrophysics Data System (ADS)
Ahmed, Mahmoud; Eslamian, Morteza
2015-07-01
Laminar natural convection in differentially heated (β = 0°, where β is the inclination angle), inclined (β = 30° and 60°), and bottom-heated (β = 90°) square enclosures filled with a nanofluid is investigated, using a two-phase lattice Boltzmann simulation approach. The effects of the inclination angle on Nu number and convection heat transfer coefficient are studied. The effects of thermophoresis and Brownian forces which create a relative drift or slip velocity between the particles and the base fluid are included in the simulation. The effect of thermophoresis is considered using an accurate and quantitative formula proposed by the authors. Some of the existing results on natural convection are erroneous due to using wrong thermophoresis models or simply ignoring the effect. Here we show that thermophoresis has a considerable effect on heat transfer augmentation in laminar natural convection. Our non-homogenous modeling approach shows that heat transfer in nanofluids is a function of the inclination angle and Ra number. It also reveals some details of flow behavior which cannot be captured by single-phase models. The minimum heat transfer rate is associated with β = 90° (bottom-heated) and the maximum heat transfer rate occurs in an inclination angle which varies with the Ra number.
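The two slip mechanisms named in the abstract can be estimated with standard closed-form expressions. The sketch below is a hypothetical order-of-magnitude illustration using the McNab-Meisen thermophoretic velocity and the Stokes-Einstein Brownian diffusivity, not the authors' own thermophoresis formula (which the abstract does not reproduce); all property values are assumed, roughly representative of water with alumina nanoparticles.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def thermophoretic_velocity(k_f, k_p, nu, grad_T, T):
    """McNab-Meisen estimate: V_T = -beta * nu * grad(T) / T,
    with beta = 0.26 k_f / (2 k_f + k_p)."""
    beta = 0.26 * k_f / (2.0 * k_f + k_p)
    return -beta * nu * grad_T / T

def brownian_diffusivity(T, mu, d_p):
    """Stokes-Einstein diffusivity: D_B = k_B T / (3 pi mu d_p)."""
    return K_B * T / (3.0 * math.pi * mu * d_p)

# Assumed values: water (k_f = 0.6 W/m.K, nu = 1e-6 m^2/s, mu = 1e-3 Pa.s),
# alumina particles (k_p = 40 W/m.K, d_p = 30 nm), 10 K across a 10 cm cavity.
v_t = thermophoretic_velocity(k_f=0.6, k_p=40.0, nu=1e-6, grad_T=100.0, T=300.0)
d_b = brownian_diffusivity(T=300.0, mu=1e-3, d_p=30e-9)
```

Even a rough estimate like this shows both slip velocities are tiny compared with convective velocities, which is why their cumulative effect on particle distribution, rather than on momentum, is what two-phase models capture.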
NASA Astrophysics Data System (ADS)
Calvisi, Michael; Manmi, Kawa; Wang, Qianxi
2014-11-01
Ultrasound contrast agents (UCAs) are microbubbles stabilized with a shell typically of lipid, polymer, or protein and are emerging as a unique tool for noninvasive therapies ranging from gene delivery to tumor ablation. The nonspherical dynamics of contrast agents are thought to play an important role in both diagnostic and therapeutic applications, for example, causing the emission of subharmonic frequency components and enhancing the uptake of therapeutic agents across cell membranes and tissue interfaces. A three-dimensional model for nonspherical contrast agent dynamics based on the boundary integral method is presented. The effects of the encapsulating shell are approximated by adapting Hoff's model for thin-shell, spherical contrast agents to the nonspherical case. A high-quality mesh of the bubble surface is maintained by implementing a hybrid approach of the Lagrangian method and elastic mesh technique. Numerical analyses for the dynamics of UCAs in an infinite liquid and near a rigid wall are performed in parameter regimes of clinical relevance. The results show that the presence of a coating significantly reduces the oscillation amplitude and period, increases the ultrasound pressure amplitude required to incite jetting, and reduces the jet width and velocity.
NASA Astrophysics Data System (ADS)
Plant, N. G.; Long, J.; Dalyander, S.; Thompson, D.; Miselis, J. L.
2013-12-01
Natural resource and hazard management of barrier islands requires an understanding of geomorphic changes associated with long-term processes and storms. Uncertainty exists in understanding how long-term processes interact with the geomorphic changes caused by storms and the resulting perturbations of the long-term evolution trajectories. We use high-resolution data sets to initialize and correct high-fidelity numerical simulations of oceanographic forcing and resulting barrier island evolution. We simulate two years of observed storms to determine the individual and cumulative impacts of these events. Results are separated into cross-shore and alongshore components of sediment transport and compared with observed topographic and bathymetric changes during these time periods. The discrete island change induced by these storms is integrated with previous knowledge of long-term net alongshore sediment transport to project island evolution. The approach has been developed and tested using data collected at the Chandeleur Island chain off the coast of Louisiana (USA). The simulation time period included impacts from tropical and winter storms, as well as a human-induced perturbation associated with construction of a sand berm along the island shoreline. The predictions and observations indicated that storm and long-term processes both contribute to the migration, lowering, and disintegration of the artificial berm and natural island. Further analysis will determine the relative importance of cross-shore and alongshore sediment transport processes and the dominant time scales that drive each of these processes and subsequent island morphologic response.
Morphology and dynamics of piercement structures: an integrated laboratory and numerical study
NASA Astrophysics Data System (ADS)
Galland, Olivier; Gisler, Galen R.; Haug, Øystein T.
2013-04-01
Piercement structures are numerous in many geological settings, including pockmarks, mud volcanoes, hydrothermal vents, maar-diatreme volcanoes, volcanic conduits in stratovolcanoes, and kimberlite volcanoes. These piercement structures exhibit various shapes, from sub-vertical pipes piercing through the country rock to open and wide conduits, such as volcanic craters resulting from volcanic explosions (e.g., Mount Pinatubo). In this contribution, we present an integrated laboratory/numerical study to constrain the dynamics of piercement structures and unravel the processes that control their morphology. The laboratory experiments consist of a Hele-Shaw cell filled with a pack of cohesive fine-grained granular material, at the bottom of which a volume Vt of pressurized air is injected at high velocity. As a result of air injection, a piercement structure develops through the medium, and its morphology and evolution are monitored with an ultra-fast camera. We systematically varied the thickness of the model h and the injection pressure P, and show that two morphologies of piercement structures develop: vertical and V-shaped conduits. In a phase diagram with h and P as horizontal and vertical axes, respectively, the two morphologies group into two distinct domains separated by a transition line of critical slope P-h. This phase diagram shows that vertical conduits form for high P/low h, whereas V-shaped conduits form for low P/high h. 2D numerical simulations are performed using Sage, a finite-volume hydrocode developed at the Los Alamos National Laboratory. We ran simulations in which we systematically varied the input pressure P and the strength of the country rock T. Our simulations produced three types of piercement structures: vertical, sub-horizontal, and V-shaped conduits. In a phase diagram with T and P as horizontal and vertical axes, respectively, the three morphologies group into distinct domains separated by transition lines of critical slopes P-T. Vertical
NASA Technical Reports Server (NTRS)
Chan, William M.
1992-01-01
The following papers are presented: (1) numerical methods for the simulation of complex multi-body flows with applications for the Integrated Space Shuttle vehicle; (2) a generalized scheme for 3-D hyperbolic grid generation; (3) collar grids for intersecting geometric components within the Chimera overlapped grid scheme; and (4) application of the Chimera overlapped grid scheme to simulation of Space Shuttle ascent flows.
NASA Technical Reports Server (NTRS)
Banyukevich, A.; Ziolkovski, K.
1975-01-01
A number of hybrid methods for solving Cauchy problems are described, based on an evaluation of the advantages of single- and multi-point numerical integration methods. The selection criterion is the principle of minimizing computer time. The methods discussed include the Nordsieck method, the Bulirsch-Stoer extrapolation method, and the method of recursive Taylor-Steffensen power series.
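Of the methods listed, the Bulirsch-Stoer approach is perhaps the simplest to sketch: the modified-midpoint method is run with increasing substep counts and the results are Richardson-extrapolated to zero step size. The following is a minimal illustrative sketch, not the paper's implementation; the test problem y' = y and the substep sequence are assumed for demonstration.

```python
import math

def modified_midpoint(f, t0, y0, t1, n):
    """Gragg's modified midpoint method with n substeps on [t0, t1]."""
    h = (t1 - t0) / n
    y_prev = y0
    y_curr = y0 + h * f(t0, y0)
    for i in range(1, n):
        y_prev, y_curr = y_curr, y_prev + 2 * h * f(t0 + i * h, y_curr)
    # Gragg's smoothing step cancels the odd-order error terms,
    # leaving an error expansion in even powers of h
    return 0.5 * (y_prev + y_curr + h * f(t1, y_curr))

def bulirsch_stoer_step(f, t0, y0, t1, k_max=6):
    """One extrapolated step: Neville's polynomial extrapolation in h^2."""
    ns = [2 * (i + 1) for i in range(k_max)]  # substep counts 2, 4, 6, ...
    T = [[0.0] * k_max for _ in range(k_max)]
    for i, n in enumerate(ns):
        T[i][0] = modified_midpoint(f, t0, y0, t1, n)
        for j in range(1, i + 1):
            r = (ns[i] / ns[i - j]) ** 2      # ratio of squared step sizes
            T[i][j] = T[i][j - 1] + (T[i][j - 1] - T[i - 1][j - 1]) / (r - 1)
    return T[k_max - 1][k_max - 1]

# y' = y with y(0) = 1 integrated over [0, 1]; exact answer is e
approx = bulirsch_stoer_step(lambda t, y: y, 0.0, 1.0, 1.0)
```

The trade-off the paper evaluates is visible here: each extrapolation column costs more derivative evaluations, so minimizing computer time means stopping the table as soon as the requested tolerance is met.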
ERIC Educational Resources Information Center
Bonotto, C.
1995-01-01
Attempted to verify knowledge regarding decimal and rational numbers in children ages 10-14. Discusses how pupils can receive and assimilate extensions of the number system from natural numbers to decimals and fractions and later can integrate this extension into a single and coherent numerical structure. (Author/MKR)
NASA Technical Reports Server (NTRS)
Lundberg, J. B.; Feulner, M. R.; Abusali, P. A. M.; Ho, C. S.
1991-01-01
The method of modified back differences, a technique that significantly reduces the numerical integration errors associated with crossing shadow boundaries using a fixed-mesh multistep integrator without a significant increase in computer run time, is presented. While Hubbard's integral approach can produce significant improvements to the trajectory solution, the interpolation method provides the best overall results. It is demonstrated that iterating on the point mass term correction is also important for achieving the best overall results. It is also shown that the method of modified back differences can be implemented with only a small increase in execution time.
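The accuracy loss at a shadow boundary, and the benefit of treating the crossing explicitly, can be illustrated with a toy problem. The sketch below is hypothetical: it uses a single RK4 step on a piecewise-constant acceleration rather than the paper's fixed-mesh multistep integrator with modified back differences, and the boundary time and step sizes are assumed.

```python
T_SHADOW = 0.5  # shadow-boundary crossing time (assumed for illustration)
EPS = 1e-12

def accel(t):
    # piecewise-constant acceleration: full force in sunlight, zero in shadow
    return 1.0 if t < T_SHADOW else 0.0

def rk4_step(f, t, v, h):
    k1 = f(t)
    k2 = f(t + h / 2)
    k3 = f(t + h / 2)
    k4 = f(t + h)
    return v + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(nodes, f=accel, v0=0.0):
    v = v0
    for a, b in zip(nodes[:-1], nodes[1:]):
        v = rk4_step(f, a, v, b - a)
    return v

# naive: the step [0.3, 0.6] straddles the boundary, so the quadrature
# underlying RK4 samples both force regimes and the step loses accuracy
naive = integrate([0.0, 0.3, 0.6, 0.9, 1.0])

# corrected: split the step at the boundary, using the one-sided (left-limit)
# force up to the crossing, analogous to restarting the integrator there
left_force = lambda t: accel(min(t, T_SHADOW - EPS))
v_at_boundary = integrate([0.0, 0.3, T_SHADOW], f=left_force)
corrected = integrate([T_SHADOW, 0.6, 0.9, 1.0], v0=v_at_boundary)

exact = 0.5  # integral of accel over [0, 1]
```

Restarting a fixed-mesh multistep method at every crossing is what makes the naive fix expensive; the appeal of modified back differences is that they recover most of the accuracy without that restart cost.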
NASA Astrophysics Data System (ADS)
Furlong, Kevin P.; Govers, Rob; Herman, Matthew
2016-04-01
last for decades after a major event (e.g., Alaska 1964). We have integrated the observed patterns of upper-plate displacements (and deformation) with models of subduction zone evolution that allow us to incorporate both the transient behavior associated with post-earthquake viscous re-equilibration and the underlying long-term, relatively constant elastic strain accumulation. By modeling the earthquake cycle with a visco-elastic numerical model over numerous earthquake cycles, we have developed a framework model for the megathrust cycle that is constrained by observations made at a variety of plate boundary zones at different stages in their earthquake cycles (see paper by Govers et al., this meeting). Our results indicate that the observed patterns of co-, post-, and inter-seismic deformation are largely controlled by the interplay between elastic and viscous processes. Observed displacements represent the competition between steady elastic-strain accumulation driven by plate boundary coupling and post-earthquake viscous behavior in response to the coseismic loading of the system by the rapid elastic rebound. The application of this framework model to observations from subduction zone observatories highlights the dangers of simply extrapolating current deformation observations to the overall strain accumulation state of the subduction zone, and allows us to develop improved assessments of the slip deficit accumulating within the seismogenic zone and the near-future earthquake potential of different segments of the subduction plate boundary.