Science.gov

Sample records for accurate numerical integration

  1. Analysis and accurate numerical solutions of the integral equation derived from the linearized BGKW equation for the steady Couette flow

    NASA Astrophysics Data System (ADS)

    Jiang, Shidong; Luo, Li-Shi

    2016-07-01

    The integral equation for the flow velocity u(x; k) in the steady Couette flow derived from the linearized Bhatnagar-Gross-Krook-Welander kinetic equation is studied in detail, both theoretically and numerically, over a wide range of the Knudsen number k between 0.003 and 100.0. First, it is shown that the integral equation is a Fredholm equation of the second kind in which the norm of the compact integral operator is less than 1 on L^p for any 1 ≤ p ≤ ∞, so a unique solution exists and is given by the Neumann series. Second, it is shown that the solution is logarithmically singular at the endpoints. More precisely, if x = 0 is an endpoint, then the solution can be expanded as a double power series of the form ∑_{n=0}^∞ ∑_{m=0}^∞ c_{n,m} x^n (x ln x)^m about x = 0 on a small interval x ∈ (0, a) for some a > 0. Third, a high-order adaptive numerical algorithm is designed to compute the solution to high precision. The solutions for the flow velocity u(x; k), the stress P_xy(k), and the half-channel mass flow rate Q(k) are obtained over the range 0.003 ≤ k ≤ 100.0; these solutions are accurate to at least twelve significant digits and can therefore be used as benchmark solutions.
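    The Neumann-series argument can be made concrete with a short sketch: for any Fredholm equation of the second kind u = f + Ku with operator norm less than 1, the iterates u_{m+1} = f + Ku_m converge to the unique solution. The kernel, quadrature, and right-hand side below are illustrative stand-ins (not the BGKW kernel), assuming NumPy is available:

```python
import numpy as np

# Discretize u(x) = f(x) + ∫_0^1 K(x,y) u(y) dy on a uniform grid.
# The kernel K(x,y) = 0.4*exp(-|x-y|) is an illustrative stand-in whose
# operator norm is well below 1, so the Neumann series converges.
n = 200
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / n)                      # simple quadrature weights
K = 0.4 * np.exp(-np.abs(x[:, None] - x[None, :])) * w[None, :]
f = np.sin(np.pi * x)

# Neumann series: u_{m+1} = f + K u_m, starting from u_0 = f.
u = f.copy()
for _ in range(100):
    u = f + K @ u

# A direct solve of (I - K) u = f confirms the series limit.
u_direct = np.linalg.solve(np.eye(n) - K, f)
print(np.max(np.abs(u - u_direct)))           # tiny: the series has converged
```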

  2. Numerical Integration

    ERIC Educational Resources Information Center

    Sozio, Gerry

    2009-01-01

    Senior secondary students cover numerical integration techniques in their mathematics courses. In particular, students would be familiar with the "midpoint rule," the elementary "trapezoidal rule" and "Simpson's rule." This article derives these techniques by methods which secondary students may not be familiar with and an approach that…
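    The three rules mentioned can each be stated in a few lines; the sketch below is a generic illustration, not taken from the article:

```python
import math

def midpoint(f, a, b, n):
    # Midpoint rule: sample the middle of each of n subintervals.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def trapezoid(f, a, b, n):
    # Trapezoidal rule: endpoints get half weight.
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def simpson(f, a, b, n):
    # Simpson's rule: weights 1, 4, 2, 4, ..., 4, 1; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

# ∫_0^π sin x dx = 2; Simpson's rule converges fastest (O(h^4) vs O(h^2)).
for rule in (midpoint, trapezoid, simpson):
    print(rule.__name__, rule(math.sin, 0.0, math.pi, 10))
```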

  3. Accurate Evaluation of Quantum Integrals

    NASA Technical Reports Server (NTRS)

    Galant, David C.; Goorvitch, D.

    1994-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided and that one can extrapolate expectation values, rather than the wavefunctions, to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
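    Richardson's extrapolation itself is independent of the Schrödinger application; a minimal sketch on a central-difference derivative (an illustrative stand-in for the paper's finite-difference operator) shows how halving the step and combining results raises the order:

```python
import math

def central_diff(f, x, h):
    # Second-order accurate first derivative: error = c2*h^2 + c4*h^4 + ...
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson(f, x, h, levels=4):
    # Richardson tableau: halving h and combining pairs of entries
    # eliminates error terms, raising the order from O(h^2) to O(h^(2*levels)).
    T = [[central_diff(f, x, h / 2**i)] for i in range(levels)]
    for j in range(1, levels):
        for i in range(j, levels):
            fac = 4.0**j
            T[i].append((fac * T[i][j - 1] - T[i - 1][j - 1]) / (fac - 1.0))
    return T[-1][-1]

# d/dx sin(x) at x = 1 is cos(1); compare crude vs extrapolated accuracy.
exact = math.cos(1.0)
print(abs(central_diff(math.sin, 1.0, 0.1) - exact))   # crude-mesh error
print(abs(richardson(math.sin, 1.0, 0.1) - exact))     # far smaller error
```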

  4. Accurate numerical solutions of conservative nonlinear oscillators

    NASA Astrophysics Data System (ADS)

    Khan, Najeeb Alam; Nasir Uddin, Khan; Nadeem Alam, Khan

    2014-12-01

    The objective of this paper is to analyze the vibration of a conservative nonlinear oscillator of the form u'' + lambda u + u^(2n-1) + (1 + epsilon^2 u^(4m))^(1/2) = 0 for arbitrary powers n and m. The method converts the differential equation into sets of algebraic equations that are solved numerically. Results are presented for three different cases: a higher-order Duffing equation, an equation with an irrational restoring force, and a plasma physics equation. The method is found to be valid for any arbitrary order of n and m, and comparisons with results found in the literature show that it gives accurate results.
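    A standard accuracy check for conservative oscillators is to monitor the conserved energy along a numerical trajectory. The sketch below integrates the undamped Duffing member of the family above (n = 2, with the square-root term dropped for simplicity) using classical RK4; it is an illustration, not the paper's method:

```python
def rk4_step(f, y, t, h):
    # One classical fourth-order Runge-Kutta step for y' = f(t, y).
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h*ki for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Duffing oscillator u'' + u + u^3 = 0: a conservative system whose
# energy E = v^2/2 + u^2/2 + u^4/4 is an exact invariant of the flow.
def rhs(t, y):
    u, v = y
    return [v, -u - u**3]

def energy(y):
    u, v = y
    return 0.5*v*v + 0.5*u*u + 0.25*u**4

y, t, h = [1.0, 0.0], 0.0, 0.01
E0 = energy(y)
for _ in range(10000):          # integrate to t = 100
    y = rk4_step(rhs, y, t, h)
    t += h
print(abs(energy(y) - E0))      # drift stays tiny for an accurate integrator
```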

  5. Rythmos Numerical Integration Package

    SciTech Connect

    Coffey, Todd S.; Bartlett, Roscoe A.

    2006-09-01

    Rythmos numerically integrates transient differential equations. The differential equations can be explicit or implicit ordinary differential equations, or can be formulated as fully implicit differential-algebraic equations. Methods currently include backward Euler, forward Euler, explicit Runge-Kutta, and implicit BDF. Native support for operator-split methods and strict modularity are strong design goals. Forward sensitivity computations will be included in the first release, with adjoint sensitivities coming in the near future. Rythmos relies heavily on Thyra for linear algebra and on nonlinear solver interfaces to AztecOO, Amesos, IFPack, and NOX in Trilinos. Rythmos is especially suited for stiff differential equations and those applications where operator-split methods have a big advantage, e.g., computational fluid dynamics and convection-diffusion equations.

  7. Higher order accurate partial implicitization: An unconditionally stable fourth-order-accurate explicit numerical technique

    NASA Technical Reports Server (NTRS)

    Graves, R. A., Jr.

    1975-01-01

    The previously obtained second-order-accurate partial implicitization numerical technique used in the solution of fluid dynamic problems was modified with little complication to achieve fourth-order accuracy. A Von Neumann stability analysis demonstrated the unconditional linear stability of the technique. The order of the truncation error was deduced from the Taylor series expansions of the linearized difference equations and was verified by numerical solutions to Burgers' equation. For comparison, results were also obtained for Burgers' equation using a second-order-accurate partial-implicitization scheme, as well as the fourth-order scheme of Kreiss.

  8. Numerical integration of subtraction terms

    NASA Astrophysics Data System (ADS)

    Seth, Satyajit; Weinzierl, Stefan

    2016-06-01

    Numerical approaches to higher-order calculations often employ subtraction terms, both for the real emission and the virtual corrections. These subtraction terms have to be added back. In this paper we show that at NLO the real subtraction terms, the virtual subtraction terms, the integral representations of the field renormalization constants and—in the case of initial-state partons—the integral representation for the collinear counterterm can be grouped together to give finite integrals, which can be evaluated numerically. This is useful for an extension towards next-to-next-to-leading order.

  9. The development of accurate and efficient methods of numerical quadrature

    NASA Technical Reports Server (NTRS)

    Feagin, T.

    1973-01-01

    Some new methods for performing numerical quadrature of an integrable function over a finite interval are described. Each method provides a sequence of approximations of increasing order to the value of the integral. Each approximation makes use of all previously computed values of the integrand. The points at which new values of the integrand are computed are selected in such a way that the order of the approximation is maximized. The methods are compared with the quadrature methods of Clenshaw and Curtis, Gauss, Patterson, and Romberg using several examples.
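    Romberg's method, one of the comparison methods named above, is the textbook example of a quadrature sequence in which each new approximation reuses all previously computed integrand values; a minimal sketch:

```python
import math

def romberg(f, a, b, levels=6):
    # R[i][0]: trapezoid rule with 2^i panels, reusing all earlier f-values;
    # R[i][j]: Richardson extrapolation eliminating the h^(2j) error term.
    R = [[0.5 * (b - a) * (f(a) + f(b))]]
    for i in range(1, levels):
        h = (b - a) / 2**i
        # only the NEW midpoints are evaluated; old values are reused
        new = sum(f(a + (2*k - 1) * h) for k in range(1, 2**(i - 1) + 1))
        R.append([0.5 * R[i - 1][0] + h * new])
        for j in range(1, i + 1):
            fac = 4.0**j
            R[i].append((fac * R[i][j - 1] - R[i - 1][j - 1]) / (fac - 1.0))
    return R[-1][-1]

# ∫_0^1 e^x dx = e - 1, to near machine precision with only 33 evaluations
print(abs(romberg(math.exp, 0.0, 1.0) - (math.e - 1.0)))
```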

  10. Accurate complex scaling of three dimensional numerical potentials

    SciTech Connect

    Cerioni, Alessandro; Genovese, Luigi; Duchemin, Ivan; Deutsch, Thierry

    2013-05-28

    The complex scaling method, which consists in continuing the spatial coordinates into the complex plane, is a well-established method that allows one to compute resonant eigenfunctions of the time-independent Schrödinger operator. Whenever it is desirable to apply complex scaling to investigate resonances in physical systems defined on numerical discrete grids, the most direct approach relies on applying a similarity transformation to the original, unscaled Hamiltonian. We show that such an approach can be conveniently implemented in the Daubechies wavelet basis set, featuring a very promising level of generality, high accuracy, and no need for artificial convergence parameters. Complex scaling of three-dimensional numerical potentials can thus be performed efficiently and accurately. By carrying out an illustrative resonant-state computation in the case of a one-dimensional model potential, we then show that our wavelet-based approach may disclose new exciting opportunities in the field of computational non-Hermitian quantum mechanics.

  11. Fresnel Integral Equations: Numerical Properties

    SciTech Connect

    Adams, R J; Champagne, N J II; Davis, B A

    2003-07-22

    A spatial-domain solution to the problem of electromagnetic scattering from a dielectric half-space is outlined. The resulting half-space operators are referred to as Fresnel surface integral operators. When used as preconditioners for nonplanar geometries, the Fresnel operators yield surface Fresnel integral equations (FIEs) which are stable with respect to dielectric constant, discretization, and frequency. Numerical properties of the formulations are discussed.

  12. Cuba: Multidimensional numerical integration library

    NASA Astrophysics Data System (ADS)

    Hahn, Thomas

    2016-08-01

    The Cuba library offers four independent routines for multidimensional numerical integration: Vegas, Suave, Divonne, and Cuhre. The four algorithms work by very different methods, yet all can integrate vector integrands and have very similar Fortran, C/C++, and Mathematica interfaces. Their invocations are nearly identical, making it easy to cross-check results by substituting one method for another. For further safeguarding, the output is supplemented by a chi-square probability that quantifies the reliability of the error estimate.
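    The cross-checking workflow the abstract describes can be imitated without the Cuba API: evaluate the same integral with two independent estimators, each reporting an error estimate, and verify that the results agree within errors. The sketch below uses plain and crudely stratified Monte Carlo as hypothetical stand-ins for Cuba's routines:

```python
import random

def plain_mc(f, dim, n, rng):
    # Plain Monte Carlo on the unit hypercube, with a standard-error estimate.
    vals = [f([rng.random() for _ in range(dim)]) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean)**2 for v in vals) / (n - 1)
    return mean, (var / n) ** 0.5

def stratified_mc(f, dim, n_per_cell, cells, rng):
    # Stratify the first axis into `cells` slabs: a crude stand-in for the
    # adaptive subdivision that routines like Suave or Divonne perform.
    total, var_sum = 0.0, 0.0
    for c in range(cells):
        vals = [f([(c + rng.random()) / cells] +
                  [rng.random() for _ in range(dim - 1)])
                for _ in range(n_per_cell)]
        m = sum(vals) / n_per_cell
        v = sum((x - m)**2 for x in vals) / (n_per_cell - 1)
        total += m / cells
        var_sum += v / (n_per_cell * cells**2)
    return total, var_sum ** 0.5

# ∫ over [0,1]^3 of x*y*z is 1/8; the two estimates should agree within errors.
f = lambda p: p[0] * p[1] * p[2]
rng = random.Random(1)
e1, s1 = plain_mc(f, 3, 20000, rng)
e2, s2 = stratified_mc(f, 3, 2000, 10, rng)
print(e1, "+/-", s1)
print(e2, "+/-", s2)
```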

  13. Accurate Critical Stress Intensity Factor Griffith Crack Theory Measurements by Numerical Techniques

    PubMed Central

    Petersen, Richard C.

    2014-01-01

    Critical stress intensity factor (KIc) has been an approximation for fracture toughness using only load-cell measurements. However, artificial man-made cracks several orders of magnitude longer and wider than natural flaws have required a correction factor term (Y) that can be up to about 3 times the recorded experimental value [1-3]. In fact, over 30 years ago a National Academy of Sciences advisory board stated that empirical KIc testing was of serious concern and further requested that an accurate bulk fracture toughness method be found [4]. Now that fracture toughness can be calculated accurately by numerical integration from the load/deflection curve as resilience, work of fracture (WOF) and strain energy release (SIc) [5, 6], KIc appears to be unnecessary. However, the large body of previous KIc experimental test results found in the literature offers the opportunity for continued meta-analysis with other more practical and accurate fracture toughness results using energy methods and numerical integration. Therefore, KIc is derived from the classical Griffith Crack Theory [6] to include SIc as a more accurate term for strain energy release rate (𝒢Ic), along with crack surface energy (γ), crack length (a), modulus (E), applied stress (σ), Y, crack-tip plastic zone defect region (rp) and yield strength (σys), all of which can be determined from load and deflection data. Polymer-matrix discontinuous quartz-fiber-reinforced composites were prepared for flexural mechanical testing to accentuate toughness differences, comprising 3 mm fibers at different volume percentages from 0-54.0 vol% and, at 28.2 vol%, different fiber lengths from 0.0-6.0 mm. Results provided a new correction factor and regression analyses between several numerical integration fracture toughness test methods to support KIc results. Further, accurate bulk KIc experimental values are compared with empirical test results found in the literature. Also, several fracture toughness mechanisms

  14. Efficient numerical evaluation of Feynman integrals

    NASA Astrophysics Data System (ADS)

    Li, Zhao; Wang, Jian; Yan, Qi-Shu; Zhao, Xiaoran

    2016-03-01

    Feynman loop integrals are a key ingredient in the calculation of higher-order radiative effects and are essential for reliable and accurate theoretical predictions. We improve the efficiency of numerical integration in sector decomposition by implementing a quasi-Monte Carlo method associated with the CUDA/GPU technique. For demonstration we present the results of several Feynman integrals up to two loops, in both Euclidean and physical kinematic regions, in comparison with those obtained from FIESTA3. It is shown that both planar and non-planar two-loop master integrals in the physical kinematic region can be evaluated accurately in less than half a minute, which makes the direct numerical approach viable for precise investigation of higher-order effects in multi-loop processes, e.g. the next-to-leading order QCD effect in Higgs pair production via gluon fusion with a finite top quark mass. Supported by the Natural Science Foundation of China (11305179, 11475180), Youth Innovation Promotion Association, CAS, IHEP Innovation (Y4545170Y2), State Key Lab for Electronics and Particle Detectors, Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y4KF061CJ1), Cluster of Excellence Precision Physics, Fundamental Interactions and Structure of Matter (PRISMA-EXC 1098)

  15. Fast and Accurate Learning When Making Discrete Numerical Estimates.

    PubMed

    Sanborn, Adam N; Beierholm, Ulrik R

    2016-04-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155

  17. Accurate numerical solution of compressible, linear stability equations

    NASA Technical Reports Server (NTRS)

    Malik, M. R.; Chuang, S.; Hussaini, M. Y.

    1982-01-01

    The present investigation is concerned with a fourth order accurate finite difference method and its application to the study of the temporal and spatial stability of the three-dimensional compressible boundary layer flow on a swept wing. This method belongs to the class of compact two-point difference schemes discussed by White (1974) and Keller (1974). The method was apparently first used for solving the two-dimensional boundary layer equations. Attention is given to the governing equations, the solution technique, and the search for eigenvalues. A general purpose subroutine is employed for solving a block tridiagonal system of equations. The computer time can be reduced significantly by exploiting the special structure of two matrices.
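    The block-tridiagonal solve at the heart of such compact schemes follows the same elimination pattern as the scalar Thomas algorithm, sketched below with scalar entries (the block version replaces the divisions with small matrix inversions); the example system is illustrative:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal.

    Forward elimination followed by back substitution; a block-tridiagonal
    solver follows the same pattern with small matrix inversions in place
    of the scalar divisions.
    """
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1D Poisson stencil [-1, 2, -1] with a known solution x = (1, 2, 3, 4);
# the right-hand side d = A @ x_true is computed by hand: [0, 0, 0, 5].
a = [0.0, -1.0, -1.0, -1.0]
b = [2.0, 2.0, 2.0, 2.0]
c = [-1.0, -1.0, -1.0, 0.0]
d = [0.0, 0.0, 0.0, 5.0]
print(thomas(a, b, c, d))    # recovers [1, 2, 3, 4]
```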

  18. Numerical Integration: One Step at a Time

    ERIC Educational Resources Information Center

    Yang, Yajun; Gordon, Sheldon P.

    2016-01-01

    This article looks at the effects that adding a single extra subdivision has on the level of accuracy of some common numerical integration routines. Instead of automatically doubling the number of subdivisions for a numerical integration rule, we investigate what happens with a systematic method of judiciously selecting one extra subdivision for…

  19. Accurate and efficient spin integration for particle accelerators

    NASA Astrophysics Data System (ADS)

    Abell, Dan T.; Meiser, Dominic; Ranjbar, Vahid H.; Barber, Desmond P.

    2015-02-01

    Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code gpuSpinTrack. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.
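    Quaternion-based spin rotation, as used above, composes rotations with Hamilton products and applies them as v' = q v q*; a minimal sketch (an illustration, not the gpuSpinTrack code):

```python
import math

def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z)
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def rot_quat(axis, angle):
    # Unit quaternion for a rotation by `angle` about the unit vector `axis`
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), axis[0]*s, axis[1]*s, axis[2]*s)

def rotate(q, v):
    # Rotate spin vector v: v' = q v q*, with v embedded as (0, v)
    w, x, y, z = qmul(qmul(q, (0.0,) + tuple(v)), (q[0], -q[1], -q[2], -q[3]))
    return (x, y, z)

# Composing precession steps stays on the unit sphere: two 45-degree
# rotations about z equal one 90-degree rotation, taking x-hat to y-hat.
q45 = rot_quat((0.0, 0.0, 1.0), math.pi / 4)
q90 = qmul(q45, q45)
print(rotate(q90, (1.0, 0.0, 0.0)))   # ≈ (0, 1, 0)
```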

  20. An Integrative Theory of Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert; Lortie-Forgues, Hugues

    2014-01-01

    Understanding of numerical development is growing rapidly, but the volume and diversity of findings can make it difficult to perceive any coherence in the process. The integrative theory of numerical development posits that a coherent theme is present, however--progressive broadening of the set of numbers whose magnitudes can be accurately…

  1. Numerical solution of boundary-integral equations for molecular electrostatics.

    PubMed

    Bardhan, Jaydeep P

    2009-03-01

    Numerous molecular processes, such as ion permeation through channel proteins, are governed by relatively small changes in energetics. As a result, theoretical investigations of these processes require accurate numerical methods. In the present paper, we evaluate the accuracy of two approaches to simulating boundary-integral equations for continuum models of the electrostatics of solvation. The analysis emphasizes boundary-element method simulations of the integral-equation formulation known as the apparent-surface-charge (ASC) method or polarizable-continuum model (PCM). In many numerical implementations of the ASC/PCM model, one forces the integral equation to be satisfied exactly at a set of discrete points on the boundary. We demonstrate in this paper that this approach to discretization, known as point collocation, is significantly less accurate than an alternative approach known as qualocation. Furthermore, the qualocation method offers this improvement in accuracy without increasing simulation time. Numerical examples demonstrate that the electrostatic part of the solvation free energy, when calculated using the collocation and qualocation methods, can differ significantly; for a polypeptide, the answers can differ by as much as 10 kcal/mol (approximately 4% of the total electrostatic contribution to solvation). The applicability of the qualocation discretization to other integral-equation formulations is also discussed, and two equivalences between integral-equation methods are derived. PMID:19275391

  2. On constructing accurate approximations of first integrals for difference equations

    NASA Astrophysics Data System (ADS)

    Rafei, M.; Van Horssen, W. T.

    2013-04-01

    In this paper, a perturbation method based on invariance factors and multiple scales will be presented for weakly nonlinear, regularly perturbed systems of ordinary difference equations. Asymptotic approximations of first integrals will be constructed on long iteration-scales, that is, on iteration-scales of order ɛ^(-1), where ɛ is a small parameter. It will be shown that all invariance factors have to satisfy a functional equation. To show how this perturbation method works, the method is applied to a Van der Pol equation and a Rayleigh equation. It will be explicitly shown for the first time in the literature how these multiple scales should be introduced for systems of difference equations to obtain very accurate approximations of first integrals on long iteration-scales.

  3. Numerical computation of 2D Sommerfeld integrals - Decomposition of the angular integral

    NASA Astrophysics Data System (ADS)

    Dvorak, Steven L.; Kuester, Edward F.

    1992-02-01

    The computational efficiency of evaluating 2D Sommerfeld integrals can be improved through novel ways of computing the inner angular integral in polar representations. It is shown that the angular integral can be decomposed into a finite number of incomplete Lipschitz-Hankel integrals; these can in turn be calculated through series expansions, so that the angular integral can be computed by summing a series rather than applying a standard numerical integration algorithm. The technique is most efficient and accurate when piecewise-sinusoidal basis functions are employed to analyze a printed strip-dipole antenna in a layered medium.

  4. Orbital Advection by Interpolation: A Fast and Accurate Numerical Scheme for Super-Fast MHD Flows

    SciTech Connect

    Johnson, B M; Guan, X; Gammie, F

    2008-04-11

    In numerical models of thin astrophysical disks that use an Eulerian scheme, gas orbits supersonically through a fixed grid. As a result the timestep is sharply limited by the Courant condition. Also, because the mean flow speed with respect to the grid varies with position, the truncation error varies systematically with position. For hydrodynamic (unmagnetized) disks an algorithm called FARGO has been developed that advects the gas along its mean orbit using a separate interpolation substep. This relaxes the constraint imposed by the Courant condition, which now depends only on the peculiar velocity of the gas, and results in a truncation error that is more nearly independent of position. This paper describes a FARGO-like algorithm suitable for evolving magnetized disks. Our method is second-order accurate on a smooth flow and preserves ∇ · B = 0 to machine precision. The main restriction is that B must be discretized on a staggered mesh. We give a detailed description of an implementation of the code and demonstrate that it produces the expected results on linear and nonlinear problems. We also point out how the scheme might be generalized to make the integration of other supersonic/super-fast flows more efficient. Although our scheme reduces the variation of truncation error with position, it does not eliminate it. We show that the residual position dependence leads to characteristic radial variations in the density over long integrations.
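    The core FARGO idea, advection by interpolation along the mean orbit, can be sketched in one dimension: split the orbital shift into an exact integer roll plus a fractional remainder handled by interpolation, so only the remainder incurs truncation error. A hedged stand-in using linear interpolation:

```python
def advect_periodic(u, shift):
    """FARGO-style advection of a periodic 1D profile by `shift` cells.

    The shift is split into an integer part (an exact circular roll, immune
    to the Courant condition no matter how large) and a fractional remainder
    handled by linear interpolation; only the remainder contributes
    truncation error.
    """
    n = len(u)
    k = int(shift // 1)            # integer number of cells
    frac = shift - k               # fractional remainder in [0, 1)
    rolled = [u[(i - k) % n] for i in range(n)]
    return [(1.0 - frac) * rolled[i] + frac * rolled[(i - 1) % n]
            for i in range(n)]

# An integer shift is exact regardless of how large it is:
u = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0, 36.0, 49.0]
print(advect_periodic(u, 3.0))   # pure roll, no interpolation error
```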

  5. Generation of accurate integral surfaces in time-dependent vector fields.

    PubMed

    Garth, Christoph; Krishnan, Han; Tricoche, Xavier; Bobach, Tom; Joy, Kenneth I

    2008-01-01

    We present a novel approach for the direct computation of integral surfaces in time-dependent vector fields. As opposed to previous work, which we analyze in detail, our approach is based on a separation of integral surface computation into two stages: surface approximation and generation of a graphical representation. This allows us to overcome several limitations of existing techniques. We first describe an algorithm for surface integration that approximates a series of time lines using iterative refinement and computes a skeleton of the integral surface. In a second step, we generate a well-conditioned triangulation. Our approach allows a highly accurate treatment of very large time-varying vector fields in an efficient, streaming fashion. We examine the properties of the presented methods on several example datasets and perform a numerical study of its correctness and accuracy. Finally, we investigate some visualization aspects of integral surfaces. PMID:18988990

  6. Accurate object tracking system by integrating texture and depth cues

    NASA Astrophysics Data System (ADS)

    Chen, Ju-Chin; Lin, Yu-Hang

    2016-03-01

    A robust object tracking system that is invariant to object appearance variations and background clutter is proposed. Multiple instance learning with a boosting algorithm is applied to select discriminant texture information between the object and background data. Additionally, depth information, which is important to distinguish the object from a complicated background, is integrated. We propose two depth-based models that can compensate texture information to cope with both appearance variants and background clutter. Moreover, in order to reduce the risk of drifting problem increased for the textureless depth templates, an update mechanism is proposed to select more precise tracking results to avoid incorrect model updates. In the experiments, the robustness of the proposed system is evaluated and quantitative results are provided for performance analysis. Experimental results show that the proposed system can provide the best success rate and has more accurate tracking results than other well-known algorithms.

  7. Effects of aliasing on numerical integration.

    SciTech Connect

    Edwards, Timothy S.

    2005-02-01

    During the course of processing acceleration data from mechanical systems, it is often desirable to integrate the data to obtain velocity or displacement waveforms. However, those who have attempted these operations may be painfully aware that the integrated records often yield unrealistic residual values. This is true whether the data has been obtained experimentally or through numerical simulation such as Runge-Kutta integration or the explicit finite element method. In the case of experimentally obtained data, the integration errors are usually blamed on accelerometer zero shift or amplifier saturation. In the case of simulation data, erroneous integrations are often incorrectly blamed on the integration algorithm itself. This work demonstrates that seemingly small aliased content can cause appreciable errors in the integrated waveforms, and explores the unavoidable source of aliasing in both experiment and simulation: the sampling operation. Numerical analysts are often puzzled as to why the integrated acceleration from their simulation does not match the displacement output from the same simulation. This work shows that these strange results can be caused by aliasing induced by interpolation of the model output during sampling regularization.
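    The amplification mechanism is easy to reproduce: integration scales each spectral line by 1/omega, so a tone that aliases from a high to a low frequency is amplified rather than attenuated in the integrated record. An illustrative sketch (all numbers are made up for the demonstration):

```python
import math

# A 90 Hz tone sampled at 100 Hz aliases to 10 Hz.  Integrating the sampled
# record therefore produces a velocity roughly an order of magnitude larger
# than the true integral, because 1/omega is evaluated at the alias frequency.
fs, f_true = 100.0, 90.0
n = 1000
acc = [math.sin(2 * math.pi * f_true * i / fs) for i in range(n)]

# cumulative trapezoidal integration of the sampled acceleration
vel, v = [0.0], 0.0
for i in range(1, n):
    v += 0.5 * (acc[i - 1] + acc[i]) / fs
    vel.append(v)

true_amp = 1.0 / (2 * math.pi * f_true)          # scale of the exact integral
alias_amp = 1.0 / (2 * math.pi * (fs - f_true))  # scale the samples produce
print(max(abs(x) for x in vel), true_amp, alias_amp)
```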

  8. Automatic numerical integration methods for Feynman integrals through 3-loop

    NASA Astrophysics Data System (ADS)

    de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Olagbemi, O.

    2015-05-01

    We give numerical integration results for Feynman loop diagrams through 3-loop such as those covered by Laporta [1]. The methods are based on automatic adaptive integration, using iterated integration and extrapolation with programs from the QUADPACK package, or multivariate techniques from the ParInt package. The Dqags algorithm from QUADPACK accommodates boundary singularities of fairly general types. ParInt is a package for multivariate integration layered over MPI (Message Passing Interface), which runs on clusters and incorporates advanced parallel/distributed techniques such as load balancing among processes that may be distributed over a network of nodes. Results are included for 3-loop self-energy diagrams without IR (infra-red) or UV (ultra-violet) singularities. A procedure based on iterated integration and extrapolation yields a novel method of numerical regularization for integrals with UV terms, and is applied to a set of 2-loop self-energy diagrams with UV singularities.

  9. Evaluation of the Time-Derivative Coupling for Accurate Electronic State Transition Probabilities from Numerical Simulations.

    PubMed

    Meek, Garrett A; Levine, Benjamin G

    2014-07-01

    Spikes in the time-derivative coupling (TDC) near surface crossings make the accurate integration of the time-dependent Schrödinger equation in nonadiabatic molecular dynamics simulations a challenge. To address this issue, we present an approximation to the TDC based on a norm-preserving interpolation (NPI) of the adiabatic electronic wave functions within each time step. We apply NPI and two other schemes for computing the TDC in numerical simulations of the Landau-Zener model, comparing the simulated transfer probabilities to the exact solution. Though NPI does not require the analytical calculation of nonadiabatic coupling matrix elements, it consistently yields unsigned population transfer probability errors of ∼0.001, whereas analytical calculation of the TDC yields errors of 0.0-1.0 depending on the time step, the offset of the maximum in the TDC from the beginning of the time step, and the coupling strength. The approximation of Hammes-Schiffer and Tully yields errors intermediate between NPI and the analytical scheme. PMID:26279558

  10. Highly Parallel, High-Precision Numerical Integration

    SciTech Connect

    Bailey, David H.; Borwein, Jonathan M.

    2005-04-22

    This paper describes a scheme for rapidly computing numerical values of definite integrals to very high accuracy, ranging from ordinary machine precision to hundreds or thousands of digits, even for functions with singularities or infinite derivatives at endpoints. Such a scheme is of interest not only in computational physics and computational chemistry, but also in experimental mathematics, where high-precision numerical values of definite integrals can be used to numerically discover new identities. This paper discusses techniques for a parallel implementation of this scheme, then presents performance results for 1-D and 2-D test suites. Results are also given for a certain problem from mathematical physics, which features a difficult singularity, confirming a conjecture to 20,000 digit accuracy. The performance rate for this latter calculation on 1024 CPUs is 690 Gflop/s. We believe that this and one other 20,000-digit integral evaluation that we report are the highest-precision non-trivial numerical integrations performed to date.

  11. Numerical multi-loop integrals and applications

    NASA Astrophysics Data System (ADS)

    Freitas, A.

    2016-09-01

    Higher-order radiative corrections play an important role in precision studies of the electroweak and Higgs sector, as well as for the detailed understanding of large backgrounds to new physics searches. For corrections beyond the one-loop level and involving many independent mass and momentum scales, it is in general not possible to find analytic results, so that one needs to resort to numerical methods instead. This article presents an overview of a variety of numerical loop integration techniques, highlighting their range of applicability, suitability for automatization, and numerical precision and stability. In a second part of this article, the application of numerical loop integration methods in the area of electroweak precision tests is illustrated. Numerical methods were essential for obtaining full two-loop predictions for the most important precision observables within the Standard Model. The theoretical foundations for these corrections will be described in some detail, including aspects of the renormalization, resummation of leading log contributions, and the evaluation of the theory uncertainty from missing higher orders.

  12. An Integrative Method for Accurate Comparative Genome Mapping

    PubMed Central

    Swidan, Firas; Rocha, Eduardo P. C; Shmoish, Michael; Pinter, Ron Y

    2006-01-01

    We present MAGIC, an integrative and accurate method for comparative genome mapping. Our method consists of two phases: preprocessing for identifying “maximal similar segments,” and mapping for clustering and classifying these segments. MAGIC's main novelty lies in its biologically intuitive clustering approach, which aims towards both calculating reorder-free segments and identifying orthologous segments. In the process, MAGIC efficiently handles ambiguities resulting from duplications that occurred before the speciation of the considered organisms from their most recent common ancestor. We demonstrate both MAGIC's robustness and scalability: the former is asserted with respect to its initial input and with respect to its parameters' values. The latter is asserted by applying MAGIC to distantly related organisms and to large genomes. We compare MAGIC to other comparative mapping methods and provide detailed analysis of the differences between them. Our improvements allow a comprehensive study of the diversity of genetic repertoires resulting from large-scale mutations, such as indels and duplications, including explicitly transposable and phagic elements. The strength of our method is demonstrated by detailed statistics computed for each type of these large-scale mutations. MAGIC enabled us to conduct a comprehensive analysis of the different forces shaping prokaryotic genomes from different clades, and to quantify the importance of novel gene content introduced by horizontal gene transfer relative to gene duplication in bacterial genome evolution. We use these results to investigate the breakpoint distribution in several prokaryotic genomes. PMID:16933978

  13. Numerical methods for engine-airframe integration

    SciTech Connect

    Murthy, S.N.B.; Paynter, G.C.

    1986-01-01

    Various papers on numerical methods for engine-airframe integration are presented. The individual topics considered include: scientific computing environment for the 1980s, overview of prediction of complex turbulent flows, numerical solutions of the compressible Navier-Stokes equations, elements of computational engine/airframe integrations, computational requirements for efficient engine installation, application of CAE and CFD techniques to complete tactical missile design, CFD applications to engine/airframe integration, and application of a second-generation low-order panel methods to powerplant installation studies. Also addressed are: three-dimensional flow analysis of turboprop inlet and nacelle configurations, application of computational methods to the design of large turbofan engine nacelles, comparison of full potential and Euler solution algorithms for aeropropulsive flow field computations, subsonic/transonic, supersonic nozzle flows and nozzle integration, subsonic/transonic prediction capabilities for nozzle/afterbody configurations, three-dimensional viscous design methodology of supersonic inlet systems for advanced technology aircraft, and a user's technology assessment.

  14. Numerical integration for ab initio many-electron self energy calculations within the GW approximation

    SciTech Connect

    Liu, Fang; Lin, Lin; Vigil-Fowler, Derek; Lischner, Johannes; Kemper, Alexander F.; Sharifzadeh, Sahar; Jornada, Felipe H. da; Deslippe, Jack; Yang, Chao; and others

    2015-04-01

    We present a numerical integration scheme for evaluating the convolution of a Green's function with a screened Coulomb potential on the real axis in the GW approximation of the self energy. Our scheme takes the zero broadening limit in the Green's function first, replaces the numerator of the integrand with a piecewise polynomial approximation, and performs principal value integration on subintervals analytically. We give the error bound of our numerical integration scheme and show by numerical examples that it is more reliable and accurate than standard quadrature rules such as the composite trapezoidal rule. We also discuss the benefit of using different self energy expressions to perform the numerical convolution at different frequencies.
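
    For readers who want to experiment with principal-value integration, QUADPACK's Cauchy-weight routine (QAWC) is exposed through SciPy; the example below is an independent toy check against a closed form, not the GW convolution itself.

```python
import math
from scipy import integrate
from scipy.special import sici

# PV integral of cos(x)/(x - 1) over [0, 2], with a pole at x = 1.
# weight='cauchy' tells quad to use QAWC, which treats the principal
# value analytically on subintervals around the pole.
val, _err = integrate.quad(math.cos, 0.0, 2.0, weight="cauchy", wvar=1.0)

# Closed form via the sine integral: PV = -2 sin(1) Si(1)
si1, _ci1 = sici(1.0)
exact = -2.0 * math.sin(1.0) * si1
```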

  15. Research on the Evolutionary Strategy Based on AIS and Its Application on Numerical Integration

    NASA Astrophysics Data System (ADS)

    Bei, Li

    Based on the features of artificial immune systems, a new evolutionary strategy is proposed for calculating the numerical integration of functions. This evolutionary strategy includes mechanisms for swarm searching and for constructing the fitness function. Finally, numerical examples are given to verify the effectiveness of the evolutionary strategy. The results show that its performance is satisfactory and that it is more accurate than traditional methods of numerical integration, such as the trapezoidal rule and Simpson's rule.
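
    The two traditional baselines named in the abstract are easy to state; the sketch below (our own illustration, not the paper's AIS code) compares the composite trapezoidal and Simpson rules on a smooth test integrand.

```python
import numpy as np

def trapezoid_rule(f, a, b, n):
    # composite trapezoidal rule with n equal subintervals
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

def simpson_rule(f, a, b, n):
    # composite Simpson's rule; n must be even
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return (h / 3) * (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum())

exact = 1.0 - np.cos(1.0)  # integral of sin on [0, 1]
err_trap = abs(trapezoid_rule(np.sin, 0.0, 1.0, 16) - exact)
err_simp = abs(simpson_rule(np.sin, 0.0, 1.0, 16) - exact)
```

As expected, the fourth-order Simpson rule beats the second-order trapezoidal rule by several orders of magnitude at the same number of samples.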

  16. Recommendations for Achieving Accurate Numerical Simulation of Tip Clearance Flows in Transonic Compressor Rotors

    NASA Technical Reports Server (NTRS)

    VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.

    2000-01-01

    The tip clearance flows of transonic compressor rotors are important because they have a significant impact on rotor and stage performance. While numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data because these kinds of measurements are rare in the detail necessary to be useful in high-speed machines. In this paper we compare measured tip clearance flow details (e.g. trajectory and radial extent) with corresponding data obtained from a numerical simulation. Recommendations for achieving accurate numerical simulation of tip clearance flows are presented based on this comparison. Laser Doppler Velocimeter (LDV) measurements acquired in a transonic compressor rotor, NASA Rotor 35, are used. The tip clearance flow field of this transonic rotor was simulated using a Navier-Stokes turbomachinery solver that incorporates an advanced k-epsilon turbulence model derived for flows that are not in local equilibrium. Comparison between measured and simulated results indicates that simulation accuracy is primarily dependent upon the ability of the numerical code to resolve important details of a wall-bounded shear layer formed by the relative motion between the over-tip leakage flow and the shroud wall. A simple method is presented for determining the strength of this shear layer.

  17. ACCURATE BUILDING INTEGRATED PHOTOVOLTAIC SYSTEM (BIPV) ARCHITECTURAL DESIGN TOOL

    EPA Science Inventory

    One of the leading areas of renewable energy applications for the twenty-first century is building integrated photovoltaics (BIPV). Integrating photovoltaics into building structures allows the costs of the PV system to be partially offset by the solar modules also serving a s...

  18. Final Report for "Accurate Numerical Models of the Secondary Electron Yield from Grazing-incidence Collisions".

    SciTech Connect

    Seth A Veitzer

    2008-10-21

    Effects of stray electrons are a main factor limiting performance of many accelerators. Because heavy-ion fusion (HIF) accelerators will operate in regimes of higher current and with walls much closer to the beam than accelerators operating today, stray electrons might have a large, detrimental effect on the performance of an HIF accelerator. A primary source of stray electrons is electrons generated when halo ions strike the beam pipe walls. There is some research on these types of secondary electrons for the HIF community to draw upon, but this work is missing one crucial ingredient: the effect of grazing incidence. The overall goal of this project was to develop the numerical tools necessary to accurately model the effect of grazing incidence on the behavior of halo ions in an HIF accelerator, and further, to provide accurate models of heavy ion stopping powers with applications to ICF, WDM, and HEDP experiments.

  19. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models

    NASA Astrophysics Data System (ADS)

    Blackman, Jonathan; Field, Scott E.; Galley, Chad R.; Szilágyi, Béla; Scheel, Mark A.; Tiglio, Manuel; Hemberger, Daniel A.

    2015-09-01

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic -2Yℓm waveform modes resolved by the NR code up to ℓ=8 . We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).

  20. Multigrid time-accurate integration of Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.

    1993-01-01

    Efficient acceleration techniques typical of explicit steady-state solvers are extended to time-accurate calculations. Stability restrictions are greatly reduced by means of a fully implicit time discretization. A four-stage Runge-Kutta scheme with local time stepping, residual smoothing, and multigridding is used instead of traditional time-expensive factorizations. Some applications to natural and forced unsteady viscous flows show the capability of the procedure.
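
    A bare four-stage Runge-Kutta step, stripped of the paper's local time stepping, residual smoothing, and multigrid accelerators, looks like this (a minimal sketch for orientation only, not the paper's low-storage scheme):

```python
import math

def rk4_step(f, t, y, h):
    # classic four-stage Runge-Kutta update from t to t + h
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# integrate y' = y from t = 0 to 1; the exact solution is e^t
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: y, t, y, h)
    t += h
```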

  1. Efficient and accurate numerical methods for the Klein-Gordon-Schroedinger equations

    SciTech Connect

    Bao, Weizhu . E-mail: bao@math.nus.edu.sg; Yang, Li . E-mail: yangli@nus.edu.sg

    2007-08-10

    In this paper, we present efficient, unconditionally stable and accurate numerical methods for approximations of the Klein-Gordon-Schroedinger (KGS) equations with/without damping terms. The key features of our methods are based on: (i) the application of a time-splitting spectral discretization for a Schroedinger-type equation in KGS; (ii) the utilization of Fourier pseudospectral discretization for spatial derivatives in the Klein-Gordon equation in KGS; (iii) the adoption of solving the ordinary differential equations (ODEs) in phase space analytically under appropriately chosen transmission conditions between different time intervals, or applying Crank-Nicolson/leap-frog for linear/nonlinear terms for time derivatives. The numerical methods are either explicit, or implicit but solvable explicitly; they are unconditionally stable, of spectral accuracy in space and second-order accuracy in time. Moreover, they are time reversible and time transverse invariant when there are no damping terms in KGS, conserve (or keep the same decay rate of) the wave energy as that in KGS without (or with a linear) damping term, keep the same dynamics of the mean value of the meson field, and give exact results for the plane-wave solution. Extensive numerical tests are presented to confirm the above properties of our numerical methods for KGS. Finally, the methods are applied to study solitary-wave collisions in one dimension (1D), as well as dynamics of a 2D problem in KGS.

  2. Integrated optical circuits for numerical computation

    NASA Technical Reports Server (NTRS)

    Verber, C. M.; Kenan, R. P.

    1983-01-01

    The development of integrated optical circuits (IOC) for numerical-computation applications is reviewed, with a focus on the use of systolic architectures. The basic architecture criteria for optical processors are shown to be the same as those proposed by Kung (1982) for VLSI design, and the advantages of IOCs over bulk techniques are indicated. The operation and fabrication of electrooptic grating structures are outlined, and the application of IOCs of this type to an existing 32-bit, 32-Mbit/sec digital correlator, a proposed matrix multiplier, and a proposed pipeline processor for polynomial evaluation is discussed. The problems arising from the inherent nonlinearity of electrooptic gratings are considered. Diagrams and drawings of the application concepts are provided.

  3. Keyword Search over Data Service Integration for Accurate Results

    NASA Astrophysics Data System (ADS)

    Zemleris, Vidmantas; Kuznetsov, Valentin; Gwadera, Robert

    2014-06-01

    Virtual Data Integration provides a coherent interface for querying heterogeneous data sources (e.g., web services, proprietary systems) with minimum upfront effort. Still, this requires its users to learn a new query language and to get acquainted with data organization which may pose problems even to proficient users. We present a keyword search system, which proposes a ranked list of structured queries along with their explanations. It operates mainly on the metadata, such as the constraints on inputs accepted by services. It was developed as an integral part of the CMS data discovery service, and is currently available as open source.

  4. Accurate Anharmonic IR Spectra from Integrated Cc/dft Approach

    NASA Astrophysics Data System (ADS)

    Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien; Carnimeo, Ivan; Puzzarini, Cristina

    2014-06-01

    The recent implementation of the computation of infrared (IR) intensities beyond the double harmonic approximation [1] paved the route to routine calculations of infrared spectra for a wide set of molecular systems. Contrary to common beliefs, second-order perturbation theory is able to deliver results of high accuracy provided that anharmonic resonances are properly managed [1,2]. It has already been shown for several small closed- and open-shell molecular systems that the differences between coupled cluster (CC) and DFT anharmonic wavenumbers are mainly due to the harmonic terms, paving the route to introduce effective yet accurate hybrid CC/DFT schemes [2]. In this work we show that hybrid CC/DFT models can be applied also to the IR intensities, leading to the simulation of highly accurate fully anharmonic IR spectra for medium-size molecules, including ones of atmospheric interest, showing in all cases good agreement with experiment even in the spectral ranges where non-fundamental transitions are predominant [3]. [1] J. Bloino and V. Barone, J. Chem. Phys. 136, 124108 (2012) [2] V. Barone, M. Biczysko, J. Bloino, Phys. Chem. Chem. Phys., 16, 1759-1787 (2014) [3] I. Carnimeo, C. Puzzarini, N. Tasinato, P. Stoppa, A. P. Charmet, M. Biczysko, C. Cappelli and V. Barone, J. Chem. Phys., 139, 074310 (2013)

  5. Method for the numerical integration of equations of perturbed satellite motion in problems of space geodesy

    NASA Astrophysics Data System (ADS)

    Plakhov, Iu. V.; Mytsenko, A. V.; Shel'Pov, V. A.

    A numerical integration method is developed that is more accurate than Everhart's (1974) implicit single-sequence approach for integrating orbits. This method can be used to solve problems of space geodesy based on the use of highly precise laser observations.

  6. An improvement in the numerical integration procedure used in the NASA Marshall engineering thermosphere model

    NASA Technical Reports Server (NTRS)

    Hickey, Michael Philip

    1988-01-01

    A proposed replacement scheme for the integration of the barometric and diffusion equations in the NASA Marshall Engineering Thermosphere (MET) model is presented. This proposed integration scheme is based on Gaussian Quadrature. Extensive numerical testing reveals it to be faster, more accurate and more reliable than the present integration scheme (a modified form of Simpson's Rule) used in the MET model. Numerous graphical examples are provided, along with a listing of a modified form of the MET model in which subroutine INTEGRATE (using Simpson's Rule) is replaced by subroutine GAUSS (which uses Gaussian Quadrature). It is recommended that the Gaussian Quadrature integration scheme, as used here, be used in the MET model.
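
    Gaussian quadrature of the kind recommended here is straightforward with NumPy; the profile below is a stand-in exponential, not the MET barometric integrand, so treat the snippet as an assumed illustration of the mechanics.

```python
import numpy as np

# 8-point Gauss-Legendre rule, exact for polynomials up to degree 15
nodes, weights = np.polynomial.legendre.leggauss(8)

a, b = 0.0, 120.0                            # illustrative interval only
x = 0.5 * (b - a) * nodes + 0.5 * (b + a)    # map nodes from [-1, 1] to [a, b]
f = np.exp(-x / 50.0)                        # stand-in exponential profile
approx = 0.5 * (b - a) * np.dot(weights, f)  # scale weights to [a, b]
exact = 50.0 * (1.0 - np.exp(-120.0 / 50.0))  # closed form for comparison
```

For a smooth exponential-like integrand, the 8-point rule already agrees with the closed form to near machine precision, which is the kind of speed/accuracy advantage over Simpson's rule the abstract reports.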

  7. Numerical integration of asymptotic solutions of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Thurston, Gaylen A.

    1989-01-01

    Classical asymptotic analysis of ordinary differential equations derives approximate solutions that are numerically stable. However, the analysis also leads to tedious expansions in powers of the relevant parameter for a particular problem. The expansions are replaced with integrals that can be evaluated by numerical integration. The resulting numerical solutions retain the linear independence that is the main advantage of asymptotic solutions. Examples, including the Falkner-Skan equation from laminar boundary layer theory, illustrate the method of asymptotic analysis with numerical integration.

  8. On the accuracy of numerical integration over the unit sphere applied to full network models

    NASA Astrophysics Data System (ADS)

    Itskov, Mikhail

    2016-05-01

    This paper is motivated by a recent study by Verron (Mech Mater 89:216-228, 2015) which revealed huge errors of the numerical integration over the unit sphere in application to large strain problems. For the verification of numerical integration schemes we apply here other analytical integrals over the unit sphere which demonstrate much more accurate results. Relative errors of these integrals with respect to corresponding analytical solutions are evaluated also for a full network model of rubber elasticity based on a Padé approximation of the inverse Langevin function as the chain force. According to the results of our study, the numerical integration over the unit sphere can still be considered as a reliable and accurate tool for full network models.
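
    As an elementary sanity check of sphere quadrature (unrelated to the specific integration schemes evaluated in the paper), an azimuthally symmetric integrand reduces to a one-dimensional Gauss-Legendre rule in mu = cos(theta):

```python
import numpy as np

# Surface average of (e_z . n)^2 over the unit sphere:
#   (1 / 4pi) * 2pi * int_{-1}^{1} mu^2 dmu = 1/3
# An 8-point Gauss-Legendre rule integrates mu^2 exactly.
mu, w = np.polynomial.legendre.leggauss(8)
avg = 0.5 * np.dot(w, mu ** 2)
```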

  9. Numerical system utilising a Monte Carlo calculation method for accurate dose assessment in radiation accidents.

    PubMed

    Takahashi, F; Endo, A

    2007-01-01

    A system utilising radiation transport codes has been developed to derive accurate dose distributions in a human body for radiological accidents. A suitable model is quite essential for a numerical analysis. Therefore, two tools were developed to set up a 'problem-dependent' input file, defining a radiation source and an exposed person to simulate the radiation transport in an accident with the Monte Carlo calculation codes MCNP and MCNPX. Necessary resources are defined by a dialogue method with a generally used personal computer for both tools. The tools prepare human body and source models described in the input file format of the employed Monte Carlo codes. The tools were validated for dose assessment in comparison with a past criticality accident and a hypothesized exposure. PMID:17510203
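
    The sampling principle behind such transport codes can be illustrated with a deliberately simple Monte Carlo estimate (this toy is not MCNP and involves no transport physics):

```python
import random

random.seed(42)  # fixed seed for reproducibility

# Estimate pi by sampling the unit square and counting points that
# fall inside the quarter circle of radius 1; the hit fraction
# converges to pi/4 as the sample count grows.
n = 200_000
hits = sum(1 for _ in range(n)
           if random.random() ** 2 + random.random() ** 2 <= 1.0)
pi_est = 4.0 * hits / n
```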

  10. A novel numerical technique to obtain an accurate solution to the Thomas-Fermi equation

    NASA Astrophysics Data System (ADS)

    Parand, Kourosh; Yousefi, Hossein; Delkhosh, Mehdi; Ghaderi, Amin

    2016-07-01

    In this paper, a new algorithm based on the fractional order of rational Euler functions (FRE) is introduced to study the Thomas-Fermi (TF) model which is a nonlinear singular ordinary differential equation on a semi-infinite interval. This problem, using the quasilinearization method (QLM), converts to the sequence of linear ordinary differential equations to obtain the solution. For the first time, the rational Euler (RE) and the FRE have been made based on Euler polynomials. In addition, the equation will be solved on a semi-infinite domain without truncating it to a finite domain by taking FRE as basic functions for the collocation method. This method reduces the solution of this problem to the solution of a system of algebraic equations. We demonstrated that the new proposed algorithm is efficient for obtaining the values of y'(0), y(x) and y'(x). Comparison with some numerical and analytical solutions shows that the present solution is highly accurate.

  11. Recommendations for accurate numerical blood flow simulations of stented intracranial aneurysms.

    PubMed

    Janiga, Gábor; Berg, Philipp; Beuing, Oliver; Neugebauer, Mathias; Gasteiger, Rocco; Preim, Bernhard; Rose, Georg; Skalej, Martin; Thévenin, Dominique

    2013-06-01

    The number of scientific publications dealing with stented intracranial aneurysms is rapidly increasing. Powerful computational facilities are now available; an accurate computational modeling of hemodynamics in patient-specific configurations is, however, still being sought. Furthermore, there is still no general agreement on the quantities that should be computed and on the most adequate analysis for intervention support. In this article, the accurate representation of patient geometry is first discussed, involving successive improvements. Concerning the second step, the mesh required for the numerical simulation is especially challenging when deploying a stent with very fine wire structures. Third, the description of the fluid properties is a major challenge. Finally, a founded quantitative analysis of the simulation results is obviously needed to support interventional decisions. In the present work, an attempt has been made to review the most important steps for a high-quality computational fluid dynamics computation of virtually stented intracranial aneurysms. In consequence, this leads to concrete recommendations, whereby the obtained results are not discussed for their medical relevance but for the evaluation of their quality. This investigation might hopefully be helpful for further studies considering stent deployment in patient-specific geometries, in particular regarding the generation of the most appropriate computational model. PMID:23729530

  12. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    PubMed

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases). PMID:26430979

  13. PolyPole-1: An accurate numerical algorithm for intra-granular fission gas release

    NASA Astrophysics Data System (ADS)

    Pizzocri, D.; Rabiti, C.; Luzzi, L.; Barani, T.; Van Uffelen, P.; Pastore, G.

    2016-09-01

    The transport of fission gas from within the fuel grains to the grain boundaries (intra-granular fission gas release) is a fundamental controlling mechanism of fission gas release and gaseous swelling in nuclear fuel. Hence, accurate numerical solution of the corresponding mathematical problem needs to be included in fission gas behaviour models used in fuel performance codes. Under the assumption of equilibrium between trapping and resolution, the process can be described mathematically by a single diffusion equation for the gas atom concentration in a grain. In this paper, we propose a new numerical algorithm (PolyPole-1) to efficiently solve the fission gas diffusion equation in time-varying conditions. The PolyPole-1 algorithm is based on the analytic modal solution of the diffusion equation for constant conditions, combined with polynomial corrective terms that embody the information on the deviation from constant conditions. The new algorithm is verified by comparing the results to a finite difference solution over a large number of randomly generated operation histories. Furthermore, comparison to state-of-the-art algorithms used in fuel performance codes demonstrates that the accuracy of PolyPole-1 is superior to other algorithms, with similar computational effort. Finally, the concept of PolyPole-1 may be extended to the solution of the general problem of intra-granular fission gas diffusion during non-equilibrium trapping and resolution, which will be the subject of future work.

  14. Numerical Computation of a Continuous-thrust State Transition Matrix Incorporating Accurate Hardware and Ephemeris Models

    NASA Technical Reports Server (NTRS)

    Ellison, Donald; Conway, Bruce; Englander, Jacob

    2015-01-01

    A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model, which might include perturbing forces such as the gravitational effect of multiple third bodies and solar radiation pressure, is used, then these STMs must be computed numerically. We present a method for the power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
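
    SciPy exposes the same adaptive eighth-order Dormand-Prince pair as `method="DOP853"` in `solve_ivp`; the dynamics below are a harmonic-oscillator stand-in, not the spacecraft hardware or ephemeris models of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # simple harmonic oscillator x'' = -x, written as a first-order system
    x, v = y
    return [v, -x]

# integrate over one full period; the state should return to [1, 0]
sol = solve_ivp(rhs, (0.0, 2.0 * np.pi), [1.0, 0.0],
                method="DOP853", rtol=1e-12, atol=1e-12)
x_final, v_final = sol.y[:, -1]
```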

  15. Earthquake Rupture Dynamics using Adaptive Mesh Refinement and High-Order Accurate Numerical Methods

    NASA Astrophysics Data System (ADS)

    Kozdon, J. E.; Wilcox, L.

    2013-12-01

    Our goal is to develop scalable and adaptive (spatial and temporal) numerical methods for coupled, multiphysics problems using high-order accurate numerical methods. To do so, we are developing an open-source, parallel library known as bfam (available at http://bfam.in). The first application to be developed on top of bfam is an earthquake rupture dynamics solver using high-order discontinuous Galerkin methods and summation-by-parts finite difference methods. In earthquake rupture dynamics, wave propagation in the Earth's crust is coupled to frictional sliding on fault interfaces. This coupling is two-way, requiring the simultaneous simulation of both processes. The use of laboratory-measured friction parameters requires near-fault resolution that is 4-5 orders of magnitude higher than that needed to resolve the frequencies of interest in the volume. This, along with earlier simulations using a low-order, finite-volume-based adaptive mesh refinement framework, suggests that adaptive mesh refinement is ideally suited for this problem. The use of high-order methods is motivated by the high level of resolution required off the fault in the earlier low-order finite volume simulations; we believe this need for resolution is a result of the excessive numerical dissipation of low-order methods. In bfam, spatial adaptivity is handled using the p4est library and temporal adaptivity will be accomplished through local time stepping. In this presentation we will present the guiding principles behind the library as well as verification of the code against the Southern California Earthquake Center dynamic rupture code validation test problems.

  16. Accurate and efficient Nyström volume integral equation method for the Maxwell equations for multiple 3-D scatterers

    NASA Astrophysics Data System (ADS)

    Chen, Duan; Cai, Wei; Zinser, Brian; Cho, Min Hyung

    2016-09-01

    In this paper, we develop an accurate and efficient Nyström volume integral equation (VIE) method for the Maxwell equations for a large number of 3-D scatterers. The Cauchy Principal Values that arise from the VIE are computed accurately using a finite size exclusion volume together with explicit correction integrals consisting of removable singularities. Also, the hyper-singular integrals are computed using interpolated quadrature formulae with tensor-product quadrature nodes for cubes, spheres and cylinders, that are frequently encountered in the design of meta-materials. The resulting Nyström VIE method is shown to have high accuracy with a small number of collocation points and demonstrates p-convergence for computing the electromagnetic scattering of these objects. Numerical calculations of multiple scatterers of cubic, spherical, and cylindrical shapes validate the efficiency and accuracy of the proposed method.

  17. Numerical Simulation of the 2004 Indian Ocean Tsunami: Accurate Flooding and drying in Banda Aceh

    NASA Astrophysics Data System (ADS)

    Cui, Haiyang; Pietrzak, Julie; Stelling, Guus; Androsov, Alexey; Harig, Sven

    2010-05-01

    The Indian Ocean earthquake of December 26, 2004 caused one of the largest tsunamis in recent times and led to widespread devastation and loss of life. One of the worst hit regions was Banda Aceh, the capital of the Aceh province, located in the northern part of Sumatra, 150 km from the source of the earthquake. A German-Indonesian Tsunami Early Warning System (GITEWS) (www.gitews.de) is currently under active development. The work presented here is carried out within the GITEWS framework. One of the aims of this project is the development of accurate models with which to simulate the propagation, flooding and drying, and run-up of a tsunami. In this context, TsunAWI, an explicit finite element model, has been developed by the Alfred Wegener Institute. However, the accurate numerical simulation of flooding and drying requires the conservation of mass and momentum, which is not possible in the current version of TsunAWI. The P1NC-P1 element guarantees mass conservation in a global sense, yet as we show here it is important to guarantee mass conservation at the local level, that is, within each individual cell. Here an unstructured grid, finite volume ocean model is presented. It is derived from the P1NC-P1 element, and is shown to be mass and momentum conserving. A number of simulations are then presented, including dam break problems with flooding over both wet and dry beds; excellent agreement is found. Finally, we present simulations for Banda Aceh, and compare the results to on-site survey data, as well as to results from the original TsunAWI code.
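
    Why flux-form (finite volume) updates conserve mass locally by construction can be seen in a minimal 1-D sketch (first-order upwind advection on a periodic domain; an illustration only, not the TsunAWI-derived scheme): each interface flux is added to one cell and subtracted from its neighbor, so the cell-wise updates telescope.

```python
# Minimal 1-D flux-form (finite volume) upwind advection on a periodic grid.
# Because each interface flux is added to one cell and subtracted from its
# neighbor, the updates telescope and total mass is conserved to round-off.
# This is an illustration only, not the TsunAWI-derived scheme of the paper.

def advect(u, a, dx, dt, steps):
    n = len(u)
    for _ in range(steps):
        # upwind interface fluxes (a > 0); interface i sits left of cell i
        flux = [a * u[(i - 1) % n] for i in range(n + 1)]
        u = [u[i] - dt / dx * (flux[i + 1] - flux[i]) for i in range(n)]
    return u

n, dx, a, dt = 50, 0.02, 1.0, 0.01     # CFL number a*dt/dx = 0.5
u0 = [1.0 if 10 <= i < 20 else 0.0 for i in range(n)]
u1 = advect(u0, a, dx, dt, 200)
print(sum(u0) * dx, sum(u1) * dx)      # total mass is identical
```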

  18. Fast and accurate computation of two-dimensional non-separable quadratic-phase integrals.

    PubMed

    Koç, Aykut; Ozaktas, Haldun M; Hesselink, Lambertus

    2010-06-01

    We report a fast and accurate algorithm for numerical computation of two-dimensional non-separable linear canonical transforms (2D-NS-LCTs). Also known as quadratic-phase integrals, this family of integral transforms represents a broad class of optical systems including Fresnel propagation in free space, propagation in graded-index media, passage through thin lenses, and arbitrary concatenations of any number of these, including anamorphic/astigmatic/non-orthogonal cases. The general two-dimensional non-separable case poses several challenges which do not exist in the one-dimensional case and the separable two-dimensional case. The algorithm takes approximately N log N time, where N is the two-dimensional space-bandwidth product of the signal. Our method properly tracks and controls the space-bandwidth products in two dimensions, in order to achieve information-theoretically sufficient, but not wastefully redundant, sampling required for the reconstruction of the underlying continuous functions at any stage of the algorithm. Additionally, we provide an alternative definition of general 2D-NS-LCTs that shows its kernel explicitly in terms of its ten parameters, and relate these parameters bidirectionally to conventional ABCD matrix parameters. PMID:20508697

  19. TOPLHA: an accurate and efficient numerical tool for analysis and design of LH antennas

    NASA Astrophysics Data System (ADS)

    Milanesio, D.; Lancellotti, V.; Meneghini, O.; Maggiora, R.; Vecchi, G.; Bilato, R.

    2007-09-01

    Auxiliary ICRF heating systems in tokamaks often involve large complex antennas, made up of several conducting straps hosted in distinct cavities that open towards the plasma. The same holds especially true in the LH regime, wherein the antennas are composed of arrays of many phased waveguides. Upon observing that the various cavities or waveguides couple to each other only through the EM fields existing over the plasma-facing apertures, we self-consistently formulated the EM problem by a convenient set of multiple coupled integral equations. Subsequent application of the Method of Moments yields a highly sparse algebraic system; therefore formal inversion of the system matrix is not especially memory demanding, even though the number of unknowns may be quite large (typically 10^5 or so). The overall strategy has been implemented in an enhanced version of TOPICA (Torino Polytechnic Ion Cyclotron Antenna) and in a newly developed code named TOPLHA (Torino Polytechnic Lower Hybrid Antenna). Both are simulation and prediction tools for plasma-facing antennas that incorporate commercial-grade 3D graphic interfaces along with an accurate description of the plasma. In this work we present the newly proposed formulation along with examples of application to real-life large LH antenna systems.

  20. TOPICA: an accurate and efficient numerical tool for analysis and design of ICRF antennas

    NASA Astrophysics Data System (ADS)

    Lancellotti, V.; Milanesio, D.; Maggiora, R.; Vecchi, G.; Kyrytsya, V.

    2006-07-01

    The demand for a predictive tool to help in designing ion-cyclotron radio frequency (ICRF) antenna systems for today's fusion experiments has driven the development of codes such as ICANT, RANT3D, and the early development of the TOPICA (TOrino Polytechnic Ion Cyclotron Antenna) code. This paper describes the substantive evolution of the TOPICA formulation and implementation that presently allow it to handle the actual geometry of ICRF antennas (with curved, solid straps, a general-shape housing, Faraday screen, etc) as well as an accurate plasma description, accounting for density and temperature profiles and finite Larmor radius effects. The antenna is assumed to be housed in a recess-like enclosure. Both goals have been attained by formally separating the problem into two parts: the vacuum region around the antenna and the plasma region inside the toroidal chamber. Field continuity and boundary conditions allow the formulation of a set of two coupled integral equations for the unknown equivalent (current) sources; then the equations are reduced to a linear system by a method of moments solution scheme employing 2D finite elements defined over a 3D non-planar surface triangular-cell mesh. In the vacuum region calculations are done in the spatial (configuration) domain, whereas in the plasma region a spectral (wavenumber) representation of fields and currents is adopted, thus permitting a description of the plasma by a surface impedance matrix. Owing to this approach, any plasma model can be used in principle, and at present the FELICE code has been employed. The natural outcomes of TOPICA are the induced currents on the conductors (antenna, housing, etc) and the electric field in front of the plasma, whence the antenna circuit parameters (impedance/scattering matrices), the radiated power and the fields (at locations other than the chamber aperture) are then obtained. An accurate model of the feeding coaxial lines is also included. The theoretical model and its TOPICA

  1. Error Estimates for Numerical Integration Rules

    ERIC Educational Resources Information Center

    Mercer, Peter R.

    2005-01-01

    The starting point for this discussion of error estimates is the fact that integrals that arise in Fourier series have properties that can be used to get improved bounds. This idea is extended to more general situations.
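
    A quick numerical check of one such property: for smooth integrands the composite trapezoidal error is close to -2 times the midpoint error, which is what makes combinations of the two rules (and the resulting improved bounds) work. The integrand below is an arbitrary smooth example:

```python
import math

# Composite midpoint and trapezoid rules.  For smooth integrands the leading
# error terms satisfy E_trap ~ -2 * E_mid, a relationship exploited in
# improved error estimates.  The integrand e^x is an arbitrary smooth example.

def midpoint(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

exact = math.e - 1.0                    # integral_0^1 e^x dx
em = midpoint(math.exp, 0.0, 1.0, 64) - exact
et = trapezoid(math.exp, 0.0, 1.0, 64) - exact
print(et / em)                          # close to -2
```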

  2. Numerical quadrature methods for integrals of singular periodic functions and their application to singular and weakly singular integral equations

    NASA Technical Reports Server (NTRS)

    Sidi, A.; Israeli, M.

    1986-01-01

    High accuracy numerical quadrature methods for integrals of singular periodic functions are proposed. These methods are based on the appropriate Euler-Maclaurin expansions of trapezoidal rule approximations and their extrapolations. They are used to obtain accurate quadrature methods for the solution of singular and weakly singular Fredholm integral equations. Such periodic equations are used in the solution of planar elliptic boundary value problems, elasticity, potential theory, conformal mapping, boundary element methods, free surface flows, etc. The use of the quadrature methods is demonstrated with numerical examples.
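
    The fact underlying these Euler-Maclaurin-based methods, that for smooth periodic integrands the h^2, h^4, ... correction terms of the Euler-Maclaurin expansion vanish and the plain trapezoidal rule converges geometrically, is easy to observe. The integrand and exact value below are a textbook illustration, not taken from the paper:

```python
import math

# Trapezoidal rule on [0, 2*pi] for a periodic integrand: since the endpoint
# values coincide, all weights are equal.  For analytic periodic functions the
# Euler-Maclaurin correction terms vanish and convergence is geometric.
# The integrand 1/(2 + cos x), with exact integral 2*pi/sqrt(3), is a
# textbook example.

def trap_periodic(f, n):
    h = 2.0 * math.pi / n
    return h * sum(f(j * h) for j in range(n))

f = lambda x: 1.0 / (2.0 + math.cos(x))
exact = 2.0 * math.pi / math.sqrt(3.0)
for n in (4, 8, 16, 32):
    print(n, abs(trap_periodic(f, n) - exact))
# the error decays geometrically in n, far faster than the usual O(h^2)
```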

  3. Quantum Calisthenics: Gaussians, The Path Integral and Guided Numerical Approximations

    SciTech Connect

    Weinstein, Marvin; /SLAC

    2009-02-12

    It is apparent to anyone who thinks about it that, to a large degree, the basic concepts of Newtonian physics are quite intuitive, but quantum mechanics is not. My purpose in this talk is to introduce you to a new, much more intuitive way to understand how quantum mechanics works. I begin with an incredibly easy way to derive the time evolution of a Gaussian wave-packet for the case of free and harmonic motion without any need to know the eigenstates of the Hamiltonian. This discussion is completely analytic and I will later use it to relate the solution for the behavior of the Gaussian packet to the Feynman path-integral and stationary phase approximation. It will be clear that using the information about the evolution of the Gaussian in this way goes far beyond what the stationary phase approximation tells us. Next, I introduce the concept of the bucket brigade approach to dealing with problems that cannot be handled totally analytically. This approach combines the intuition obtained in the initial discussion, as well as the intuition obtained from the path-integral, with simple numerical tools. My goal is to show that, for any specific process, there is a simple Hilbert space interpretation of the stationary phase approximation. I will then argue that, from the point of view of numerical approximations, the trajectory obtained from my generalization of the stationary phase approximation specifies that subspace of the full Hilbert space that is needed to compute the time evolution of the particular state under the full Hamiltonian. The prescription I will give is totally non-perturbative and we will see, by the grace of Maple animations computed for the case of the anharmonic oscillator Hamiltonian, that this approach allows surprisingly accurate computations to be performed with very little work. I think of this approach to the path-integral as defining what I call a guided numerical approximation scheme. After the discussion of the anharmonic oscillator I will

  4. A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction

    NASA Technical Reports Server (NTRS)

    Bockelie, Michael J.; Eiseman, Peter R.

    1990-01-01

    A time accurate, general purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction correction method that is simple to implement and ensures the time accuracy of the grid. Time accurate solutions of the 2-D Euler equations for an unsteady shock vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.

  5. Physical and Numerical Model Studies of Cross-flow Turbines Towards Accurate Parameterization in Array Simulations

    NASA Astrophysics Data System (ADS)

    Wosnik, M.; Bachant, P.

    2014-12-01

    Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today--a common model is the actuator disk concept--are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal axis devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier-Stokes simulations, to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of

  6. Numerical evaluation of Feynman path integrals

    NASA Astrophysics Data System (ADS)

    Baird, William Hugh

    1999-11-01

    The notion of path integration developed by Feynman, while an incredibly successful method of solving quantum mechanical problems, leads to frequently intractable integrations over an infinite number of paths. Two methods now exist which sidestep this difficulty by defining "densities" of actions which give the relative number of paths found at different values of the action. These densities are sampled by computer generation of paths and the propagators are found to a high degree of accuracy for the case of a particle on the infinite half line and in a finite square well in one dimension. The problem of propagation within a two dimensional radial well is also addressed as the precursor to the problem of a particle in a stadium (quantum billiard).

  7. Numerical integration of ordinary differential equations of various orders

    NASA Technical Reports Server (NTRS)

    Gear, C. W.

    1969-01-01

    Report describes techniques for the numerical integration of differential equations of various orders. Modified multistep predictor-corrector methods for general initial-value problems are discussed and new methods are introduced.
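
    A minimal sketch of a multistep predictor-corrector of the general kind described: a two-step Adams-Bashforth predictor paired with a trapezoidal (Adams-Moulton) corrector, bootstrapped with one Heun step. The test problem y' = -y is an assumption for illustration, not one of the report's examples:

```python
import math

# Two-step Adams-Bashforth predictor with a trapezoidal (Adams-Moulton)
# corrector in predict-evaluate-correct mode; the second starting value is
# bootstrapped with one Heun step.  The test problem y' = -y, y(0) = 1 is an
# assumption for illustration.

def pc2(f, y0, t0, t1, n):
    h = (t1 - t0) / n
    t, y = t0, y0
    prev = f(t, y)                        # f at the first node
    yp = y + h * prev                     # Heun bootstrap step
    y = y + 0.5 * h * (prev + f(t + h, yp))
    t += h
    for _ in range(n - 1):
        fy = f(t, y)
        ystar = y + 0.5 * h * (3.0 * fy - prev)    # AB2 predictor
        y = y + 0.5 * h * (fy + f(t + h, ystar))   # trapezoidal corrector
        prev = fy
        t += h
    return y

y = pc2(lambda t, y: -y, 1.0, 0.0, 1.0, 100)
print(abs(y - math.exp(-1.0)))            # small: the scheme is second order
```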

  8. A Numerical Study of Hypersonic Forebody/Inlet Integration Problem

    NASA Technical Reports Server (NTRS)

    Kumar, Ajay

    1991-01-01

    A numerical study of the hypersonic forebody/inlet integration problem is presented in the form of view-graphs. The following topics are covered: physical/chemical modeling; solution procedure; flow conditions; mass flow rate at the inlet face; heating and skin friction loads; a 3-D forebody/inlet integration model; and sensitivity studies.

  9. Translation and integration of numerical atomic orbitals in linear molecules.

    PubMed

    Heinäsmäki, Sami

    2014-02-14

    We present algorithms for translation and integration of atomic orbitals for LCAO calculations in linear molecules. The method applies to arbitrary radial functions given on a numerical mesh. The algorithms are based on pseudospectral differentiation matrices in two dimensions and the corresponding two-dimensional Gaussian quadratures. As a result, multicenter overlap and Coulomb integrals can be evaluated effectively. PMID:24527905

  10. Translation and integration of numerical atomic orbitals in linear molecules

    NASA Astrophysics Data System (ADS)

    Heinäsmäki, Sami

    2014-02-01

    We present algorithms for translation and integration of atomic orbitals for LCAO calculations in linear molecules. The method applies to arbitrary radial functions given on a numerical mesh. The algorithms are based on pseudospectral differentiation matrices in two dimensions and the corresponding two-dimensional Gaussian quadratures. As a result, multicenter overlap and Coulomb integrals can be evaluated effectively.

  11. Third-order-accurate numerical methods for efficient, large time-step solutions of mixed linear and nonlinear problems

    SciTech Connect

    Cobb, J.W.

    1995-02-01

    There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
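
    For concreteness, one classical explicit third-order Runge-Kutta scheme (Kutta's method, just one member of the general family analyzed) together with a convergence check on the model problem y' = -y:

```python
import math

# Kutta's classical explicit third-order Runge-Kutta scheme, one member of
# the general RK3 family, with a convergence check on y' = -y.

def rk3_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + h, y - h * k1 + 2.0 * h * k2)
    return y + h * (k1 + 4.0 * k2 + k3) / 6.0

def integrate(f, y0, t0, t1, n):
    h = (t1 - t0) / n
    y = y0
    for i in range(n):
        y = rk3_step(f, t0 + i * h, y, h)
    return y

f = lambda t, y: -y
e1 = abs(integrate(f, 1.0, 0.0, 1.0, 40) - math.exp(-1.0))
e2 = abs(integrate(f, 1.0, 0.0, 1.0, 80) - math.exp(-1.0))
print(e1 / e2)                            # close to 2**3 = 8: third order
```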

  12. Numerical solution of optimal control problems using multiple-interval integral Gegenbauer pseudospectral methods

    NASA Astrophysics Data System (ADS)

    Tang, Xiaojun

    2016-04-01

    The main purpose of this work is to provide multiple-interval integral Gegenbauer pseudospectral methods for solving optimal control problems. The latest developed single-interval integral Gauss/(flipped Radau) pseudospectral methods can be viewed as special cases of the proposed methods. We present an exact and efficient approach to compute the mesh pseudospectral integration matrices for the Gegenbauer-Gauss and flipped Gegenbauer-Gauss-Radau points. Numerical results on benchmark optimal control problems confirm the ability of the proposed methods to obtain highly accurate solutions.

  13. AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)

    EPA Science Inventory

    Abstract

    A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...

  14. Numerical integration of discontinuities on arbitrary domains based on moment fitting

    NASA Astrophysics Data System (ADS)

    Joulaian, Meysam; Hubrich, Simeon; Düster, Alexander

    2016-03-01

    Discretization methods based on meshes that do not conform to the geometry of the problem under consideration require special treatment when it comes to the integration of finite elements that are broken by the boundary or internal interfaces. To this end, we propose a numerical approach suitable for integrating broken elements with a low number of integration points. In this method, which is based on the moment fitting approach, an individual quadrature rule is set up for each cut element. The approach requires a B-rep representation of the broken element, which can be either achieved by processing a triangulated surface obtained from a CAD software or by taking advantage of a voxel model resulting from computed tomography. The numerical examples presented in this paper reveal that the proposed method delivers very accurate results for a wide variety of geometrical situations and requires a rather low number of integration points.
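
    The heart of moment fitting, solving a small linear system so that weights on a fixed set of nodes reproduce known moments, can be sketched in one dimension. The nodes and the interval [0, 1] are arbitrary illustrations, not the cut-cell geometry of the paper:

```python
# Moment fitting in one dimension: given fixed nodes, solve the small linear
# system  sum_j w_j * x_j**k = integral_0^1 x**k dx = 1/(k+1)  for the
# weights.  The nodes and interval are arbitrary illustrations, not the
# cut-cell geometry of the paper.

def solve(A, b):
    # tiny dense Gaussian elimination with partial pivoting
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            m = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= m * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

nodes = [0.1, 0.5, 0.9]
A = [[xj ** k for xj in nodes] for k in range(3)]      # Vandermonde rows
moments = [1.0 / (k + 1) for k in range(3)]            # monomial moments
w = solve(A, moments)

quad = lambda g: sum(wj * g(xj) for wj, xj in zip(w, nodes))
print(quad(lambda x: 3 * x * x - x + 2))               # 2.5, exact for quadratics
```

    The resulting rule integrates all polynomials up to degree two exactly by construction, regardless of where the nodes sit.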

  15. Numerical integration of discontinuities on arbitrary domains based on moment fitting

    NASA Astrophysics Data System (ADS)

    Joulaian, Meysam; Hubrich, Simeon; Düster, Alexander

    2016-06-01

    Discretization methods based on meshes that do not conform to the geometry of the problem under consideration require special treatment when it comes to the integration of finite elements that are broken by the boundary or internal interfaces. To this end, we propose a numerical approach suitable for integrating broken elements with a low number of integration points. In this method, which is based on the moment fitting approach, an individual quadrature rule is set up for each cut element. The approach requires a B-rep representation of the broken element, which can be either achieved by processing a triangulated surface obtained from a CAD software or by taking advantage of a voxel model resulting from computed tomography. The numerical examples presented in this paper reveal that the proposed method delivers very accurate results for a wide variety of geometrical situations and requires a rather low number of integration points.

  16. Accurate numerical verification of the instanton method for macroscopic quantum tunneling: Dynamics of phase slips

    SciTech Connect

    Danshita, Ippei; Polkovnikov, Anatoli

    2010-09-01

    We study the quantum dynamics of supercurrents of one-dimensional Bose gases in a ring optical lattice to verify instanton methods applied to coherent macroscopic quantum tunneling (MQT). We directly simulate the real-time quantum dynamics of supercurrents, where a coherent oscillation between two macroscopically distinct current states occurs due to MQT. The tunneling rate extracted from the coherent oscillation is compared with that given by the instanton method. We find that the instanton method is quantitatively accurate when the effective Planck's constant is sufficiently small. We also find phase slips associated with the oscillations.

  17. Numerical Methodology for Coupled Time-Accurate Simulations of Primary and Secondary Flowpaths in Gas Turbines

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.

    2006-01-01

    Detailed information of the flow-fields in the secondary flowpaths and their interaction with the primary flows in gas turbine engines is necessary for successful designs with optimized secondary flow streams. Present work is focused on the development of a simulation methodology for coupled time-accurate solutions of the two flowpaths. The secondary flowstream is treated using SCISEAL, an unstructured adaptive Cartesian grid code developed for secondary flows and seals, while the mainpath flow is solved using TURBO, a density based code with capability of resolving rotor-stator interaction in multi-stage machines. An interface is being tested that links the two codes at the rim seal to allow data exchange between the two codes for parallel, coupled execution. A description of the coupling methodology and the current status of the interface development is presented. Representative steady-state solutions of the secondary flow in the UTRC HP Rig disc cavity are also presented.

  18. Differential-equation-based representation of truncation errors for accurate numerical simulation

    NASA Astrophysics Data System (ADS)

    MacKinnon, Robert J.; Johnson, Richard W.

    1991-09-01

    High-order compact finite difference schemes for 2D convection-diffusion-type differential equations with constant and variable convection coefficients are derived. The governing equations are employed to represent leading truncation terms, including cross-derivatives, making the overall O(h^4) schemes conform to a 3 x 3 stencil. It is shown that the two-dimensional constant coefficient scheme collapses to the optimal scheme for the one-dimensional case wherein the finite difference equation yields nodally exact results. The two-dimensional schemes are tested against standard model problems, including a Navier-Stokes application. Results show that the two schemes are generally more accurate, on comparable grids, than O(h^2) centered differencing and commonly used O(h) and O(h^3) upwinding schemes.
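
    The payoff of fourth-order over second-order differencing is easy to demonstrate on a model function (a generic 1-D finite difference illustration, not the compact 3 x 3 schemes derived in the paper):

```python
import math

# Second- vs fourth-order central differences for f'(x).  Halving h cuts the
# error by ~4 for the O(h^2) formula and by ~16 for the O(h^4) formula.
# f = sin at x = 1 is an arbitrary smooth test case.

def d1_o2(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d1_o4(f, x, h):
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12.0 * h)

x, exact = 1.0, math.cos(1.0)
e2a = abs(d1_o2(math.sin, x, 0.10) - exact)
e2b = abs(d1_o2(math.sin, x, 0.05) - exact)
e4a = abs(d1_o4(math.sin, x, 0.10) - exact)
e4b = abs(d1_o4(math.sin, x, 0.05) - exact)
print(e2a / e2b, e4a / e4b)               # ~4 and ~16, respectively
```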

  19. Towards more accurate numerical modeling of impedance based high frequency harmonic vibration

    NASA Astrophysics Data System (ADS)

    Lim, Yee Yan; Kiong Soh, Chee

    2014-03-01

    The application of smart materials in various fields of engineering has recently become increasingly popular. For instance, the high frequency based electromechanical impedance (EMI) technique employing smart piezoelectric materials is found to be versatile in structural health monitoring (SHM). Thus far, considerable efforts have been made to study and improve the technique. Various theoretical models of the EMI technique have been proposed in an attempt to better understand its behavior. So far, the three-dimensional (3D) coupled field finite element (FE) model has proved to be the most accurate. However, large discrepancies between the results of the FE model and experimental tests, especially in terms of the slope and magnitude of the admittance signatures, continue to exist and are yet to be resolved. This paper presents a series of parametric studies using the 3D coupled field finite element method (FEM) on all properties of materials involved in the lead zirconate titanate (PZT)-structure interaction of the EMI technique, to investigate their effect on the admittance signatures acquired. FE model updating is then performed by adjusting the parameters to match the experimental results. One of the main reasons for the lower accuracy, especially in terms of magnitude and slope, of previous FE models is the difficulty in determining the damping related coefficients and the stiffness of the bonding layer. In this study, using the hysteretic damping model in place of Rayleigh damping, which is used by most researchers in this field, and updated bonding stiffness, an improved and more accurate FE model is achieved. The results of this paper are expected to be useful for future study of the subject area in terms of research and application, such as modeling, design and optimization.

  20. Efficient and Accurate Explicit Integration Algorithms with Application to Viscoplastic Models

    NASA Technical Reports Server (NTRS)

    Arya, Vinod K.

    1994-01-01

    Several explicit integration algorithms with self-adaptive time integration strategies are developed and investigated for efficiency and accuracy. These algorithms involve the second-order Runge-Kutta method, lower-order Runge-Kutta methods of orders one and two, and the exponential integration method. The algorithms are applied to viscoplastic models put forth by Freed and Verrilli and Bodner and Partom for thermal/mechanical loadings (including tensile, relaxation, and cyclic loadings). The large number of computations performed showed that, for comparable accuracy, the efficiency of an integration algorithm depends significantly on the type of application (loading). However, in general, for the aforementioned loadings and viscoplastic models, the exponential integration algorithm with the proposed self-adaptive time integration strategy worked more (or comparably) efficiently and accurately than the other integration algorithms. Using this strategy for integrating viscoplastic models may lead to considerable savings in computer time (better efficiency) without adversely affecting the accuracy of the results. This conclusion should encourage the utilization of viscoplastic models in the stress analysis and design of structural components.
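
    A generic self-adaptive explicit strategy, not the specific algorithms applied to the viscoplastic models above, can be sketched with step doubling and a Richardson error estimate wrapped around a second-order method:

```python
import math

# Generic self-adaptive explicit integration: a second-order Heun step with
# step-doubling (Richardson) error control.  A sketch of the general strategy
# only, not the algorithms of the paper.

def heun(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + 0.5 * h * (k1 + k2)

def integrate_adaptive(f, y0, t0, t1, tol=1e-7):
    t, y, h = t0, y0, (t1 - t0) / 10.0
    while t < t1:
        h = min(h, t1 - t)
        y_big = heun(f, t, y, h)                   # one step of size h
        y_mid = heun(f, t, y, 0.5 * h)             # two steps of size h/2
        y_half = heun(f, t + 0.5 * h, y_mid, 0.5 * h)
        est = abs(y_half - y_big) / 3.0            # Richardson error estimate
        if est <= tol:
            t, y = t + h, y_half                   # accept the finer result
        # grow or shrink the step, with a safety factor and clamping
        h *= min(2.0, max(0.2, 0.9 * (tol / max(est, 1e-300)) ** (1.0 / 3.0)))
    return y

y = integrate_adaptive(lambda t, y: -y, 1.0, 0.0, 1.0)
print(abs(y - math.exp(-1.0)))                     # near the requested tolerance
```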

  1. TOPLHA: an accurate and efficient numerical tool for analysis and design of LH antennas

    NASA Astrophysics Data System (ADS)

    Milanesio, D.; Meneghini, O.; Maggiora, R.; Guadamuz, S.; Hillairet, J.; Lancellotti, V.; Vecchi, G.

    2012-01-01

    This paper presents a self-consistent, integral-equation approach for the analysis of plasma-facing lower hybrid (LH) launchers; the geometry of the waveguide grill structure can be completely arbitrary, including the non-planar mouth of the grill. This work is based on the theoretical approach and code implementation of the TOPICA code, of which it shares the modular structure and constitutes the extension into the LH range. Code results are validated against the literature results and simulations from similar codes.

  2. Numerically accurate linear response-properties in the configuration-interaction singles (CIS) approximation.

    PubMed

    Kottmann, Jakob S; Höfener, Sebastian; Bischoff, Florian A

    2015-12-21

    In the present work, we report an efficient implementation of configuration interaction singles (CIS) excitation energies and oscillator strengths using the multi-resolution analysis (MRA) framework to address the basis-set convergence of excited state computations. In MRA, (ground-state) orbitals and excited states are constructed adaptively, guaranteeing an overall precision. Thus not only valence but, in particular, also low-lying Rydberg states can be computed with consistent quality at the basis-set limit a priori, without special treatments; this is demonstrated using a small test set of organic molecules, basis sets, and states. We find that the new implementation of MRA-CIS excitation energy calculations is competitive with conventional LCAO calculations when the basis-set limit of medium-sized molecules is sought, which requires large, diffuse basis sets. This becomes particularly important if accurate calculations of molecular electronic absorption spectra with respect to basis-set incompleteness are required, in which both valence as well as Rydberg excitations can contribute to the molecule's UV/VIS fingerprint. PMID:25913482

  3. An accurate and efficient acoustic eigensolver based on a fast multipole BEM and a contour integral method

    NASA Astrophysics Data System (ADS)

    Zheng, Chang-Jun; Gao, Hai-Feng; Du, Lei; Chen, Hai-Bo; Zhang, Chuanzeng

    2016-01-01

    An accurate numerical solver is developed in this paper for eigenproblems governed by the Helmholtz equation and formulated through the boundary element method. A contour integral method is used to convert the nonlinear eigenproblem into an ordinary eigenproblem, so that eigenvalues can be extracted accurately by solving a set of standard boundary element systems of equations. In order to accelerate the solution procedure, the parameters affecting the accuracy and efficiency of the method are studied and two contour paths are compared. Moreover, a wideband fast multipole method is implemented with a block IDR(s) solver to reduce the overall solution cost of the boundary element systems of equations with multiple right-hand sides. The Burton-Miller formulation is employed to identify the fictitious eigenfrequencies of the interior acoustic problems with multiply connected domains. The actual effect of the Burton-Miller formulation on tackling the fictitious eigenfrequency problem is investigated and the optimal choice of the coupling parameter as α = i / k is confirmed through exterior sphere examples. Furthermore, the numerical eigenvalues obtained by the developed method are compared with the results obtained by the finite element method to show the accuracy and efficiency of the developed method.
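
    The contour-integral conversion can be illustrated in its simplest form: counting the eigenvalues of a small matrix inside a circular contour by applying the trapezoidal rule to the resolvent-trace integral. The 2 x 2 matrix is a made-up example, unrelated to the BEM systems of the paper:

```python
import cmath, math

# Counting eigenvalues inside a circular contour via
#     N = (1 / (2*pi*i)) * closed-integral of trace((z*I - A)^{-1}) dz,
# discretized with the trapezoidal rule, which converges geometrically on a
# circle.  The 2x2 matrix A is a made-up example with eigenvalues 1 and 3.

A = [[2.0, 1.0],
     [1.0, 2.0]]                       # eigenvalues: 1 and 3

def resolvent_trace(z):
    # trace of (z*I - A)^{-1} for a 2x2 matrix via the explicit inverse
    a, b = z - A[0][0], -A[0][1]
    c, d = -A[1][0], z - A[1][1]
    return (a + d) / (a * d - b * c)

center, radius, m = 0.0, 2.0, 64       # the contour |z| = 2 encloses only z = 1
total = 0.0 + 0.0j
for k in range(m):
    z = center + radius * cmath.exp(2j * math.pi * k / m)
    dz = 1j * (z - center) * (2.0 * math.pi / m)
    total += resolvent_trace(z) * dz
count = total / (2j * math.pi)
print(count.real)                      # ~1.0: one eigenvalue inside the contour
```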

  4. Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo methods.
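
    For reference, the classical Monte Carlo baseline whose n^(-1/2) error rate the quantum algorithms improve on quadratically (the integrand is an arbitrary illustration):

```python
import math, random

# Plain Monte Carlo estimate of integral_0^1 f(x) dx.  The error decreases
# like n**-0.5, which is the rate the quantum approaches discussed above
# quadratically improve on.  The integrand sin(x) is an arbitrary illustration.

def mc_integrate(f, n, seed=0):
    rng = random.Random(seed)          # seeded for reproducibility
    return sum(f(rng.random()) for _ in range(n)) / n

exact = 1.0 - math.cos(1.0)            # integral_0^1 sin(x) dx
for n in (100, 10000, 1000000):
    print(n, abs(mc_integrate(math.sin, n) - exact))
# the error shrinks roughly 10x for every 100x more samples
```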

  5. Integrated product definition representation for agile numerical control applications

    SciTech Connect

    Simons, W.R. Jr.; Brooks, S.L.; Kirk, W.J. III; Brown, C.W.

    1994-11-01

    Realization of agile manufacturing capabilities for a virtual enterprise requires the integration of technology, management, and work force into a coordinated, interdependent system. This paper is focused on technology enabling tools for agile manufacturing within a virtual enterprise specifically relating to Numerical Control (N/C) manufacturing activities and product definition requirements for these activities.

  6. Monograph - The Numerical Integration of Ordinary Differential Equations.

    ERIC Educational Resources Information Center

    Hull, T. E.

    The materials presented in this monograph are intended to be included in a course on ordinary differential equations at the upper division level in a college mathematics program. These materials provide an introduction to the numerical integration of ordinary differential equations, and they can be used to supplement a regular text on this…

  7. The use of experimental bending tests to more accurate numerical description of TBC damage process

    NASA Astrophysics Data System (ADS)

    Sadowski, T.; Golewski, P.

    2016-04-01

Thermal barrier coatings (TBCs) have been extensively used in aircraft engines to protect critical engine parts such as blades and combustion chambers, which are exposed to high temperatures and a corrosive environment. The blades of turbine engines are additionally exposed to high mechanical loads, created by the high rotational speed of the rotor (30 000 rot/min), which causes tensile and bending stresses. Therefore, experimental testing of coated samples is necessary in order to determine the strength properties of TBCs. Beam samples with dimensions 50×10×2 mm were used in these studies. The TBC system consisted of a 150 μm thick bond coat (NiCoCrAlY) and a 300 μm thick top coat (YSZ) made by the APS (air plasma spray) process. Samples were tested by three-point bending with various loads. After the bending tests, the samples were subjected to microscopic observation to determine the quantity of cracks and their depth. These results were used to build a numerical model and calibrate material data in the Abaqus program. A brittle cracking damage model was applied for the TBC layer, which allows elements to be removed once the damage criterion is reached. Surface-based cohesive behavior was used to model the delamination which may occur at the boundary between the bond coat and the top coat.

  8. Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration

    SciTech Connect

    Masalma, Yahya; Jiao, Yu

    2010-10-01

We implemented a scalable parallel quasi-Monte Carlo numerical high-dimensional integration for tera-scale data points. The implemented algorithm uses Sobol quasi-random sequences to generate random samples. The Sobol sequence was used to avoid clustering effects in the generated random samples and to produce low-discrepancy random samples which cover the entire integration domain. The performance of the algorithm was tested, and the obtained results demonstrate the scalability and accuracy of the implemented algorithms. The implemented algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using a hybrid MPI and OpenMP programming model to improve the performance of the algorithms. If the mixed model is used, attention should be paid to scalability and accuracy.
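A minimal single-node sketch of Sobol-based quasi-Monte Carlo integration, using SciPy's `scipy.stats.qmc.Sobol` rather than the authors' parallel MPI/OpenMP implementation; the integrand and point count are illustrative.

```python
import numpy as np
from scipy.stats import qmc

# Sobol low-discrepancy samples for quasi-Monte Carlo integration of
# f(x, y) = x * y over the unit square (exact value 1/4).
sampler = qmc.Sobol(d=2, scramble=False)
pts = sampler.random_base2(m=12)            # 2**12 = 4096 points
qmc_est = float(np.mean(pts[:, 0] * pts[:, 1]))
```

Because the Sobol points fill the domain evenly, the error decays faster than the O(1/√N) rate of plain Monte Carlo for smooth integrands.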

  9. Numerical integration of ordinary differential equations on manifolds

    NASA Astrophysics Data System (ADS)

    Crouch, P. E.; Grossman, R.

    1993-12-01

    This paper is concerned with the problem of developing numerical integration algorithms for differential equations that, when viewed as equations in some Euclidean space, naturally evolve on some embedded submanifold. It is desired to construct algorithms whose iterates also evolve on the same manifold. These algorithms can therefore be viewed as integrating ordinary differential equations on manifolds. The basic method “decouples” the computation of flows on the submanifold from the numerical integration process. It is shown that two classes of single-step and multistep algorithms can be posed and analyzed theoretically, using the concept of “freezing” the coefficients of differential operators obtained from the defining vector field. Explicit third-order algorithms are derived, with additional equations augmenting those of their classical counterparts, obtained from “obstructions” defined by nonvanishing Lie brackets.
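The benefit of stepping with the exact flow of a "frozen" vector field can be illustrated on the simplest manifold, the unit circle. This toy sketch (not the authors' third-order algorithms) contrasts it with classical forward Euler, whose iterates drift off the manifold; step size and step count are illustrative.

```python
import math

# x' = A x with A skew-symmetric; the flow evolves on the circle |x| = 1.

def euler_step(x, h):
    # Classical forward Euler for x1' = -x2, x2' = x1; iterates leave the circle.
    return (x[0] - h * x[1], x[1] + h * x[0])

def frozen_flow_step(x, h):
    # Exact flow exp(hA) of the frozen (constant-coefficient) field: a rotation,
    # so every iterate stays on the manifold.
    c, s = math.cos(h), math.sin(h)
    return (c * x[0] - s * x[1], s * x[0] + c * x[1])

x_e = x_f = (1.0, 0.0)
for _ in range(1000):
    x_e = euler_step(x_e, 0.01)
    x_f = frozen_flow_step(x_f, 0.01)

norm_e = math.hypot(*x_e)   # grows: Euler drifts off the circle
norm_f = math.hypot(*x_f)   # stays at 1 to machine precision
```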

  10. Wakeful rest promotes the integration of spatial memories into accurate cognitive maps.

    PubMed

    Craig, Michael; Dewar, Michaela; Harris, Mathew A; Della Sala, Sergio; Wolbers, Thomas

    2016-02-01

    Flexible spatial navigation, e.g. the ability to take novel shortcuts, is contingent upon accurate mental representations of environments-cognitive maps. These cognitive maps critically depend on hippocampal place cells. In rodents, place cells replay recently travelled routes, especially during periods of behavioural inactivity (sleep/wakeful rest). This neural replay is hypothesised to promote not only the consolidation of specific experiences, but also their wider integration, e.g. into accurate cognitive maps. In humans, rest promotes the consolidation of specific experiences, but the effect of rest on the wider integration of memories remained unknown. In the present study, we examined the hypothesis that cognitive map formation is supported by rest-related integration of new spatial memories. We predicted that if wakeful rest supports cognitive map formation, then rest should enhance knowledge of overarching spatial relations that were never experienced directly during recent navigation. Forty young participants learned a route through a virtual environment before either resting wakefully or engaging in an unrelated perceptual task for 10 min. Participants in the wakeful rest condition performed more accurately in a delayed cognitive map test, requiring the pointing to landmarks from a range of locations. Importantly, the benefit of rest could not be explained by active rehearsal, but can be attributed to the promotion of consolidation-related activity. These findings (i) resonate with the demonstration of hippocampal replay in rodents, and (ii) provide the first evidence that wakeful rest can improve the integration of new spatial memories in humans, a function that has, hitherto, been associated with sleep. PMID:26235141

  11. Ensemble-type numerical uncertainty information from single model integrations

    SciTech Connect

    Rauser, Florian Marotzke, Jochem; Korn, Peter

    2015-07-01

We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are comparable in size to those of a stochastic physics ensemble.

  12. Novel electromagnetic surface integral equations for highly accurate computations of dielectric bodies with arbitrarily low contrasts

    SciTech Connect

    Erguel, Ozguer; Guerel, Levent

    2008-12-01

    We present a novel stabilization procedure for accurate surface formulations of electromagnetic scattering problems involving three-dimensional dielectric objects with arbitrarily low contrasts. Conventional surface integral equations provide inaccurate results for the scattered fields when the contrast of the object is low, i.e., when the electromagnetic material parameters of the scatterer and the host medium are close to each other. We propose a stabilization procedure involving the extraction of nonradiating currents and rearrangement of the right-hand side of the equations using fictitious incident fields. Then, only the radiating currents are solved to calculate the scattered fields accurately. This technique can easily be applied to the existing implementations of conventional formulations, it requires negligible extra computational cost, and it is also appropriate for the solution of large problems with the multilevel fast multipole algorithm. We show that the stabilization leads to robust formulations that are valid even for the solutions of extremely low-contrast objects.

  13. Stability of numerical integration techniques for transient rotor dynamics

    NASA Technical Reports Server (NTRS)

    Kascak, A. F.

    1977-01-01

    A finite element model of a rotor bearing system was analyzed to determine the stability limits of the forward, backward, and centered Euler; Runge-Kutta; Milne; and Adams numerical integration techniques. The analysis concludes that the highest frequency mode determines the maximum time step for a stable solution. Thus, the number of mass elements should be minimized. Increasing the damping can sometimes cause numerical instability. For a uniform shaft, with 10 mass elements, operating at approximately the first critical speed, the maximum time step for the Runge-Kutta, Milne, and Adams methods is that which corresponds to approximately 1 degree of shaft movement. This is independent of rotor dimensions.
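The conclusion that the highest-frequency (stiffest) mode dictates the maximum stable time step can be illustrated with forward Euler on the scalar test equation x' = -λx, which is stable only for h < 2/λ. This is a standard textbook demonstration, not the paper's rotor model; the values of λ and h are illustrative.

```python
def euler_decay(lam, h, n):
    """Forward Euler applied to x' = -lam * x from x(0) = 1 for n steps."""
    x = 1.0
    for _ in range(n):
        x += h * (-lam * x)   # amplification factor (1 - h*lam) per step
    return x

lam = 100.0                                   # stiffest mode; limit is h < 2/lam = 0.02
stable = abs(euler_decay(lam, 0.015, 200))    # h below the limit: the mode decays
unstable = abs(euler_decay(lam, 0.025, 200))  # h above the limit: the mode blows up
```

Adding mass elements raises the highest modal frequency, which shrinks the admissible time step for the whole model.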

  14. How Accurate are the Extremely Small P-values Used in Genomic Research: An Evaluation of Numerical Libraries

    PubMed Central

    Bangalore, Sai Santosh; Wang, Jelai; Allison, David B.

    2009-01-01

In the fields of genomics and high dimensional biology (HDB), massive multiple testing prompts the use of extremely small significance levels. Because tail areas of statistical distributions are needed for hypothesis testing, the accuracy of these areas is important to confidently make scientific judgments. Previous work on accuracy was primarily focused on evaluating professionally written statistical software, like SAS, on the Statistical Reference Datasets (StRD) provided by the National Institute of Standards and Technology (NIST) and on the accuracy of tail areas in statistical distributions. The goal of this paper is to provide guidance to investigators who are developing their own custom scientific software built upon numerical libraries written by others. Specifically, we evaluate the accuracy of small tail areas from cumulative distribution functions (CDF) of the Chi-square and t-distribution by comparing several open-source, free, or commercially licensed numerical libraries in Java, C, and R to widely accepted standards of comparison like ELV and DCDFLIB. In our evaluation, the C libraries and R functions are consistently accurate up to six significant digits. Amongst the evaluated Java libraries, Colt is the most accurate. These languages and libraries are popular choices among programmers developing scientific software, so the results herein can be useful to programmers in choosing libraries for CDF accuracy. PMID:20161126
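The core numerical pitfall motivating such evaluations can be shown in a few lines with SciPy: computing a tiny upper-tail p-value as 1 - cdf(x) cancels catastrophically in double precision, while the survival function retains full relative precision. This is a generic illustration, not one of the libraries evaluated in the paper.

```python
from scipy.stats import chi2

# Extreme upper tail of a chi-square distribution with 1 degree of freedom.
x = 300.0
p_sf = chi2.sf(x, df=1)            # survival function: accurate tiny p-value
p_naive = 1.0 - chi2.cdf(x, df=1)  # cdf rounds to 1.0, so this cancels to 0.0
```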

  15. Path Integrals and Exotic Options:. Methods and Numerical Results

    NASA Astrophysics Data System (ADS)

    Bormetti, G.; Montagna, G.; Moreni, N.; Nicrosini, O.

    2005-09-01

    In the framework of Black-Scholes-Merton model of financial derivatives, a path integral approach to option pricing is presented. A general formula to price path dependent options on multidimensional and correlated underlying assets is obtained and implemented by means of various flexible and efficient algorithms. As an example, we detail the case of Asian call options. The numerical results are compared with those obtained with other procedures used in quantitative finance and found to be in good agreement. In particular, when pricing at the money (ATM) and out of the money (OTM) options, path integral exhibits competitive performances.
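A crude Monte Carlo baseline for the arithmetic-average Asian call discussed in the abstract might look as follows. This is not the authors' path-integral algorithm, and all parameters, path counts, and the seed are illustrative.

```python
import numpy as np

# Monte Carlo price of an arithmetic-average Asian call under Black-Scholes
# dynamics: simulate log-price paths, average each path, discount the payoff.
rng = np.random.default_rng(42)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0   # illustrative ATM setup
n_steps, n_paths = 50, 20_000
dt = T / n_steps

z = rng.standard_normal((n_paths, n_steps))
log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
S = S0 * np.exp(log_paths)                       # simulated price paths
payoff = np.maximum(S.mean(axis=1) - K, 0.0)     # arithmetic-average call payoff
price = float(np.exp(-r * T) * payoff.mean())
```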

  16. An accurate spline polynomial cubature formula for double integration with logarithmic singularity

    NASA Astrophysics Data System (ADS)

    Bichi, Sirajo Lawan; Eshkuvatov, Z. K.; Long, N. M. A. Nik; Bello, M. Y.

    2016-06-01

The paper studied the integration problem with logarithmic singularity J(ȳ) = ∬_∇ ζ(ȳ) log|ȳ − ȳ₀| dA, where ȳ = (α, β) and ȳ₀ = (α₀, β₀); the domain ∇ is the rectangle ∇ = [r₁, r₂] × [r₃, r₄], ȳ ∈ ∇ is an arbitrary point, and ȳ₀ ∈ ∇ is a fixed point. The given density function ζ(ȳ) is smooth on the rectangular domain ∇ and belongs to the function class C^(2,τ)(∇). A cubature formula (CF) for double integration with logarithmic singularities (LS) on the rectangle ∇ is constructed by applying the type (0, 2) modified spline function DΓ(P). Tests with linear and absolute-value density functions ζ(ȳ) show that the constructed CF is highly accurate.

  17. Accurate and automatic extrinsic calibration method for blade measurement system integrated by different optical sensors

    NASA Astrophysics Data System (ADS)

    He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu

    2014-11-01

Fast and precise 3D inspection systems are in great demand in modern manufacturing processes. At present, the available sensors have their own pros and cons, and no single omnipotent sensor exists to handle complex inspection tasks in an accurate and effective way. The prevailing solution is to integrate multiple sensors and take advantage of their strengths. To obtain a holistic 3D profile, the data from the different sensors must be registered into a coherent coordinate system. However, some complex-shaped objects such as blades have thin-wall features, for which the ICP registration method becomes unstable. It is therefore very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic parameter calibration method for a blade measurement system integrating different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated into a multi-axis motion platform, and the sensors can be optimally moved to any desired position at the object's surface. To simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation roughly aligns the artifact datasets obtained by the FPS and CHS without any manual operation or data pre-processing, and the Generalized Gauss-Markoff model is then used to estimate the optimal transformation parameters. Experiments in which several sampled patches of a blade are merged into one point cloud verify the performance of the proposed method.

  18. An accurate numerical solution to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in rivers

    NASA Astrophysics Data System (ADS)

    Stecca, Guglielmo; Siviglia, Annunziato; Blom, Astrid

    2016-07-01

    We present an accurate numerical approximation to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in one space dimension. Our solution procedure originates from the fully-unsteady matrix-vector formulation developed in [54]. The principal part of the problem is solved by an explicit Finite Volume upwind method of the path-conservative type, by which all the variables are updated simultaneously in a coupled fashion. The solution to the principal part is embedded into a splitting procedure for the treatment of frictional source terms. The numerical scheme is extended to second-order accuracy and includes a bookkeeping procedure for handling the evolution of size stratification in the substrate. We develop a concept of balancedness for the vertical mass flux between the substrate and active layer under bed degradation, which prevents the occurrence of non-physical oscillations in the grainsize distribution of the substrate. We suitably modify the numerical scheme to respect this principle. We finally verify the accuracy in our solution to the equations, and its ability to reproduce one-dimensional morphodynamics due to streamwise and vertical sorting, using three test cases. In detail, (i) we empirically assess the balancedness of vertical mass fluxes under degradation; (ii) we study the convergence to the analytical linearised solution for the propagation of infinitesimal-amplitude waves [54], which is here employed for the first time to assess a mixed-sediment model; (iii) we reproduce Ribberink's E8-E9 flume experiment [46].
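The explicit finite-volume upwind machinery underlying such schemes can be sketched in its simplest first-order scalar form (plain linear advection, not the paper's path-conservative Saint-Venant-Hirano system); grid size and CFL number are illustrative. On a periodic grid the update conserves mass exactly, the discrete analogue of the balancedness property discussed above.

```python
import numpy as np

def upwind_step(u, a, dx, dt):
    """One first-order upwind finite-volume step for u_t + a u_x = 0, a > 0, periodic grid."""
    return u - a * dt / dx * (u - np.roll(u, 1))

n = 100
dx = 1.0 / n
a, dt = 1.0, 0.5 * dx                  # CFL number 0.5 < 1: stable and monotone
u = np.where(np.abs(np.arange(n) * dx - 0.5) < 0.1, 1.0, 0.0)  # square pulse
mass0 = u.sum() * dx
for _ in range(int(1.0 / (a * dt))):   # advect for one full period
    u = upwind_step(u, a, dx, dt)
mass1 = u.sum() * dx
```

The pulse diffuses (the scheme is only first-order accurate) but never overshoots, and the total mass is unchanged to roundoff.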

  19. Simple and Efficient Numerical Evaluation of Near-Hypersingular Integrals

    NASA Technical Reports Server (NTRS)

    Fink, Patrick W.; Wilton, Donald R.; Khayat, Michael A.

    2007-01-01

Recently, significant progress has been made in the handling of singular and nearly-singular potential integrals that commonly arise in the Boundary Element Method (BEM). To facilitate object-oriented programming and handling of higher order basis functions, cancellation techniques are favored over techniques involving singularity subtraction. However, gradients of the Newton-type potentials, which produce hypersingular kernels, are also frequently required in BEM formulations. As is the case with the potentials, treatment of the near-hypersingular integrals has proven more challenging than treating the limiting case in which the observation point approaches the surface. Historically, numerical evaluation of these near-hypersingularities has often involved a two-step procedure: a singularity subtraction to reduce the order of the singularity, followed by a boundary contour integral evaluation of the extracted part. Since this evaluation necessarily links basis function, Green's function, and the integration domain (element shape), the approach fits poorly with object-oriented programming concepts. Thus, there is a need for cancellation-type techniques for efficient numerical evaluation of the gradient of the potential. Progress in the development of efficient cancellation-type procedures for the gradient potentials was recently presented. To the extent possible, a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. However, since the gradient kernel involves singularities of different orders, we also require that the transformation leaves remaining terms that are analytic. The terms "normal" and "tangential" are used herein with reference to the source element. Also, since computational formulations often involve the numerical evaluation of both potentials and their gradients, it is highly desirable that a single integration procedure efficiently handles both.

  20. A two-dimensional depth-integrated non-hydrostatic numerical model for nearshore wave propagation

    NASA Astrophysics Data System (ADS)

    Lu, Xinhua; Dong, Bingjiang; Mao, Bing; Zhang, Xiaofeng

    2015-12-01

    In this study, we develop a shallow-water depth-integrated non-hydrostatic numerical model (SNH model) using a hybrid finite-volume and finite-difference method. Numerical discretization is performed using the non-incremental pressure-correction method on a collocated grid. We demonstrate that an extension can easily be made from an existing finite-volume method and collocated-grid based hydrostatic shallow-water equations (SWE) model to a non-hydrostatic model. A series of benchmark tests are used to validate the proposed numerical model. Our results demonstrate that the proposed model is robust and well-balanced, and it captures the wet-dry fronts accurately. A comparison between the SNH and SWE models indicates the importance of considering the wave dispersion effect in simulations when the wave amplitude to water depth ratio is large.

  1. Integrating Numerical Groundwater Modeling Results With Geographic Information Systems

    NASA Astrophysics Data System (ADS)

    Witkowski, M. S.; Robinson, B. A.; Linger, S. P.

    2001-12-01

Many different types of data are used to create numerical models of flow and transport of groundwater in the vadose zone. Results from water balance studies, infiltration models, hydrologic properties, and digital elevation models (DEMs) are examples of such data. Because input data comes in a variety of formats, for consistency the data need to be assembled in a coherent fashion on a single platform. Through the use of a geographic information system (GIS), all data sources can effectively be integrated on one platform to store, retrieve, query, and display data. In our vadose zone modeling studies in support of Los Alamos National Laboratory's Environmental Restoration Project, we employ a GIS comprising a RAID storage device, an Oracle database, ESRI's spatial database engine (SDE), ArcView GIS, and custom GIS tools for three-dimensional (3D) analysis. We store traditional GIS data, such as contours, historical building footprints, and study area locations, as points, lines, and polygons with attributes. Numerical flow and transport model results from the Finite Element Heat and Mass Transfer Code (FEHM) are stored as points with attributes, such as fluid saturation, pressure, or contaminant concentration at a given location. We overlay traditional types of GIS data with numerical model results, thereby allowing us to better build conceptual models and perform spatial analyses. We have also developed specialized analysis tools to assist in the data and model analysis process. This approach provides an integrated framework for performing tasks such as comparing the model to data and understanding the relationship of model predictions to existing contaminant source locations and water supply wells. Our process of integrating GIS and numerical modeling results allows us to answer a wide variety of questions about our conceptual model design: - Which set of locations should be identified as contaminant sources based on known historical building operations

  2. Accurate integral equation theory for the central force model of liquid water and ionic solutions

    NASA Astrophysics Data System (ADS)

    Ichiye, Toshiko; Haymet, A. D. J.

    1988-10-01

    The atom-atom pair correlation functions and thermodynamics of the central force model of water, introduced by Lemberg, Stillinger, and Rahman, have been calculated accurately by an integral equation method which incorporates two new developments. First, a rapid new scheme has been used to solve the Ornstein-Zernike equation. This scheme combines the renormalization methods of Allnatt, and Rossky and Friedman with an extension of the trigonometric basis-set solution of Labik and co-workers. Second, by adding approximate ``bridge'' functions to the hypernetted-chain (HNC) integral equation, we have obtained predictions for liquid water in which the hydrogen bond length and number are in good agreement with ``exact'' computer simulations of the same model force laws. In addition, for dilute ionic solutions, the ion-oxygen and ion-hydrogen coordination numbers display both the physically correct stoichiometry and good agreement with earlier simulations. These results represent a measurable improvement over both a previous HNC solution of the central force model and the ex-RISM integral equation solutions for the TIPS and other rigid molecule models of water.

  3. Development of highly accurate approximate scheme for computing the charge transfer integral.

    PubMed

    Pershin, Anton; Szalay, Péter G

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing transfer integral along the asymmetric alteration coordinate. Since the "exact" scheme was found computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the "exact" calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature. PMID:26298117

  4. Development of highly accurate approximate scheme for computing the charge transfer integral

    SciTech Connect

    Pershin, Anton; Szalay, Péter G.

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing transfer integral along the asymmetric alteration coordinate. Since the “exact” scheme was found computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the “exact” calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.

  5. Comparison of integrated numerical experiments with accelerator and FEL experiments

    SciTech Connect

    Thode, L.E.; Carlsten, B.E.; Chan, K.C.D.; Cooper, R.K.; Elliott, J.C.; Gitomer, S.J.; Goldstein, J.C.; Jones, M.E.; McVey, B.D.; Schmitt, M.J.; Takeda, H.; Tokar, R.L.; Wang, T.S.; Young, L.M.

    1991-01-01

Even at the conceptual level the strong coupling between the laser subsystem elements, such as the accelerator, wiggler, optics, and control, greatly complicates the understanding and design of an FEL. Given the requirements for a high-performance FEL, the coupling between the laser subsystems must be included in the design approach. To address the subsystem coupling the concept of an integrated numerical experiment (INEX) has been implemented. Unique features of the INEX approach are consistency and the numerical equivalence of experimental diagnostics. The equivalent numerical diagnostics mitigate the major problem of misinterpretation that often occurs when theoretical and experimental data are compared. A complete INEX model has been applied to the 10-{mu}m high-extraction-efficiency experiment at Los Alamos and the 0.6-{mu}m Burst Mode experiment at Boeing Aerospace. In addition, various subsets of the INEX model have been compared with a number of other experiments. Overall, the agreement between INEX and the experiments is very good. With the INEX approach, it now appears possible to design high-performance FELs for numerous applications. The first full-scale test of the INEX approach is the Los Alamos HIBAF experiment. The INEX concept, implementation, and validation with experiments are discussed. 28 refs., 13 figs., 1 tab.

  6. An Improved Numerical Integration Method for Springback Predictions

    NASA Astrophysics Data System (ADS)

    Ibrahim, R.; Smith, L. M.; Golovashchenko, Sergey F.

    2011-08-01

In this investigation, the focus is on the springback of steel sheets in V-die air bending. A full replication of a numerical integration algorithm presented rigorously in [1] to predict springback in air bending was performed and confirmed successfully. Alterations and extensions of the algorithm are proposed here. The altered approach used in solving the moment equation numerically resulted in springback values much closer to the trend presented by the experimental data. Although the investigation here was extended to use a more realistic work-hardening model, the differences in the springback values obtained by the two hardening models were almost negligible. The algorithm was also extended to apply to thin sheets down to 0.8 mm. Results show that this extension is possible, as verified by FEA and other published experiments on TRIP steel sheets.

  7. INEX (integrated numerical experiment) simulations of the Boeing FEL system

    SciTech Connect

    Tokar, R.L.; Young, L.M.; Lumpkin, A.H.; McVey, B.D.; Thode, L.E.; Bender, S.C.; Chan, K.C.D. ); Yeremian, A.D.; Dowell, D.H.; Lowrey, A.R. )

    1989-01-01

    The INEX (integrated numerical experiment) numerical model is applied to the 0.6 {mu}m FEL oscillator at Boeing Aerospace and Electronics Company in Seattle, WA. This system consists of a 110 MeV L-band rf linac, a beam transport line from the accelerator to the entrance of the wiggler, the 5.0 meter THUNDER variable taper wiggler, and a near concentric two mirror optical oscillator. Many aspects of the model for the electron beam accelerator and transport line agree with experimental measurements. Predictions for lasing performance are compared with data obtained in May and June 1989 using a mild tapered wiggler. We obtain good agreement with the achieved extraction efficiency, while 1D pulse simulations reproduce the observed sideband instability. 15 refs., 11 figs.

  8. Singularity Preserving Numerical Methods for Boundary Integral Equations

    NASA Technical Reports Server (NTRS)

    Kaneko, Hideaki (Principal Investigator)

    1996-01-01

    In the past twelve months (May 8, 1995 - May 8, 1996), under the cooperative agreement with Division of Multidisciplinary Optimization at NASA Langley, we have accomplished the following five projects: a note on the finite element method with singular basis functions; numerical quadrature for weakly singular integrals; superconvergence of degenerate kernel method; superconvergence of the iterated collocation method for Hammerstein equations; and singularity preserving Galerkin method for Hammerstein equations with logarithmic kernel. This final report consists of five papers describing these projects. Each project is preceded by a brief abstract.
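
    The "numerical quadrature for weakly singular integrals" project above concerns integrals whose kernel blows up at an endpoint. A standard way to handle a logarithmic singularity is singularity subtraction, sketched below; this is a generic illustration, not Kaneko's method, and all names are illustrative:

    ```python
    import math

    def simpson(g, a, b, n):
        """Composite Simpson's rule with n (even) subintervals."""
        h = (b - a) / n
        s = g(a) + g(b)
        for i in range(1, n):
            s += (4 if i % 2 else 2) * g(a + i * h)
        return s * h / 3

    def log_singular_integral(f, n=4096):
        """Approximate I = int_0^1 ln(x) f(x) dx by singularity subtraction:
        I = int_0^1 ln(x) (f(x) - f(0)) dx + f(0) * int_0^1 ln(x) dx.
        The last integral is exactly -1, and the first integrand is bounded,
        so an ordinary quadrature rule handles it well."""
        f0 = f(0.0)

        def g(x):
            if x == 0.0:
                return 0.0  # limit of ln(x) * (f(x) - f0) as x -> 0
            return math.log(x) * (f(x) - f0)

        return simpson(g, 0.0, 1.0, n) - f0

    # Example with f(x) = 1 + x: exact value is -1 + (-1/4) = -1.25.
    approx = log_singular_integral(lambda x: 1.0 + x)
    ```

    Applying Simpson's rule directly to ln(x)·f(x) would converge slowly because the integrand is unbounded; subtracting the singular part restores near-full accuracy.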

  9. Numerical integration techniques for curved-element discretizations of molecule-solvent interfaces.

    PubMed

    Bardhan, Jaydeep P; Altman, Michael D; Willis, David J; Lippow, Shaun M; Tidor, Bruce; White, Jacob K

    2007-07-01

    Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, here methods were developed to model several important surface formulations using exact surface discretizations. Following and refining Zauhar's work [J. Comput.-Aided Mol. Des. 9, 149 (1995)], two classes of curved elements were defined that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. Numerical integration techniques are presented that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, a set of calculations is presented that compares the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. The extra accuracy is attributed to the exact representation of the solute-solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved

  10. Trigonometrically fitted two step hybrid method for the numerical integration of second order IVPs

    NASA Astrophysics Data System (ADS)

    Monovasilis, Th.; Kalogiratou, Z.; Simos, T. E.

    2016-06-01

    In this work we consider the numerical integration of second order ODEs where the first derivative is missing. We construct trigonometrically fitted two step hybrid methods. We apply the new methods on the numerical integration of several test problems.
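
    The record above concerns two-step methods for second-order ODEs with the first derivative missing, y'' = f(x, y). The authors' trigonometrically fitted schemes are not reproduced here, but the classical Numerov method, a fourth-order two-step scheme for the same problem class, can serve as a baseline sketch (variable names are illustrative):

    ```python
    import math

    def numerov(w, y0, y1, h, nsteps):
        """Classical two-step Numerov method for y'' = w(x) * y on [0, nsteps*h].
        Fourth-order accurate; needs two starting values y(0) and y(h)."""
        ys = [y0, y1]
        for n in range(1, nsteps):
            c_prev = 1.0 - h * h / 12.0 * w((n - 1) * h)
            c_cur = 1.0 + 5.0 * h * h / 12.0 * w(n * h)
            c_next = 1.0 - h * h / 12.0 * w((n + 1) * h)
            ys.append((2.0 * c_cur * ys[-1] - c_prev * ys[-2]) / c_next)
        return ys

    # Test problem y'' = -y with y(0) = 0, y'(0) = 1, whose solution is sin(x).
    # The second starting value is seeded with the exact solution for simplicity.
    h, n = 0.01, 100
    ys = numerov(lambda x: -1.0, 0.0, math.sin(h), h, n)
    # ys[-1] approximates sin(1.0)
    ```

    Trigonometric fitting tunes such coefficients so that the scheme is exact on a chosen set of trigonometric basis functions, improving accuracy for oscillatory problems like this one.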

  11. Accurate quantification of diffusion and binding kinetics of non-integral membrane proteins by FRAP.

    PubMed

    Berkovich, Ronen; Wolfenson, Haguy; Eisenberg, Sharon; Ehrlich, Marcelo; Weiss, Matthias; Klafter, Joseph; Henis, Yoav I; Urbakh, Michael

    2011-11-01

    Non-integral membrane proteins frequently act as transduction hubs in vital signaling pathways initiated at the plasma membrane (PM). Their biological activity depends on dynamic interactions with the PM, which are governed by their lateral and cytoplasmic diffusion and membrane binding/unbinding kinetics. Accurate quantification of the multiple kinetic parameters characterizing their membrane interaction dynamics has been challenging. Despite a fair number of approximate fitting functions for analyzing fluorescence recovery after photobleaching (FRAP) data, no approach was able to cope with the full diffusion-exchange problem. Here, we present an exact solution and MATLAB fitting programs for FRAP with a stationary Gaussian laser beam, allowing simultaneous determination of the membrane (un)binding rates and the diffusion coefficients. To reduce the number of fitting parameters, the cytoplasmic diffusion coefficient is determined separately. Notably, our equations include the dependence of the exchange kinetics on the distribution of the measured protein between the PM and the cytoplasm, enabling the derivation of both k(on) and k(off) without prior assumptions. After validating the fitting function by computer simulations, we confirm the applicability of our approach to live-cell data by monitoring the dynamics of GFP-N-Ras mutants under conditions with different contributions of lateral diffusion and exchange to the FRAP kinetics. PMID:21810156

  12. Comparison of four stable numerical methods for Abel's integral equation

    NASA Technical Reports Server (NTRS)

    Murio, Diego A.; Mejia, Carlos E.

    1991-01-01

    The 3-D image reconstruction from cone-beam projections in computerized tomography leads naturally, in the case of radial symmetry, to the study of Abel-type integral equations. If the experimental information is obtained from measured data, on a discrete set of points, special methods are needed in order to restore continuity with respect to the data. A new combined Regularized-Adjoint-Conjugate Gradient algorithm, together with two different implementations of the Mollification Method (one based on a data filtering technique and the other on the mollification of the kernel function) and a regularization by truncation method (initially proposed for 2-D ray sample schemes and more recently extended to 3-D cone-beam image reconstruction) are extensively tested and compared for accuracy and numerical stability as functions of the level of noise in the data.

  13. Quantitative evaluation of numerical integration schemes for Lagrangian particle dispersion models

    NASA Astrophysics Data System (ADS)

    Ramli, Huda Mohd.; Esler, J. Gavin

    2016-07-01

    A rigorous methodology for the evaluation of integration schemes for Lagrangian particle dispersion models (LPDMs) is presented. A series of one-dimensional test problems are introduced, for which the Fokker-Planck equation is solved numerically using a finite-difference discretisation in physical space and a Hermite function expansion in velocity space. Numerical convergence errors in the Fokker-Planck equation solutions are shown to be much less than the statistical error associated with a practical-sized ensemble (N = 10^6) of LPDM solutions; hence, the former can be used to validate the latter. The test problems are then used to evaluate commonly used LPDM integration schemes. The results allow for optimal time-step selection for each scheme, given a required level of accuracy. The following recommendations are made for use in operational models. First, if computational constraints require the use of moderate to long time steps, it is more accurate to solve the random displacement model approximation to the LPDM rather than use existing schemes designed for long time steps. Second, useful gains in numerical accuracy can be obtained, at moderate additional computational cost, by using the relatively simple "small-noise" scheme of Honeycutt.
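
    The random displacement model recommended above for long time steps is a stochastic differential equation integrated with schemes such as Euler-Maruyama. A minimal sketch, assuming a one-dimensional diffusivity profile K(x) (the drift term K'(x) enforces the well-mixed condition); parameters are illustrative:

    ```python
    import math
    import random

    def random_displacement_step(x, k, dkdx, dt, rng):
        """One Euler-Maruyama step of the random displacement model
        dX = K'(X) dt + sqrt(2 K(X)) dW."""
        return x + dkdx(x) * dt + math.sqrt(2.0 * k(x) * dt) * rng.gauss(0.0, 1.0)

    def simulate(n_particles, n_steps, dt, k, dkdx, seed=1):
        """Advance an ensemble of particles released at x = 0."""
        rng = random.Random(seed)
        xs = [0.0] * n_particles
        for _ in range(n_steps):
            xs = [random_displacement_step(x, k, dkdx, dt, rng) for x in xs]
        return xs

    # Constant diffusivity K = 0.5: after time T = 1, Var[X] should approach
    # 2 * K * T = 1.0, which serves as a basic statistical check.
    xs = simulate(20000, 100, 0.01, lambda x: 0.5, lambda x: 0.0)
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    ```

    The paper's point is that the statistical error of such an ensemble dominates, so a numerically converged Fokker-Planck solution can serve as the reference for validating the scheme.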

  14. A fast numerical solution of scattering by a cylinder: Spectral method for the boundary integral equations

    NASA Technical Reports Server (NTRS)

    Hu, Fang Q.

    1994-01-01

    It is known that the exact analytic solutions of wave scattering by a circular cylinder, when they exist, are not in a closed form but in infinite series which converges slowly for high frequency waves. In this paper, we present a fast numerical solution for the scattering problem in which the boundary integral equations, reformulated from the Helmholtz equation, are solved using a Fourier spectral method. It is shown that the special geometry considered here allows the implementation of the spectral method to be simple and very efficient. The present method differs from previous approaches in that the singularities of the integral kernels are removed and dealt with accurately. The proposed method preserves the spectral accuracy and is shown to have an exponential rate of convergence. Aspects of efficient implementation using FFT are discussed. Moreover, the boundary integral equations of combined single and double-layer representation are used in the present paper. This ensures the uniqueness of the numerical solution for the scattering problem at all frequencies. Although a strongly singular kernel is encountered for the Neumann boundary conditions, we show that the hypersingularity can be handled easily in the spectral method. Numerical examples that demonstrate the validity of the method are also presented.
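
    The exponential convergence claimed above rests on a classical fact exploited by Fourier spectral methods on a circle: the equispaced trapezoidal rule is spectrally accurate for smooth periodic integrands. A minimal demonstration (illustrative integrand, not the paper's kernel):

    ```python
    import math

    def periodic_trapezoid(f, n):
        """Trapezoidal rule on [0, 2*pi) for a periodic integrand: just an
        equispaced Riemann sum, yet exponentially accurate for smooth f."""
        h = 2.0 * math.pi / n
        return h * sum(f(i * h) for i in range(n))

    f = lambda t: math.exp(math.cos(t))  # smooth, 2*pi-periodic test integrand
    ref = periodic_trapezoid(f, 256)     # effectively exact reference
    errs = [abs(periodic_trapezoid(f, n) - ref) for n in (4, 8, 16)]
    # errs shrinks by orders of magnitude each time n doubles
    ```

    Doubling the number of nodes here roughly squares the error, which is why the singularity-removed boundary integral equations on the cylinder can be solved to spectral accuracy with very few unknowns.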

  15. New Techniques for Simulation of Ion Implantation by Numerical Integration of Boltzmann Transport Equation

    NASA Astrophysics Data System (ADS)

    Wang, Shyh-Wei; Guo, Shuang-Fa

    1998-01-01

    New techniques for more accurate and efficient simulation of ion implantation by a stepwise numerical integration of the Boltzmann transport equation (BTE) have been developed in this work. Instead of a uniform energy grid, a non-uniform grid is employed to construct the momentum distribution matrix, yielding more accurate results for heavy ions implanted into silicon. At the same time, rather than the conventional Lindhard, Nielsen and Scharff (LNS) approximation, an exact evaluation of the integrals involving the nuclear differential scattering cross-section (dσ_n = 2πp dp) is proposed. The impact parameter p as a function of ion energy E and scattering angle φ is obtained by solving the magic formula iteratively, and an interpolation technique is devised for the simulation process. The simulation using exact evaluation is about 3.5 times faster than that using the Littmark and Ziegler (LZ) spline-fitted cross-section function for phosphorus implantation into silicon.

  16. Carbon Dioxide Dispersion in the Combustion Integrated Rack Simulated Numerically

    NASA Technical Reports Server (NTRS)

    Wu, Ming-Shin; Ruff, Gary A.

    2004-01-01

    When discharged into an International Space Station (ISS) payload rack, a carbon dioxide (CO2) portable fire extinguisher (PFE) must extinguish a fire by decreasing the oxygen in the rack by 50 percent within 60 sec. The length of time needed for this oxygen reduction throughout the rack and the length of time that the CO2 concentration remains high enough to prevent the fire from reigniting are important when determining the effectiveness of the response and postfire procedures. Furthermore, in the absence of gravity, the local flow velocity can make the difference between a fire that spreads rapidly and one that self-extinguishes after ignition. A numerical simulation of the discharge of CO2 from a PFE into the Combustion Integrated Rack (CIR) in microgravity was performed to obtain the local velocity and CO2 concentration. The complicated flow field around the PFE nozzle exits was modeled by sources of equivalent mass and momentum flux at a location downstream of the nozzle. The time for the concentration of CO2 to reach a level that would extinguish a fire anywhere in the rack was determined using the Fire Dynamics Simulator (FDS), a computational fluid dynamics code developed by the National Institute of Standards and Technology specifically to evaluate fire development and smoke transport. The simulation shows that CO2, as well as any smoke and combustion gases produced by a fire, would be discharged into the ISS cabin through the resource utility panel at the bottom of the rack. These simulations will be validated by comparing the results with velocity and CO2 concentration measurements obtained during the fire suppression system verification tests conducted on the CIR in March 2003. Once these numerical simulations are validated, portions of the ISS labs and living areas will be modeled to determine the local flow conditions before, during, and after a fire event.
These simulations can yield specific information about how long it takes for smoke and

  17. Black shale weathering: An integrated field and numerical modeling study

    NASA Astrophysics Data System (ADS)

    Bolton, E. W.; Wildman, R. A., Jr.; Berner, R. A.; Eckert, J. O., Jr.; Petsch, S. T.; Mok, U.; Evans, B.

    2003-04-01

    We present an integrated study of black shale weathering in a near surface environment. Implications of this study contribute to our understanding of organic matter oxidation in uplifted sediments, along with erosion and reburial of ancient unoxidized organic matter, as major controls on atmospheric oxygen levels over geologic time. The field study used to launch the modeling effort is based on core samples from central-eastern Kentucky near Clay City (Late Devonian New Albany/Ohio Shale), where the strata are essentially horizontal. Samples from various depth intervals (up to 12 m depth) were analyzed for texture (SEM images), porosity fraction (0.02 to 0.1), and horizontal and vertical permeability (water and air permeabilities differ due to the fine-grained nature of the sediments, but are on the order of 0.01 to 1 millidarcy). Chemical analyses were also performed for per cent C, N, S, and basic mineralogy was determined (clays, quartz, pyrite, in addition to organic matter). The samples contained from 2 to 15 per cent ancient (non-modern soil) organic matter. These results were used in the creation of a numerical model for kinetically controlled oxidation of the organic matter within the shale (based on kinetics from Chang and Berner, 1999). The one-dimensional model includes erosion, oxygen diffusion in the partially saturated vadose zone as well as water percolation and solute transport. This study extends the studies of Petsch (2000) and the weathering component of Lasaga and Ohmoto (2002) to include more reactions (e.g., pyrite oxidation to sulfuric acid and weathering of silicates due to low pH) and to resolve the near-surface boundary layer. The model provides a convenient means of exploring the influence of variable rates of erosion, oxygen level, rainfall, as well as physical and chemical characteristics of the shale on organic matter oxidation.

  18. Numerical and experimental studies of coupling-induced phase shift in resonator and interferometric integrated optics devices.

    PubMed

    Tobing, L Y M; Tjahjana, L; Darmawan, S; Zhang, D H

    2012-02-27

    Coupling-induced effects are higher-order effects inherent in waveguide evanescent coupling that are known to spectrally distort the optical performance of integrated optics devices formed by coupled resonators. We present both numerical and experimental studies of coupling-induced phase shift in various basic integrated optics devices. Rigorous finite difference time domain simulations and systematic experimental characterizations of different basic structures were conducted for more accurate parameter extraction, where it can be observed that the coupling-induced wave vector may change sign with increasing gap separation. The devices characterized in this work were fabricated by CMOS-process 193 nm Deep UV (DUV) lithography in silicon-on-insulator (SOI) technology. PMID:22418385

  19. Data Integrity: Why Aren't the Data Accurate? AIR 1989 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Gose, Frank J.

    The accuracy and reliability aspects of data integrity are discussed, with an emphasis on the need for consistency in responsibility and authority. A variety of ways in which data integrity can be compromised are discussed. The following sources of data corruption are described, and the ease or difficulty of identification and suggested actions…

  20. Integrating Numerical Computation into the Modeling Instruction Curriculum

    ERIC Educational Resources Information Center

    Caballero, Marcos D.; Burk, John B.; Aiken, John M.; Thoms, Brian D.; Douglas, Scott S.; Scanlon, Erin M.; Schatz, Michael F.

    2014-01-01

    Numerical computation (the use of a computer to solve, simulate, or visualize a physical problem) has fundamentally changed the way scientific research is done. Systems that are too difficult to solve in closed form are probed using computation. Experiments that are impossible to perform in the laboratory are studied numerically. Consequently, in…

  1. Applying integrals of motion to the numerical solution of differential equations

    NASA Technical Reports Server (NTRS)

    Jezewski, D. J.

    1979-01-01

    A method is developed for using the integrals of systems of nonlinear, ordinary differential equations in a numerical integration process to control the local errors in these integrals and reduce the global errors of the solution. The method is general and can be applied to either scalar or vector integrals. A number of example problems, with accompanying numerical results, are used to verify the analysis and support the conjecture of global error reduction.
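
    The idea of exploiting an integral of motion can be sketched for the harmonic oscillator, whose energy E = (x^2 + v^2)/2 is conserved: after each step of an ordinary integrator, the state is projected back onto the energy level set. This is a generic illustration of the technique, not Jezewski's specific method:

    ```python
    import math

    def rk4_step(f, y, h):
        """One classical fourth-order Runge-Kutta step for y' = f(y)."""
        k1 = f(y)
        k2 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k1)])
        k3 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k2)])
        k4 = f([yi + h * ki for yi, ki in zip(y, k3)])
        return [yi + h / 6.0 * (a + 2 * b + 2 * c + d)
                for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

    def oscillator(y):
        """x' = v, v' = -x; conserved energy E = (x^2 + v^2) / 2."""
        return [y[1], -y[0]]

    def integrate_with_projection(y0, h, nsteps):
        """RK4 with a post-step projection onto the energy integral:
        rescale (x, v) so E stays exactly at its initial value."""
        e0 = 0.5 * (y0[0] ** 2 + y0[1] ** 2)
        y = list(y0)
        for _ in range(nsteps):
            y = rk4_step(oscillator, y, h)
            e = 0.5 * (y[0] ** 2 + y[1] ** 2)
            y = [math.sqrt(e0 / e) * yi for yi in y]
        return y

    # Integrate to t = 10; the exact solution with y0 = [1, 0] is x = cos(t).
    y = integrate_with_projection([1.0, 0.0], 0.1, 100)
    ```

    The projection removes the secular drift in the conserved quantity, which is the mechanism by which controlling the local error in an integral can reduce the global error of the trajectory.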

  2. Numerical Integration with GeoGebra in High School

    ERIC Educational Resources Information Center

    Herceg, Dorde; Herceg, Dragoslav

    2010-01-01

    The concept of definite integral is almost always introduced as the Riemann integral, which is defined in terms of the Riemann sum, and its geometric interpretation. This definition is hard to understand for high school students. With the aid of mathematical software for visualisation and computation of approximate integrals, the notion of…
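
    The approximate-integral rules that such software visualises are the same ones named in the collection's other records (midpoint, trapezoidal, Simpson's). A compact side-by-side sketch, using ∫₀¹ x² dx = 1/3 as the test integral:

    ```python
    def midpoint(f, a, b, n):
        """Midpoint rule with n subintervals."""
        h = (b - a) / n
        return h * sum(f(a + (i + 0.5) * h) for i in range(n))

    def trapezoid(f, a, b, n):
        """Trapezoidal rule with n subintervals."""
        h = (b - a) / n
        return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

    def simpson(f, a, b, n):
        """Simpson's rule with n (even) subintervals."""
        h = (b - a) / n
        s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
        return s * h / 3

    f = lambda x: x * x  # exact integral over [0, 1] is 1/3
    approx_m = midpoint(f, 0.0, 1.0, 8)
    approx_t = trapezoid(f, 0.0, 1.0, 8)
    approx_s = simpson(f, 0.0, 1.0, 4)
    ```

    For a convex integrand the midpoint error is about half the trapezoidal error and of opposite sign, while Simpson's rule is exact here because the integrand is a polynomial of degree at most three.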

  3. A simple and accurate algorithm for path integral molecular dynamics with the Langevin thermostat.

    PubMed

    Liu, Jian; Li, Dezhang; Liu, Xinzijian

    2016-07-14

    We introduce a novel simple algorithm for thermostatting path integral molecular dynamics (PIMD) with the Langevin equation. The staging transformation of path integral beads is employed for demonstration. The optimum friction coefficients for the staging modes in the free particle limit are used for all systems. In comparison to the path integral Langevin equation thermostat, the new algorithm exploits a different order of splitting for the phase space propagator associated to the Langevin equation. While the error analysis is made for both algorithms, they are also employed in the PIMD simulations of three realistic systems (the H2O molecule, liquid para-hydrogen, and liquid water) for comparison. It is shown that the new thermostat increases the time interval of PIMD by a factor of 4-6 or more for achieving the same accuracy. In addition, the supplementary material shows the error analysis made for the algorithms when the normal-mode transformation of path integral beads is used. PMID:27421393
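
    The splitting idea behind such thermostats can be illustrated on a single classical particle with the widely used BAOAB ordering of the Langevin propagator. This is a generic sketch, not the paper's PIMD staging-mode algorithm, and all parameter values are illustrative:

    ```python
    import math
    import random

    def baoab_step(x, p, force, dt, gamma, kT, m, rng):
        """One BAOAB splitting step of underdamped Langevin dynamics:
        half kick (B), half drift (A), exact Ornstein-Uhlenbeck update (O),
        half drift (A), half kick (B)."""
        p += 0.5 * dt * force(x)
        x += 0.5 * dt * p / m
        c1 = math.exp(-gamma * dt)
        p = c1 * p + math.sqrt((1.0 - c1 * c1) * m * kT) * rng.gauss(0.0, 1.0)
        x += 0.5 * dt * p / m
        p += 0.5 * dt * force(x)
        return x, p

    # Harmonic oscillator U = x^2 / 2 at kT = 1: equilibrium <x^2> = kT = 1.
    rng = random.Random(0)
    x, p, acc, nsamp = 0.0, 0.0, 0.0, 200000
    for _ in range(nsamp):
        x, p = baoab_step(x, p, lambda q: -q, 0.1, 1.0, 1.0, 1.0, rng)
        acc += x * x
    mean_x2 = acc / nsamp
    ```

    Different orderings of the same sub-propagators (kick, drift, noise) have markedly different sampling errors at large time steps, which is exactly the effect the paper exploits to lengthen the usable PIMD time interval.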

  4. A simple and accurate algorithm for path integral molecular dynamics with the Langevin thermostat

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Li, Dezhang; Liu, Xinzijian

    2016-07-01

    We introduce a novel simple algorithm for thermostatting path integral molecular dynamics (PIMD) with the Langevin equation. The staging transformation of path integral beads is employed for demonstration. The optimum friction coefficients for the staging modes in the free particle limit are used for all systems. In comparison to the path integral Langevin equation thermostat, the new algorithm exploits a different order of splitting for the phase space propagator associated to the Langevin equation. While the error analysis is made for both algorithms, they are also employed in the PIMD simulations of three realistic systems (the H2O molecule, liquid para-hydrogen, and liquid water) for comparison. It is shown that the new thermostat increases the time interval of PIMD by a factor of 4-6 or more for achieving the same accuracy. In addition, the supplementary material shows the error analysis made for the algorithms when the normal-mode transformation of path integral beads is used.

  5. Numerical parameter constraints for accurate PIC-DSMC simulation of breakdown from arc initiation to stable arcs

    NASA Astrophysics Data System (ADS)

    Moore, Christopher; Hopkins, Matthew; Moore, Stan; Boerner, Jeremiah; Cartwright, Keith

    2015-09-01

    Simulation of breakdown is important for understanding and designing a variety of applications such as mitigating undesirable discharge events. Such simulations need to be accurate through early time arc initiation to late time stable arc behavior. Here we examine constraints on the timestep and mesh size required for arc simulations using the particle-in-cell (PIC) method with direct simulation Monte Carlo (DSMC) collisions. Accurate simulation of electron avalanche across a fixed voltage drop and constant neutral density (reduced field of 1000 Td) was found to require a timestep ~ 1/100 of the mean time between collisions and a mesh size ~ 1/25 the mean free path. These constraints are much smaller than the typical PIC-DSMC requirements for timestep and mesh size. Both constraints are related to the fact that charged particles are accelerated by the external field. Thus gradients in the electron energy distribution function can exist at scales smaller than the mean free path, and these must be resolved by the mesh size for accurate collision rates. Additionally, the timestep must be small enough that the particle energy change due to the fields is small, in order to capture gradients in the cross sections versus energy. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
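
    The quoted constraints (timestep ~ 1/100 of the mean collision time, mesh ~ 1/25 of the mean free path) translate directly into simulation parameters once the gas density, cross section, and particle speed are known. A small helper, with purely illustrative input values:

    ```python
    def pic_dsmc_constraints(n_neutral, sigma, v_mean):
        """Timestep and mesh-size bounds from the constraints quoted above.
        n_neutral: neutral density [m^-3], sigma: collision cross section [m^2],
        v_mean: characteristic particle speed [m/s]."""
        mfp = 1.0 / (n_neutral * sigma)  # mean free path
        tau = mfp / v_mean               # mean time between collisions
        return tau / 100.0, mfp / 25.0

    # Illustrative numbers only: n = 1e22 m^-3, sigma = 1e-19 m^2, v = 1e6 m/s
    # give a mean free path of 1 mm and a collision time of 1 ns.
    dt_max, dx_max = pic_dsmc_constraints(1e22, 1e-19, 1e6)
    ```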

  6. Implicit numerical integration for periodic solutions of autonomous nonlinear systems

    NASA Technical Reports Server (NTRS)

    Thurston, G. A.

    1982-01-01

    A change of variables that stabilizes numerical computations for periodic solutions of autonomous systems is derived. Computation of the period is decoupled from the rest of the problem for conservative systems of any order and for any second-order system. Numerical results are included for a second-order conservative system under a suddenly applied constant load. Near the critical load for the system, a small increment in load amplitude results in a large increase in amplitude of the response.

  7. Multidimensional Genome-wide Analyses Show Accurate FVIII Integration by ZFN in Primary Human Cells

    PubMed Central

    Sivalingam, Jaichandran; Kenanov, Dimitar; Han, Hao; Nirmal, Ajit Johnson; Ng, Wai Har; Lee, Sze Sing; Masilamani, Jeyakumar; Phan, Toan Thang; Maurer-Stroh, Sebastian; Kon, Oi Lian

    2016-01-01

    Costly coagulation factor VIII (FVIII) replacement therapy is a barrier to optimal clinical management of hemophilia A. Therapy using FVIII-secreting autologous primary cells is potentially efficacious and more affordable. Zinc finger nucleases (ZFN) mediate transgene integration into the AAVS1 locus but comprehensive evaluation of off-target genome effects is currently lacking. In light of serious adverse effects in clinical trials which employed genome-integrating viral vectors, this study evaluated potential genotoxicity of ZFN-mediated transgenesis using different techniques. We employed deep sequencing of predicted off-target sites, copy number analysis, whole-genome sequencing, and RNA-seq in primary human umbilical cord-lining epithelial cells (CLECs) with AAVS1 ZFN-mediated FVIII transgene integration. We combined molecular features to enhance the accuracy and activity of ZFN-mediated transgenesis. Our data showed a low frequency of ZFN-associated indels, no detectable off-target transgene integrations or chromosomal rearrangements. ZFN-modified CLECs had very few dysregulated transcripts and no evidence of activated oncogenic pathways. We also showed AAVS1 ZFN activity and durable FVIII transgene secretion in primary human dermal fibroblasts, bone marrow- and adipose tissue-derived stromal cells. Our study suggests that, with close attention to the molecular design of genome-modifying constructs, AAVS1 ZFN-mediated FVIII integration in several primary human cell types may be safe and efficacious. PMID:26689265

  8. Switched integration amplifier-based photocurrent meter for accurate spectral responsivity measurement of photometers.

    PubMed

    Park, Seongchong; Hong, Kee-Suk; Kim, Wan-Seop

    2016-03-20

    This work introduces a switched integration amplifier (SIA)-based photocurrent meter for femtoampere (fA)-level current measurement, which enables us to measure a 10^7 dynamic range of spectral responsivity of photometers even with a common lamp-based monochromatic light source. We described design considerations and practices about operational amplifiers (op-amps), switches, readout methods, etc., to compose a stable SIA of low offset current in terms of leakage current and gain peaking in detail. According to the design, we made six SIAs of different integration capacitance and different op-amps and evaluated their offset currents. They showed an offset current of (1.5-85) fA with a slow variation of (0.5-10) fA for an hour under opened input. Applying a detector to the SIA input, the offset current and its variation were increased and the SIA readout became noisier due to finite shunt resistance and nonzero shunt capacitance of the detector. One of the SIAs with 10 pF nominal capacitance was calibrated using a calibrated current source at the current level of 10 nA to 1 fA and at the integration time of 2 to 65,536 ms. As a result, we obtained a calibration formula for integration capacitance as a function of integration time rather than a single capacitance value because the SIA readout showed a distinct dependence on integration time at a given current level. Finally, we applied it to spectral responsivity measurement of a photometer. It is demonstrated that the home-made SIA of 10 pF was capable of measuring a 10^7 dynamic range of spectral responsivity of a photometer. PMID:27140564
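
    The measurement principle of a switched integration amplifier is charge integration: the unknown current charges the integration capacitor, so the current follows from the readout voltage slope as I = C·ΔV / t_int. A minimal sketch (the 10 pF value echoes the abstract; the voltage and time are illustrative):

    ```python
    def sia_current(delta_v, c_int, t_int):
        """Current inferred from a switched-integrator readout:
        I = C * delta_V / t_int, with delta_v in volts, c_int in farads,
        t_int in seconds; returns amperes."""
        return c_int * delta_v / t_int

    # A 10 pF integrator ramping 1 V in 1 s corresponds to 10 pA.
    current = sia_current(1.0, 10e-12, 1.0)
    ```

    This relation also shows why long integration times extend the range toward femtoamperes: the smaller the current, the longer one integrates to accumulate a measurable voltage.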

  9. Experimental analysis and numerical modeling of mollusk shells as a three dimensional integrated volume.

    PubMed

    Faghih Shojaei, M; Mohammadi, V; Rajabi, H; Darvizeh, A

    2012-12-01

    In this paper, a new numerical technique is presented to accurately model the geometrical and mechanical features of mollusk shells as a three dimensional (3D) integrated volume. For this purpose, the Newton method is used to solve the nonlinear equations of shell surfaces. The points of intersection on the shell surface are identified and the extra interior parts are removed. Meshing process is accomplished with respect to the coordinate of each point of intersection. The final 3D generated mesh models perfectly describe the spatial configuration of the mollusk shells. Moreover, the computational model perfectly matches with the actual interior geometry of the shells as well as their exterior architecture. The direct generation technique is employed to generate a 3D finite element (FE) model in ANSYS 11. X-ray images are taken to show the close similarity of the interior geometry of the models and the actual samples. A scanning electron microscope (SEM) is used to provide information on the microstructure of the shells. In addition, a set of compression tests were performed on gastropod shell specimens to obtain their ultimate compressive strength. A close agreement between experimental data and the relevant numerical results is demonstrated. PMID:23137621

  10. Numerical validation of MR-measurement-integrated simulation of blood flow in a cerebral aneurysm.

    PubMed

    Funamoto, Kenichi; Suzuki, Yoshitsugu; Hayase, Toshiyuki; Kosugi, Takashi; Isoda, Haruo

    2009-06-01

    This study proposes magnetic resonance (MR)-measurement-integrated (MR-MI) simulation, in which the difference between the computed velocity field and the phase-contrast MRI measurement data is fed back to the numerical simulation. The computational accuracy and the fundamental characteristics, such as steady characteristics and transient characteristics, of the MR-MI simulation were investigated by a numerical experiment. We dealt with reproduction of three-dimensional steady and unsteady blood flow fields in a realistic cerebral aneurysm developed at a bifurcation. The MR-MI simulation reduced the error derived from the incorrect boundary conditions in the blood flow in the cerebral aneurysm. For the reproduction of steady and unsteady standard solutions, the error of velocity decreased to 13% and to 22% in one cardiac cycle, respectively, compared with the ordinary simulation without feedback. Moreover, the application of feedback shortened the computational convergence time, and thus the convergent solution and periodic solution were obtained in less computational time in the MR-MI simulation than in the ordinary simulation. The dividing flow ratio toward the two outlets after bifurcation was well estimated owing to the improvement of computational accuracy. Furthermore, the MR-MI simulation yielded the wall shear stress distribution on the cerebral aneurysm of the standard solution accurately and in detail. PMID:19350390
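
    The measurement-integration idea (feeding the simulation-measurement mismatch back into the solver) can be illustrated with a toy scalar model: a deliberately wrong decay rate, corrected at each step by a feedback term proportional to the difference from "measured" data. This is an analogy to the MR-MI scheme, not its implementation; all names and gains are illustrative:

    ```python
    import math

    def simulate(k_gain, t_end=2.0, dt=0.001):
        """Forward-Euler integration of a wrong model dv/dt = -2*v
        (the 'true' decay rate is 1), with feedback K*(v_meas - v)
        nudging the state toward synthetic measurements exp(-t)."""
        v, t = 1.0, 0.0
        for _ in range(int(round(t_end / dt))):
            v_meas = math.exp(-t)  # synthetic measurement of the true system
            v += dt * (-2.0 * v + k_gain * (v_meas - v))
            t += dt
        return v

    err_plain = abs(simulate(0.0) - math.exp(-2.0))   # no feedback
    err_fb = abs(simulate(50.0) - math.exp(-2.0))     # with feedback
    ```

    Even with an incorrect model parameter, the feedback drives the computed state toward the measured trajectory, mirroring how MR-MI simulation compensates for incorrect boundary conditions.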

  11. Towards more accurate life cycle risk management through integration of DDP and PRA

    NASA Technical Reports Server (NTRS)

    Cornford, Steven L.; Paulos, Todd; Meshkat, Leila; Feather, Martin

    2003-01-01

    The focus of this paper is on the integration of PRA and DDP. The intent is twofold: to extend risk-based decision making through more of the lifecycle, and to lead to improved risk modeling (hence better-informed decision making) wherever it is applied, most especially in the early phases as designs begin to mature.

  12. Multi-Sensor Data Integration for an Accurate 3D Model Generation

    NASA Astrophysics Data System (ADS)

    Chhatkuli, S.; Satoh, T.; Tachibana, K.

    2015-05-01

    The aim of this paper is to introduce a novel technique of data integration between two different data sets, i.e. a laser scanned RGB point cloud and a 3D model derived from oblique imageries, to create a 3D model with more details and better accuracy. In general, aerial imageries are used to create a 3D city model. Aerial imageries produce an overall decent 3D city model and generally suit the generation of 3D models of building roofs and some non-complex terrain. However, the 3D model automatically generated from aerial imageries generally suffers from a lack of accuracy in deriving the 3D model of roads under bridges, details under tree canopy, isolated trees, etc. Moreover, in many cases it also suffers from undulated road surfaces, non-conforming building shapes, and loss of minute details like street furniture. On the other hand, laser scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, details under bridges, etc. However, laser scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. Our proposed approach to integrating multi-sensor data compensates for each sensor's weaknesses and helped to create a very detailed 3D model with better accuracy. Moreover, additional details like isolated trees, street furniture, etc., which were missing in the original 3D model derived from aerial imageries, could also be integrated into the final model automatically. During the process, noise in the laser scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two data sets were acquired in different time periods, the integrated data set, or the final 3D model, was generally noise free and without unnecessary details.

  13. Evaluating Mesoscale Numerical Weather Predictions and Spatially Distributed Meteorologic Forcing Data for Developing Accurate SWE Forecasts over Large Mountain Basins

    NASA Astrophysics Data System (ADS)

    Hedrick, A. R.; Marks, D. G.; Winstral, A. H.; Marshall, H. P.

    2014-12-01

The ability to forecast snow water equivalent, or SWE, in mountain catchments would benefit many different communities, ranging from avalanche hazard mitigation to water resource management. Historical model runs of Isnobal, the physically based energy balance snow model, have been produced over the 2150 km² Boise River Basin for water years 2012-2014 at 100-meter resolution. Spatially distributed forcing parameters such as precipitation, wind, and relative humidity are generated from automated weather stations located throughout the watershed and are supplied to Isnobal at hourly timesteps. Similarly, the Weather Research & Forecasting (WRF) Model provides hourly predictions of the same forcing parameters from an atmospheric physics perspective. This work aims to quantitatively compare WRF model output to the spatial meteorologic fields developed to force Isnobal, with the hope of eventually using WRF predictions to create accurate hourly forecasts of SWE over a large mountainous basin.
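The grid-to-grid comparison of WRF output against station-derived forcing fields can be sketched as a simple skill computation. The function name, the arrays, and the choice of bias/RMSE as metrics are illustrative assumptions, not the authors' actual workflow:

```python
import numpy as np

def forcing_skill(model_field, reference_field):
    """Grid-cell bias and RMSE between a forecast forcing field (e.g. a
    WRF temperature grid) and a station-interpolated reference field."""
    diff = np.asarray(model_field) - np.asarray(reference_field)
    bias = float(np.mean(diff))
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    return bias, rmse
```

In practice each hourly WRF field would be regridded to the 100-meter Isnobal grid before such metrics are accumulated.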

  14. The numerical integration and 3-D finite element formulation of a viscoelastic model of glass

    SciTech Connect

    Chambers, R.S.

    1994-08-01

    The use of glasses is widespread in making hermetic, insulating seals for many electronic components. Flat panel displays and fiber optic connectors are other products utilizing glass as a structural element. When glass is cooled from sealing temperatures, residual stresses are generated due to mismatches in thermal shrinkage created by the dissimilar material properties of the adjoining materials. Because glass is such a brittle material at room temperature, tensile residual stresses must be kept small to ensure durability and avoid cracking. Although production designs and the required manufacturing process development can be deduced empirically, this is an expensive and time consuming process that does not necessarily lead to an optimal design. Agile manufacturing demands that analyses be used to reduce development costs and schedules by providing insight and guiding the design process through the development cycle. To make these gains, however, viscoelastic models of glass must be available along with the right tool to use them. A viscoelastic model of glass can be used to simulate the stress and volume relaxation that occurs at elevated temperatures as the molecular structure of the glass seeks to equilibrate to the state of the supercooled liquid. The substance of the numerical treatment needed to support the implementation of the model in a 3-D finite element program is presented herein. An accurate second-order, central difference integrator is proposed for the constitutive equations, and numerical solutions are compared to those obtained with other integrators. Inherent convergence problems are reviewed and fixes are described. The resulting algorithms are generally applicable to the broad class of viscoelastic material models. First-order error estimates are used as a basis for developing a scheme for automatic time step controls, and several demonstration problems are presented to illustrate the performance of the methodology.
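A second-order central-difference (trapezoidal) update of the kind proposed can be sketched for the simplest viscoelastic constitutive law, a one-branch Maxwell model dσ/dt = E dε/dt − σ/τ. The function and parameters below are a hedged illustration, not the paper's glass model:

```python
import math

def maxwell_stress_step(sigma, deps, dt, E, tau):
    """One central-difference (trapezoidal) update of the Maxwell model
    dsigma/dt = E*deps/dt - sigma/tau.  Evaluating the relaxation term at
    the midpoint (sigma_n + sigma_{n+1})/2 makes the step second-order
    accurate in dt and unconditionally stable."""
    a = dt / (2.0 * tau)
    return ((1.0 - a) * sigma + E * deps) / (1.0 + a)
```

For pure relaxation (deps = 0) the scheme reproduces the exponential decay exp(-t/τ) to second order in the step size.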

  15. Construction of the two-electron contribution to the Fock matrix by numerical integration

    NASA Astrophysics Data System (ADS)

    Losilla, Sergio A.; Mehine, Mooses M.; Sundholm, Dage

    2012-10-01

A novel method to numerically calculate the Fock matrix is presented. The Coulomb operator is re-expressed as an integral identity, which is discretized. The discretization of the auxiliary t dimension separates the x, y, and z dependencies, transforming the two-electron Coulomb integrals of Gaussian-type orbitals (GTO) into a linear sum of products of two-dimensional integrals. The s-type integrals are calculated analytically and integrals of the higher angular-momentum functions are obtained using recursion formulae. The contributions to the two-body Coulomb integrals obtained for each discrete t value can be evaluated independently. The two-body Fock matrix elements can be integrated numerically, using common sets of quadrature points and weights. The aim is to calculate Fock matrices of sufficient accuracy for electronic structure calculations. Preliminary calculations indicate that it is possible to achieve an overall accuracy of at least 10⁻¹² Eh using the numerical approach.
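The integral identity behind this separation can be sketched in its simplest form: 1/r = (2/√π) ∫₀^∞ exp(−r²t²) dt, so discretizing the auxiliary t axis turns a single 1/r value into a weighted sum over t points. The quadrature below (finite t cutoff, trapezoidal rule) is a minimal illustration of the idea, not the paper's GTO machinery:

```python
import numpy as np

def coulomb_quadrature(r, t_max=30.0, n=3000):
    """Approximate 1/r via 1/r = (2/sqrt(pi)) * int_0^inf exp(-r^2 t^2) dt,
    discretized on a finite t grid with the trapezoidal rule.  Each t point
    contributes independently, mirroring the separable structure the paper
    exploits for the two-electron integrals."""
    t = np.linspace(0.0, t_max, n)
    f = np.exp(-(t * r) ** 2)
    h = t[1] - t[0]
    integral = h * (np.sum(f) - 0.5 * (f[0] + f[-1]))
    return float(2.0 / np.sqrt(np.pi) * integral)
```

Because the integrand is a Gaussian in t, the trapezoidal rule converges very rapidly here.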

  16. Enabling fast, stable and accurate peridynamic computations using multi-time-step integration

    DOE PAGES

    Lindsay, P.; Parks, M. L.; Prakash, A.

    2016-04-13

Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.
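The subcycling idea can be sketched on the smallest possible example: two coupled unit-mass oscillators, where a stiff "fine" degree of freedom is advanced with dt/ratio substeps while its "coarse" neighbor takes a single step dt, with the coarse position linearly interpolated during the substeps. This is an illustrative toy, not the paper's peridynamic formulation:

```python
def multirate_step(xc, vc, xf, vf, dt, k=1.0, kf=400.0, ratio=10):
    """One multi-time-step update of two coupled unit-mass oscillators.
    xc/vc: coarse DOF (soft coupling spring k); xf/vf: fine DOF with a
    stiff on-site spring kf, subcycled with step dt/ratio."""
    # coarse update (symplectic Euler) using the start-of-step fine position
    xc_old = xc
    vc += dt * k * (xf - xc)
    xc += dt * vc
    # subcycle the stiff fine DOF, interpolating the coarse position
    h = dt / ratio
    for i in range(ratio):
        s = (i + 1) / ratio
        xc_mid = (1.0 - s) * xc_old + s * xc   # linear interpolation
        vf += h * (-kf * xf + k * (xc_mid - xf))
        xf += h * vf
    return xc, vc, xf, vf
```

The coarse step only needs to resolve the slow coupling, while the substeps resolve the fast frequency sqrt(kf), which is the efficiency argument made in the abstract.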

  17. Super-Junction PIN Photodiode to Integrate Optoelectronic Integrated Circuits in Standard Technologies: A Numerical Study

    NASA Astrophysics Data System (ADS)

    Roig, Jaume; Stefanov, Evgueniy; Morancho, Frédéric

    2007-07-01

The use of super-junction (SJ) techniques in PIN photodiodes is proposed in this letter for the first time, with the objective of assisting the implementation of optoelectronic integrated circuits (OEICs) in complementary metal oxide semiconductor (CMOS), bipolar CMOS (BiCMOS) and bipolar-CMOS-double diffused MOS (BCD) technologies. Its technological viability is also discussed to establish it as a credible alternative to other OEIC approaches. Numerical simulation of realistic SJ-PIN devices, widely used in high-power electronics, demonstrates the possibility of integrating high-performance CMOS-based OEICs in epitaxial layers with doping concentrations above 1×10¹⁵ cm⁻³. The lateral depletion induced at low reverse bias voltage, assisted by the alternating N- and P-doped pillars, allows a high-speed transient response in SJ-PIN detectors for wavelengths between 400 and 800 nm. Moreover, other important parameters such as the responsivity and the dark current are not degraded with respect to conventional PIN (C-PIN) structures.

  18. An integrative variant analysis pipeline for accurate genotype/haplotype inference in population NGS data.

    PubMed

    Wang, Yi; Lu, James; Yu, Jin; Gibbs, Richard A; Yu, Fuli

    2013-05-01

Next-generation sequencing is a powerful approach for discovering genetic variation. Sensitive variant calling and haplotype inference from population sequencing data remain challenging. We describe methods for high-quality discovery, genotyping, and phasing of SNPs for low-coverage (approximately 5×) sequencing of populations, implemented in a pipeline called SNPTools. Our pipeline contains several innovations that specifically address challenges caused by low-coverage population sequencing: (1) effective base depth (EBD), a nonparametric statistic that enables more accurate statistical modeling of sequencing data; (2) variance ratio scoring, a variance-based statistic that discovers polymorphic loci with high sensitivity and specificity; and (3) BAM-specific binomial mixture modeling (BBMM), a clustering algorithm that generates robust genotype likelihoods from heterogeneous sequencing data. Last, we develop an imputation engine that refines raw genotype likelihoods to produce high-quality phased genotypes/haplotypes. Designed for large population studies, SNPTools' input/output (I/O)- and storage-aware design leads to improved computing performance on large sequencing data sets. We apply SNPTools to the International 1000 Genomes Project (1000G) Phase 1 low-coverage data set and obtain genotyping accuracy comparable to that of SNP microarrays. PMID:23296920

  19. Integrative subcellular proteomic analysis allows accurate prediction of human disease-causing genes.

    PubMed

    Zhao, Li; Chen, Yiyun; Bajaj, Amol Onkar; Eblimit, Aiden; Xu, Mingchu; Soens, Zachry T; Wang, Feng; Ge, Zhongqi; Jung, Sung Yun; He, Feng; Li, Yumei; Wensel, Theodore G; Qin, Jun; Chen, Rui

    2016-05-01

    Proteomic profiling on subcellular fractions provides invaluable information regarding both protein abundance and subcellular localization. When integrated with other data sets, it can greatly enhance our ability to predict gene function genome-wide. In this study, we performed a comprehensive proteomic analysis on the light-sensing compartment of photoreceptors called the outer segment (OS). By comparing with the protein profile obtained from the retina tissue depleted of OS, an enrichment score for each protein is calculated to quantify protein subcellular localization, and 84% accuracy is achieved compared with experimental data. By integrating the protein OS enrichment score, the protein abundance, and the retina transcriptome, the probability of a gene playing an essential function in photoreceptor cells is derived with high specificity and sensitivity. As a result, a list of genes that will likely result in human retinal disease when mutated was identified and validated by previous literature and/or animal model studies. Therefore, this new methodology demonstrates the synergy of combining subcellular fractionation proteomics with other omics data sets and is generally applicable to other tissues and diseases. PMID:26912414

  20. PSI: a comprehensive and integrative approach for accurate plant subcellular localization prediction.

    PubMed

    Liu, Lili; Zhang, Zijun; Mei, Qian; Chen, Ming

    2013-01-01

Predicting the subcellular localization of proteins computationally overcomes the major drawbacks of high-throughput localization experiments, which are costly and time-consuming. However, current subcellular localization predictors are limited in scope and accuracy. In particular, most predictors perform well on certain locations or with certain data sets but poorly on others. Here, we present PSI, a novel high-accuracy web server for plant subcellular localization prediction. PSI combines the wisdom of multiple specialized predictors via a joint approach of group decision making and machine learning methods to give an integrated best result. The overall accuracy obtained (up to 93.4%) was higher than that of the best individual predictor (CELLO) by ~10.7%. The precision for each predictable subcellular location (more than 80%) far exceeds that of the individual predictors. PSI can also handle multi-localization proteins. It is expected to be a powerful tool in protein location engineering as well as in plant sciences, while the strategy employed could be applied to other integrative problems. A user-friendly web server, PSI, has been developed for free access at http://bis.zju.edu.cn/psi/. PMID:24194827
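The group-decision step can be sketched as a weighted majority vote over the location labels returned by the individual predictors. The function below is an illustrative assumption, not the PSI server's actual algorithm (which also involves learned machine-learning weights):

```python
from collections import Counter

def ensemble_predict(predictions, weights=None):
    """Weighted majority vote over labels from several predictors.
    predictions: list of location labels; weights: optional per-predictor
    reliability weights (uniform if omitted)."""
    weights = weights or [1.0] * len(predictions)
    score = Counter()
    for label, w in zip(predictions, weights):
        score[label] += w
    # most_common(1) returns the label with the highest total weight
    return score.most_common(1)[0][0]
```

Weighting by each predictor's per-location accuracy is one simple way such an ensemble can beat its best member.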

  1. Accurate and efficient integration for molecular dynamics simulations at constant temperature and pressure.

    PubMed

    Lippert, Ross A; Predescu, Cristian; Ierardi, Douglas J; Mackenzie, Kenneth M; Eastwood, Michael P; Dror, Ron O; Shaw, David E

    2013-10-28

    In molecular dynamics simulations, control over temperature and pressure is typically achieved by augmenting the original system with additional dynamical variables to create a thermostat and a barostat, respectively. These variables generally evolve on timescales much longer than those of particle motion, but typical integrator implementations update the additional variables along with the particle positions and momenta at each time step. We present a framework that replaces the traditional integration procedure with separate barostat, thermostat, and Newtonian particle motion updates, allowing thermostat and barostat updates to be applied infrequently. Such infrequent updates provide a particularly substantial performance advantage for simulations parallelized across many computer processors, because thermostat and barostat updates typically require communication among all processors. Infrequent updates can also improve accuracy by alleviating certain sources of error associated with limited-precision arithmetic. In addition, separating the barostat, thermostat, and particle motion update steps reduces certain truncation errors, bringing the time-average pressure closer to its target value. Finally, this framework, which we have implemented on both general-purpose and special-purpose hardware, reduces software complexity and improves software modularity. PMID:24182003
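The separation of thermostat updates from particle-motion updates can be sketched with a toy velocity-Verlet loop in which a simple velocity-rescaling thermostat fires only every `thermo_interval` steps. This is an illustrative assumption (independent harmonic particles, Berendsen-style rescaling), not the authors' production integrator:

```python
import math
import random

def simulate(n_steps=200, dt=0.005, thermo_interval=20, T_target=1.0):
    """Velocity Verlet for independent unit-mass harmonic particles,
    with a velocity-rescaling thermostat applied only every
    `thermo_interval` steps.  Returns the instantaneous temperature
    (mean squared velocity) after the final step."""
    random.seed(0)
    n = 64
    x = [random.gauss(0.0, 1.0) for _ in range(n)]
    v = [random.gauss(0.0, 1.0) for _ in range(n)]
    f = [-xi for xi in x]                       # harmonic force, k = m = 1
    for step in range(1, n_steps + 1):
        v = [vi + 0.5 * dt * fi for vi, fi in zip(v, f)]
        x = [xi + dt * vi for xi, vi in zip(x, v)]
        f = [-xi for xi in x]
        v = [vi + 0.5 * dt * fi for vi, fi in zip(v, f)]
        if step % thermo_interval == 0:         # infrequent thermostat update
            T_inst = sum(vi * vi for vi in v) / n
            lam = math.sqrt(T_target / T_inst)
            v = [lam * vi for vi in v]
    return sum(vi * vi for vi in v) / n
```

In a parallel code, the body of the `if` block is the only part that would need global communication, which is why applying it infrequently pays off.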

  2. Numerical integration of population models satisfying conservation laws: NSFD methods.

    PubMed

    Mickens, Ronald E

    2007-10-01

Population models arising in ecology, epidemiology and mathematical biology may involve a conservation law, i.e. the total population is constant. In other situations, the total population may approach a constant value asymptotically in time. Since it is rarely the case that the equations of motion can be solved analytically to obtain exact solutions, numerical techniques are needed to provide solutions. However, numerical procedures are only valid if they reproduce the fundamental properties of the differential equations modeling the phenomena of interest. We show that for population models involving a dynamical conservation law, the use of nonstandard finite difference (NSFD) methods allows the construction of discretization schemes that are dynamically consistent (DC) with the original differential equations. The paper briefly discusses the NSFD methodology and the concept of DC, and illustrates their application to specific population models. PMID:22876826
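A concrete NSFD construction can be sketched for the SIS epidemic model dI/dt = βSI − γI with conserved total N = S + I. Discretizing nonlocally, (I_{n+1} − I_n)/φ = βNI_n − βI_nI_{n+1} − γI_{n+1}, keeps I nonnegative and conserves N exactly for any step size. This is a minimal sketch in the spirit of Mickens' NSFD rules, not a scheme taken from the paper:

```python
import math

def nsfd_sis_step(S, I, h, beta, gamma):
    """One NSFD step for the SIS model with conserved total N = S + I.
    Solving the nonlocal discretization for I_{n+1} gives a positivity-
    preserving update; S is recovered from the conservation law."""
    N = S + I
    r = beta * N - gamma
    # Mickens-style denominator function: phi(h) -> h as h -> 0
    phi = (math.exp(r * h) - 1.0) / r if r != 0.0 else h
    I_new = I * (1.0 + phi * beta * N) / (1.0 + phi * (beta * I + gamma))
    return N - I_new, I_new
```

The fixed point I* = N − γ/β of the discrete map coincides with the endemic equilibrium of the continuous model, which is the dynamic consistency property the abstract describes.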

  3. Impact of numerical integration on gas curtain simulations

    SciTech Connect

    Rider, W.; Kamm, J.

    2000-11-01

    In recent years, we have presented a less than glowing experimental comparison of hydrodynamic codes with the gas curtain experiment (e.g., Kamm et al. 1999a). Here, we discuss the manner in which the details of the hydrodynamic integration techniques may conspire to produce poor results. This also includes some progress in improving the results and agreement with experimental results. Because our comparison was conducted on the details of the experimental images (i.e., their detailed structural information), our results do not conflict with previously published results of good agreement with Richtmyer-Meshkov instabilities based on the integral scale of mixing. New experimental and analysis techniques are also discussed.

  4. Accurate Multiview Stereo Reconstruction with Fast Visibility Integration and Tight Disparity Bounding

    NASA Astrophysics Data System (ADS)

    Toldo, R.; Fantini, F.; Giona, L.; Fantoni, S.; Fusiello, A.

    2013-02-01

A novel multi-view stereo reconstruction method is presented. The algorithm is focused on accuracy and is highly engineered, with some parts taking advantage of the graphics processing unit. In addition, it is seamlessly integrated with the output of a structure-and-motion pipeline. In the first part of the algorithm a depth map is extracted independently for each image. The final depth map is generated from the depth hypotheses using a Markov random field optimization technique over the image grid. An octree data structure accumulates the votes coming from each depth map. A novel procedure to remove rogue points is proposed that takes into account the visibility information and the matching score of each point. Finally, a texture map is built by making careful use of both the visibility and view-angle information. Several results show the effectiveness of the algorithm under different working scenarios.

  5. Numerical implications of stabilization by the use of integrals

    NASA Technical Reports Server (NTRS)

    Beaudet, P. R.

    1975-01-01

    Liapunov or energy restraint methods for dynamic stabilization in two body motion perturbation problems are considered. Results of computerized orbital stabilization estimates show that the application of energy restraint prevents the occurrence of consistent timing errors in the stepwise integration of equations of motion for a nearly circular orbit.
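The energy-restraint idea can be sketched for planar Kepler motion: take an ordinary integration step, then rescale the velocity so the state is projected back onto the prescribed energy surface, which suppresses the secular timing drift of the raw integrator. The scheme below is a hedged illustration of that projection, not Beaudet's exact method:

```python
import math

def energy_stabilized_step(x, y, vx, vy, dt, mu=1.0, E0=-0.5):
    """Symplectic-Euler step for planar Kepler motion (gravitational
    parameter mu), followed by a velocity rescaling that restores the
    energy integral E0 = v^2/2 - mu/r exactly."""
    r3 = (x * x + y * y) ** 1.5
    vx -= dt * mu * x / r3
    vy -= dt * mu * y / r3
    x += dt * vx
    y += dt * vy
    # project back onto the energy surface by rescaling the speed
    r = math.hypot(x, y)
    v2_target = 2.0 * (E0 + mu / r)
    v2 = vx * vx + vy * vy
    if v2_target > 0.0 and v2 > 0.0:
        s = math.sqrt(v2_target / v2)
        vx *= s
        vy *= s
    return x, y, vx, vy
```

After each step the energy equals E0 to machine precision, so any remaining error shows up in the orbital phase rather than as a secular drift in the orbit size.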

  6. Numerical simulation of scattering of acoustic waves by inelastic bodies using hypersingular boundary integral equation

    SciTech Connect

    Daeva, S.G.; Setukha, A.V.

    2015-03-10

A numerical method is proposed for solving the problem of diffraction of acoustic waves by a system of solid and thin objects, based on reducing the problem to a boundary integral equation in which the integral is understood in the sense of the Hadamard finite part. To solve this equation we applied a numerical scheme based on piecewise-constant approximation and collocation. The constructed scheme differs from earlier ones in that approximate analytical expressions are obtained for the coefficients of the resulting system of linear equations by separating out the main part of the kernel of the integral operator. The proposed numerical scheme is tested on the solution of the model problem of diffraction of an acoustic wave by an inelastic sphere.

  7. Advances in numerical solutions to integral equations in liquid state theory

    NASA Astrophysics Data System (ADS)

    Howard, Jesse J.

Solvent effects play a vital role in the accurate description of the free energy profile for solution-phase chemical and structural processes. The inclusion of solvent effects in any meaningful theoretical model, however, has proven to be a formidable task. Generally, methods involving Poisson-Boltzmann (PB) theory and molecular dynamics (MD) simulations are used, but they either fail to accurately describe the solvent effects or require exhaustive computational effort to overcome sampling problems. An alternative to these methods are the integral equations (IEs) of liquid state theory, which have become more widely applicable due to recent advancements in the theory of interaction-site fluids and in the numerical methods used to solve the equations. In this work a new numerical method is developed based on a Newton-type scheme coupled with Picard/MDIIS routines. To extend the range of these numerical methods to large-scale data systems, the size of the Jacobian is reduced using basis functions, and the Newton steps are calculated using a GMRes solver. The method is then applied to calculate solutions to the 3D reference interaction site model (RISM) IEs of statistical mechanics, which are derived from first principles, for a solute model of a pair of parallel graphene plates at various separations in pure water. The 3D IEs are then extended to electrostatic models using an exact treatment of the long-range Coulomb interactions for negatively charged walls and DNA duplexes in aqueous electrolyte solutions, to calculate the density profiles and solution thermodynamics. It is found that the 3D IEs provide a qualitative description of the density distributions of the solvent species when compared to MD results, but at a much reduced computational effort in comparison to MD simulations. The thermodynamics of the solvated systems are also qualitatively reproduced by the IE results. The findings of this work show the IEs to be a valuable tool for the study and prediction of

  8. Integrated numerical methods for hypersonic aircraft cooling systems analysis

    NASA Technical Reports Server (NTRS)

    Petley, Dennis H.; Jones, Stuart C.; Dziedzic, William M.

    1992-01-01

    Numerical methods have been developed for the analysis of hypersonic aircraft cooling systems. A general purpose finite difference thermal analysis code is used to determine areas which must be cooled. Complex cooling networks of series and parallel flow can be analyzed using a finite difference computer program. Both internal fluid flow and heat transfer are analyzed, because increased heat flow causes a decrease in the flow of the coolant. The steady state solution is a successive point iterative method. The transient analysis uses implicit forward-backward differencing. Several examples of the use of the program in studies of hypersonic aircraft and rockets are provided.

  9. EZ-Rhizo: integrated software for the fast and accurate measurement of root system architecture.

    PubMed

    Armengaud, Patrick; Zambaux, Kevin; Hills, Adrian; Sulpice, Ronan; Pattison, Richard J; Blatt, Michael R; Amtmann, Anna

    2009-03-01

    The root system is essential for the growth and development of plants. In addition to anchoring the plant in the ground, it is the site of uptake of water and minerals from the soil. Plant root systems show an astonishing plasticity in their architecture, which allows for optimal exploitation of diverse soil structures and conditions. The signalling pathways that enable plants to sense and respond to changes in soil conditions, in particular nutrient supply, are a topic of intensive research, and root system architecture (RSA) is an important and obvious phenotypic output. At present, the quantitative description of RSA is labour intensive and time consuming, even using the currently available software, and the lack of a fast RSA measuring tool hampers forward and quantitative genetics studies. Here, we describe EZ-Rhizo: a Windows-integrated and semi-automated computer program designed to detect and quantify multiple RSA parameters from plants growing on a solid support medium. The method is non-invasive, enabling the user to follow RSA development over time. We have successfully applied EZ-Rhizo to evaluate natural variation in RSA across 23 Arabidopsis thaliana accessions, and have identified new RSA determinants as a basis for future quantitative trait locus (QTL) analysis. PMID:19000163

  10. iCut: an Integrative Cut Algorithm Enables Accurate Segmentation of Touching Cells

    PubMed Central

    He, Yong; Gong, Hui; Xiong, Benyi; Xu, Xiaofeng; Li, Anan; Jiang, Tao; Sun, Qingtao; Wang, Simin; Luo, Qingming; Chen, Shangbin

    2015-01-01

    Individual cells play essential roles in the biological processes of the brain. The number of neurons changes during both normal development and disease progression. High-resolution imaging has made it possible to directly count cells. However, the automatic and precise segmentation of touching cells continues to be a major challenge for massive and highly complex datasets. Thus, an integrative cut (iCut) algorithm, which combines information regarding spatial location and intervening and concave contours with the established normalized cut, has been developed. iCut involves two key steps: (1) a weighting matrix is first constructed with the abovementioned information regarding the touching cells and (2) a normalized cut algorithm that uses the weighting matrix is implemented to separate the touching cells into isolated cells. This novel algorithm was evaluated using two types of data: the open SIMCEP benchmark dataset and our micro-optical imaging dataset from a Nissl-stained mouse brain. It has achieved a promising recall/precision of 91.2 ± 2.1%/94.1 ± 1.8% and 86.8 ± 4.1%/87.5 ± 5.7%, respectively, for the two datasets. As quantified using the harmonic mean of recall and precision, the accuracy of iCut is higher than that of some state-of-the-art algorithms. The better performance of this fully automated algorithm can benefit studies of brain cytoarchitecture. PMID:26168908

  11. Accurate simulation of two-dimensional optical microcavities with uniquely solvable boundary integral equations and trigonometric Galerkin discretization.

    PubMed

    Boriskina, Svetlana V; Sewell, Phillip; Benson, Trevor M; Nosich, Alexander I

    2004-03-01

    A fast and accurate method is developed to compute the natural frequencies and scattering characteristics of arbitrary-shape two-dimensional dielectric resonators. The problem is formulated in terms of a uniquely solvable set of second-kind boundary integral equations and discretized by the Galerkin method with angular exponents as global test and trial functions. The log-singular term is extracted from one of the kernels, and closed-form expressions are derived for the main parts of all the integral operators. The resulting discrete scheme has a very high convergence rate. The method is used in the simulation of several optical microcavities for modern dense wavelength-division-multiplexed systems. PMID:15005404

  12. AN ACCURATE ORBITAL INTEGRATOR FOR THE RESTRICTED THREE-BODY PROBLEM AS A SPECIAL CASE OF THE DISCRETE-TIME GENERAL THREE-BODY PROBLEM

    SciTech Connect

    Minesaki, Yukitaka

    2013-08-01

    For the restricted three-body problem, we propose an accurate orbital integration scheme that retains all conserved quantities of the two-body problem with two primaries and approximately preserves the Jacobi integral. The scheme is obtained by taking the limit as mass approaches zero in the discrete-time general three-body problem. For a long time interval, the proposed scheme precisely reproduces various periodic orbits that cannot be accurately computed by other generic integrators.

  13. Accurate Prediction of Transposon-Derived piRNAs by Integrating Various Sequential and Physicochemical Features

    PubMed Central

    Luo, Longqiang; Li, Dingfang; Zhang, Wen; Tu, Shikui; Zhu, Xiaopeng; Tian, Gang

    2016-01-01

Background: Piwi-interacting RNAs (piRNAs) are the largest class of small non-coding RNA molecules. Predicting transposon-derived piRNAs can enrich the study of small ncRNAs as well as help to further understand the generation mechanism of gametes. Methods: In this paper, we attempt to differentiate transposon-derived piRNAs from non-piRNAs based on their sequential and physicochemical features by using machine learning methods. We explore six sequence-derived features, i.e. the spectrum profile, mismatch profile, subsequence profile, position-specific scoring matrix, pseudo dinucleotide composition and local structure-sequence triplet elements, and systematically evaluate their performance for transposon-derived piRNA prediction. Finally, we consider two approaches, direct combination and ensemble learning, to integrate useful features and achieve high-accuracy prediction models. Results: We construct three datasets covering three species, Human, Mouse and Drosophila, and evaluate the performance of the prediction models by 10-fold cross validation. In the computational experiments, direct combination models achieve AUC of 0.917, 0.922 and 0.992 on Human, Mouse and Drosophila, respectively; ensemble learning models achieve AUC of 0.922, 0.926 and 0.994 on the three datasets. Conclusions: Compared with other state-of-the-art methods, our methods can lead to better performance. In conclusion, the proposed methods are promising for transposon-derived piRNA prediction. The source codes and datasets are available in S1 File. PMID:27074043
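The first of the sequence-derived features, the spectrum profile, can be sketched as a normalized k-mer count vector over all 4^k DNA k-mers. This is an illustrative implementation of the general feature family, not the paper's exact feature code:

```python
from itertools import product

def spectrum_profile(seq, k=2):
    """Normalized k-mer frequency vector for a DNA sequence: counts of
    all 4**k k-mers in lexicographic order, divided by the number of
    k-mer positions.  k-mers containing non-ACGT characters are skipped."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    vec = [0.0] * len(kmers)
    n = len(seq) - k + 1
    for i in range(n):
        km = seq[i:i + k]
        if km in index:
            vec[index[km]] += 1.0
    return [v / n for v in vec] if n > 0 else vec
```

Such fixed-length vectors can be concatenated with the other feature profiles and fed directly to a standard classifier.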

  14. Integrating Data from Several Remotely Sensed Platforms to Accurately Map Wetlands

    NASA Astrophysics Data System (ADS)

    Corcoran, Jennifer Marie

Traditional wetland mapping methods are in need of modernization. They typically depend solely on a few dates of optical imagery and on cloud-free data acquisition, and therefore surface features of interest are often obstructed, inaccurately mapped, or not present during data acquisition. Despite the limitation to cloud-free daylight acquisition, multi-temporal multi-spectral optical data are still highly valuable for mapping wetlands and classifying wetland type. Radar sensors, however, are insensitive to atmospheric and low-light conditions, and thus can offer more consistent multi-temporal image acquisition. Unique characteristics of surface scattering mechanisms, such as the saturated extent of wetlands, can be found by utilizing both the intensity and phase information from multiple polarizations and multiple wavelengths of radar data. In addition, information from lidar can reveal important details about the variability and structure of surface features and the potential for water to collect in certain areas. The research presented in this dissertation shows important developments in wetland mapping by integrating several platforms of remotely sensed data, including two sources of radar data (fully polarimetric RADARSAT-2 data (C-band) and dual-pol PALSAR data (L-band)), two sources of optical data (Landsat TM imagery and aerial orthophotos), and lidar point cloud data with intensity and derived topographic indices. Decision tree classification using the random forest model is utilized to take advantage of the unique differences in these data. Assessments of outputs from random forest are used to identify the most significant data sources for two levels of land cover classification: discriminating between water, wetland and upland areas, and sub-classifying wetland type. It is expected that results from this research will deliver a valuable, affordable, and practical wetland probability tool to aid

  15. Many Is Better Than One: An Integration of Multiple Simple Strategies for Accurate Lung Segmentation in CT Images

    PubMed Central

    Zhao, Minghua; Liu, Yonghong; Feng, Yaning; Zhang, Ming; He, Lifeng; Suzuki, Kenji

    2016-01-01

Accurate lung segmentation is an essential step in developing a computer-aided lung disease diagnosis system. However, because of the high variability of computerized tomography (CT) images, it remains a difficult task to accurately segment lung tissue in CT slices using a single simple strategy. Motivated by this, a novel CT lung segmentation method based on the integration of multiple strategies is proposed in this paper. First, to suppress noise, the input CT slice is smoothed using the guided filter. The smoothed slice is then transformed into a binary image using an optimized threshold, and a region-growing strategy is employed to extract thorax regions. Lung regions are next segmented from the thorax regions using a seed-based random walk algorithm. The segmented lung contour is then smoothed and corrected with a curvature-based correction method on each axial slice. Finally, with the lung masks, the lung region is automatically segmented from a CT slice. The proposed method was validated on a CT database consisting of 23 scans comprising 883 2D slices (about 38 slices per scan) by comparing it to a commonly used lung segmentation method. Experimental results show that the proposed method accurately segments lung regions in CT slices.

  16. An Integrated Numerical Hydrodynamic Shallow Flow-Solute Transport Model for Urban Area

    NASA Astrophysics Data System (ADS)

    Alias, N. A.; Mohd Sidek, L.

    2016-03-01

Rapidly changing land profiles in some urban areas of Malaysia have led to increasing flood risk. Extensive development in densely populated areas and urbanization worsen the flood scenario. An early warning system is very important, and a popular method is to numerically simulate river and flood flows. Many two-dimensional (2D) flood models predict the flood level, but in some circumstances it is still difficult to resolve a river reach in a 2D manner. A systematic early warning system requires a precise prediction of flow depth; hence a reliable one-dimensional (1D) model that provides an accurate description of the flow is essential. The research also aims to resolve related issues, such as the fate of pollutants in a river reach, by developing an integrated hydrodynamic shallow flow-solute transport model. Presented in this paper are results on flow prediction for Sungai Penchala and the convection-diffusion of solute transport simulated by the developed model.
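The solute-transport half of such a model reduces, in its simplest form, to the 1-D convection-diffusion equation ∂c/∂t + u ∂c/∂x = D ∂²c/∂x². The explicit upwind/central scheme below is a minimal sketch of that component under assumed constant velocity and closed, zero-valued ends, not the developed model itself:

```python
def advect_diffuse(c, u, D, dx, dt, n_steps):
    """Explicit 1-D convection-diffusion update for a solute concentration
    profile c: first-order upwind advection (u > 0 assumed) plus
    central-difference diffusion.  Stability requires u*dt/dx <= 1 and
    2*D*dt/dx**2 <= 1; boundary cells are held at zero."""
    c = list(c)
    for _ in range(n_steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            adv = -u * (c[i] - c[i - 1]) / dx
            dif = D * (c[i + 1] - 2.0 * c[i] + c[i - 1]) / dx ** 2
            new[i] = c[i] + dt * (adv + dif)
        c = new
    return c
```

With the CFL limits satisfied, the scheme is positivity preserving and conserves mass while the solute pulse stays away from the boundaries.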

  17. Theoretical study of the partial derivatives produced by numerical integration of satellite orbits.

    NASA Astrophysics Data System (ADS)

    Hadjifotinou, K. G.; Ichtiaroglou, S.

    1997-06-01

    For the two-body system Saturn-Mimas and the theoretical three-body non-resonant system Saturn-Mimas-Tethys we present a theoretical analysis of the behaviour of the partial derivatives of the satellites' coordinates with respect to the parameters of the system, namely the satellites' initial conditions and their mass ratios over Saturn. Using Floquet theory for the stability of periodic orbits, we prove that all the partial derivatives have amplitudes that increase linearly with time. Their motion is a combination of periodic motions whose periods can also be accurately predicted by the theory. This theoretical model can be used to check the accuracy of the different numerical integration methods used on satellite systems for the purpose of fitting results to observations or analytical theories. On this basis, in the last part of the paper we extend the investigation of Hadjifotinou & Harper (1995A&A...303..940H) on the stability and efficiency of the 10^th^-order Gauss-Jackson backward difference and the Runge-Kutta-Nyström RKN12(10)17M methods by applying them to the above-mentioned three-body system.

  18. iPE-MMR: An integrated approach to accurately assign monoisotopic precursor masses to tandem mass spectrometric data

    PubMed Central

    Jung, Hee-Jung; Purvine, Samuel O.; Kim, Hokeun; Petyuk, Vladislav A.; Hyung, Seok-Won; Monroe, Matthew E.; Mun, Dong-Gi; Kim, Kyong-Chul; Park, Jong-Moon; Kim, Su-Jin; Tolic, Nikola; Slysz, Gordon W.; Moore, Ronald J.; Zhao, Rui; Adkins, Joshua N.; Anderson, Gordon A.; Lee, Hookeun; Camp, David G.; Yu, Myeong-Hee; Smith, Richard D.; Lee, Sang-Won

    2010-01-01

    Accurate assignment of monoisotopic precursor masses to tandem mass spectrometric (MS/MS) data is a fundamental and critically important step for successful peptide identifications in mass spectrometry-based proteomics. Here we describe an integrated approach that combines three previously reported methods of treating MS/MS data for precursor mass refinement. This combined method, “integrated Post-Experiment Monoisotopic Mass Refinement” (iPE-MMR), integrates three steps: 1) generation of refined MS/MS data by DeconMSn; 2) additional refinement of the resultant MS/MS data by a modified version of PE-MMR; and 3) elimination of systematic errors of precursor masses using DtaRefinery. iPE-MMR is the first method that utilizes all MS information from multiple MS scans of a precursor ion, including its multiple charge states within an MS scan, to determine the precursor mass. By combining these methods, iPE-MMR increases sensitivity in peptide identification and provides increased accuracy when applied to complex high-throughput proteomics data. PMID:20863060

  19. The gated integration technique for the accurate measurement of the autocorrelation function of speckle intensities scattered from random phase screens

    NASA Astrophysics Data System (ADS)

    Zhang, Ningyu; Cheng, Chuanfu; Teng, Shuyun; Chen, Xiaoyi; Xu, Zhizhan

    2007-09-01

    A new approach based on the gated integration technique is proposed for the accurate measurement of the autocorrelation function of speckle intensities scattered from a random phase screen. The Boxcar integrator used in this technique integrates the photoelectric signal while its sampling gate is open, and it repeats the sampling a preset number of times, m. The averaged analog output of the m samplings enhances the signal-to-noise ratio by √{m}, because the repeated sampling and averaging stabilize the useful speckle signal, while the randomly varying photoelectric noise is suppressed by a factor of 1/√{m}. In the experiment, we use an analog-to-digital converter module to synchronize all the actions, such as the stepped movement of the phase screen, the repeated sampling, and the readout of the averaged output of the Boxcar. The experimental results show that speckle signals are better recovered from contaminated signals, and the autocorrelation function with its secondary maximum is obtained, indicating that the accuracy of the measurement of the autocorrelation function is greatly improved by the gated integration technique.
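
    The √m noise suppression described above can be checked with a short simulation (a sketch, not the authors' acquisition code; the signal level, noise level, and the choices m = 64 and 4000 trials are arbitrary illustrative values):

```python
import math
import random
import statistics

def noisy_sample(signal, sigma, rng):
    """One photoelectric reading: a fixed speckle signal plus Gaussian noise."""
    return signal + rng.gauss(0.0, sigma)

def gated_average(signal, sigma, m, rng):
    """Average of m repeated samplings, as the gated Boxcar integrator does."""
    return sum(noisy_sample(signal, sigma, rng) for _ in range(m)) / m

rng = random.Random(42)
signal, sigma, m, trials = 1.0, 0.5, 64, 4000

singles = [noisy_sample(signal, sigma, rng) for _ in range(trials)]
averaged = [gated_average(signal, sigma, m, rng) for _ in range(trials)]

# Noise amplitude of the averaged output should be smaller by about sqrt(m).
ratio = statistics.stdev(singles) / statistics.stdev(averaged)
print(f"noise reduction factor: {ratio:.2f} (sqrt(m) = {math.sqrt(m):.2f})")
```

    With the seed fixed, the measured reduction factor comes out within a few percent of √64 = 8.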

  20. An efficient step-size control method in numerical integration for astrodynamical equations

    NASA Astrophysics Data System (ADS)

    Liu, C. Z.; Cui, D. X.

    2002-11-01

    Using the curvature of the integral curve, a step-size control method for numerical integration is introduced in this paper. The method proves to be efficient in the sense that it saves computation time and improves the accuracy of the numerical integration.
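
    The abstract does not reproduce the authors' control law; a minimal sketch of the idea, shrinking the step wherever the integral curve bends sharply, might look like the following (the planar-curve curvature formula is standard; the base step, gain, and RK4 inner stepper are assumptions, not the paper's scheme):

```python
import math

def curvature(f, t, y, eps=1e-6):
    """Curvature of the integral curve (t, y(t)): |y''| / (1 + y'^2)^1.5.
    y'' is estimated by differencing f along the flow direction (total derivative)."""
    yp = f(t, y)
    yp2 = (f(t + eps, y + eps * yp) - yp) / eps
    return abs(yp2) / (1.0 + yp * yp) ** 1.5

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def integrate(f, t0, y0, t_end, h_max=0.2, gain=5.0):
    """Curvature-controlled stepping: h shrinks where the curve bends sharply."""
    t, y, steps = t0, y0, 0
    while t < t_end - 1e-12:
        h = min(h_max / (1.0 + gain * curvature(f, t, y)), t_end - t)
        y = rk4_step(f, t, y, h)
        t += h
        steps += 1
    return y, steps

f = lambda t, y: math.cos(t)          # exact solution: y = sin(t)
y_end, steps = integrate(f, 0.0, 0.0, math.pi)
print(y_end, steps)                   # y(pi) should be close to sin(pi) = 0
```

    Near t = π/2, where the sine curve bends most, the controller cuts the step to roughly h_max/6; on the flat portions it runs at the full base step.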

  1. Integrated numeric and symbolic signal processing using a heterogeneous design environment

    NASA Astrophysics Data System (ADS)

    Mani, Ramamurthy; Nawab, S. Hamid; Winograd, Joseph M.; Evans, Brian L.

    1996-10-01

    We present a solution to a complex multi-tone transient detection problem to illustrate the integrated use of symbolic and numeric processing techniques which are supported by well-established underlying models. Examples of such models include synchronous dataflow for numeric processing and the blackboard paradigm for symbolic heuristic search. Our transient detection solution serves to emphasize the importance of developing system design methods and tools which can support the integrated use of well-established symbolic and numerical models of computation. Recently, we incorporated a blackboard-based model of computation underlying the Integrated Processing and Understanding of Signals (IPUS) paradigm into a system-level design environment for numeric processing called Ptolemy. Using the IPUS/Ptolemy environment, we are implementing our solution to the multi-tone transient detection problem.

  2. Numerical solution of a class of integral equations arising in two-dimensional aerodynamics

    NASA Technical Reports Server (NTRS)

    Fromme, J.; Golberg, M. A.

    1978-01-01

    We consider the numerical solution of a class of integral equations arising in the determination of the compressible flow about a thin airfoil in a ventilated wind tunnel. The integral equations are of the first kind with kernels having a Cauchy singularity. Using appropriately chosen Hilbert spaces, it is shown that the kernel gives rise to a mapping which is the sum of a unitary operator and a compact operator. This allows the problem to be studied in terms of an equivalent integral equation of the second kind. A convergent numerical algorithm for its solution is derived by using Galerkin's method. It is shown that this algorithm is numerically equivalent to Bland's collocation method, which is then used as the method of computation. Extensive numerical calculations are presented establishing the validity of the theory.

  3. Path-Integral Renormalization Group Method for Numerical Study of Strongly Correlated Electron Systems

    NASA Astrophysics Data System (ADS)

    Imada, Masatoshi; Kashima, Tsuyoshi

    2000-09-01

    A numerical algorithm for studying strongly correlated electron systems is proposed. The ground-state wavefunction is projected out after a numerical renormalization procedure in the path-integral formalism. The wavefunction is expressed as an optimized linear combination of retained states in the truncated Hilbert space with a numerically chosen basis. This algorithm does not suffer from the negative sign problem and can be applied to any type of Hamiltonian in any dimension. The efficiency is tested on examples of the Hubbard model where the basis of Slater determinants is numerically optimized. We show results on the fast convergence and accuracy achieved with a small number of retained states.

  4. A novel, integrated PET-guided MRS technique resulting in more accurate initial diagnosis of high-grade glioma.

    PubMed

    Kim, Ellen S; Satter, Martin; Reed, Marilyn; Fadell, Ronald; Kardan, Arash

    2016-06-01

    Glioblastoma multiforme (GBM) is the most common and lethal malignant glioma in adults. Currently, the modality of choice for diagnosing brain tumor is high-resolution magnetic resonance imaging (MRI) with contrast, which provides anatomic detail and localization. Studies have demonstrated, however, that MRI may have limited utility in delineating the full tumor extent precisely. Studies suggest that MR spectroscopy (MRS) can also be used to distinguish high-grade from low-grade gliomas. However, due to operator dependent variables and the heterogeneous nature of gliomas, the potential for error in diagnostic accuracy with MRS is a concern. Positron emission tomography (PET) imaging with (11)C-methionine (MET) and (18)F-fluorodeoxyglucose (FDG) has been shown to add additional information with respect to tumor grade, extent, and prognosis based on the premise of biochemical changes preceding anatomic changes. Combined PET/MRS is a technique that integrates information from PET in guiding the location for the most accurate metabolic characterization of a lesion via MRS. We describe a case of glioblastoma multiforme in which MRS was initially non-diagnostic for malignancy, but when MRS was repeated with PET guidance, demonstrated elevated choline/N-acetylaspartate (Cho/NAA) ratio in the right parietal mass consistent with a high-grade malignancy. Stereotactic biopsy, followed by PET image-guided resection, confirmed the diagnosis of grade IV GBM. To our knowledge, this is the first reported case of an integrated PET/MRS technique for the voxel placement of MRS. Our findings suggest that integrated PET/MRS may potentially improve diagnostic accuracy in high-grade gliomas. PMID:27122050

  5. A comparison of the efficiency of numerical methods for integrating chemical kinetic rate equations

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    1984-01-01

    The efficiency of several algorithms used for the numerical integration of stiff ordinary differential equations was compared. The methods examined included two general-purpose codes, EPISODE and LSODE, and three codes (CHEMEQ, CREK1D, and GCKP84) developed specifically to integrate chemical kinetic rate equations. The codes were applied to two test problems drawn from combustion kinetics. The comparisons show that LSODE is the fastest code available for the integration of combustion kinetic rate equations. It is shown that an iterative solution of the algebraic energy conservation equation to compute the temperature can be more efficient than evaluating the temperature by integrating its time-derivative.
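
    The codes compared in this report are specialized stiff integrators; why stiffness dictates the choice of method can be seen in a minimal sketch contrasting explicit and implicit Euler on a linear stiff test problem (the test equation and step size are illustrative choices, not from the report):

```python
import math

# Stiff test problem: y' = -1000*(y - cos(t)) - sin(t), exact solution y = cos(t).
LAM = 1000.0

def f(t, y):
    return -LAM * (y - math.cos(t)) - math.sin(t)

def explicit_euler(h, t_end):
    t, y = 0.0, 1.0
    while t < t_end - 1e-12:
        y = y + h * f(t, y)
        t += h
    return y

def implicit_euler(h, t_end):
    # Backward Euler; the problem is linear in y, so the implicit equation
    # y1 = y + h*f(t1, y1) can be solved in closed form at each step.
    t, y = 0.0, 1.0
    while t < t_end - 1e-12:
        t1 = t + h
        y = (y + h * (LAM * math.cos(t1) - math.sin(t1))) / (1.0 + h * LAM)
        t = t1
    return y

h, t_end = 0.01, 1.0   # h*|lambda| = 10: far outside explicit Euler's stability region
y_exp = explicit_euler(h, t_end)
y_imp = implicit_euler(h, t_end)
print(abs(y_exp - math.cos(t_end)), abs(y_imp - math.cos(t_end)))
```

    The explicit method amplifies its error by a factor of |1 - h·1000| = 9 per step and diverges catastrophically, while backward Euler at the same step size tracks cos(t) closely; stiff solvers such as LSODE automate this implicit treatment.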

  6. Controlled time integration for the numerical simulation of meteor radar reflections

    NASA Astrophysics Data System (ADS)

    Räbinä, Jukka; Mönkölä, Sanna; Rossi, Tuomo; Markkanen, Johannes; Gritsevich, Maria; Muinonen, Karri

    2016-07-01

    We model meteoroids entering the Earth's atmosphere as objects surrounded by non-magnetized plasma, and consider efficient numerical simulation of radar reflections from meteors in the time domain. Instead of the widely used finite-difference time-domain method (FDTD), we use more generalized finite differences by applying the discrete exterior calculus (DEC) and non-uniform leapfrog-style time discretization. The computational domain is represented by convex polyhedral elements. The convergence of the time integration is accelerated by the exact controllability method. The numerical experiments show that our code is efficiently parallelized. The DEC approach is compared to the volume integral equation (VIE) method by numerical experiments. The result is that both methods are competitive in modelling non-magnetized plasma scattering. To demonstrate the simulation capabilities of the DEC approach, we present numerical experiments of radar reflections and vary parameters over a wide range.

  7. Orbit determination based on meteor observations using numerical integration of equations of motion

    NASA Astrophysics Data System (ADS)

    Dmitriev, Vasily; Lupovka, Valery; Gritsevich, Maria

    2015-11-01

    Recently, there has been a worldwide proliferation of instruments and networks dedicated to observing meteors, including airborne and future space-based monitoring systems, and a corresponding rapid rise in the high-quality data accumulating annually. In this paper, we present a method embodied in the open-source software program "Meteor Toolkit", which can effectively and accurately process these data in an automated mode and discover the pre-impact orbit and possibly the origin or parent body of a meteoroid or asteroid. The required input parameters are the topocentric pre-atmospheric velocity vector and the coordinates of the atmospheric entry point of the meteoroid, i.e. the beginning point of the visual path of a meteor, in an Earth-centered, Earth-fixed coordinate system, the International Terrestrial Reference Frame (ITRF). Our method is based on a strict coordinate transformation from the ITRF to an inertial reference frame and on numerical integration of the equations of motion for a perturbed two-body problem. Basic accelerations perturbing a meteoroid's orbit and their influence on the orbital elements are also studied and demonstrated. Our method is then compared with several published studies that utilized variations of a traditional analytical technique, the zenith attraction method, which corrects for the direction of the meteor's trajectory and its apparent velocity due to Earth's gravity. We then demonstrate the proposed technique on new observational data obtained from the Finnish Fireball Network (FFN) as well as on simulated data. In addition, we propose a method of analysis of error propagation, based on the general rule of covariance transformation.

  8. Numerical integration of the stochastic Landau-Lifshitz-Gilbert equation in generic time-discretization schemes.

    PubMed

    Romá, Federico; Cugliandolo, Leticia F; Lozano, Gustavo S

    2014-08-01

    We introduce a numerical method to integrate the stochastic Landau-Lifshitz-Gilbert equation in spherical coordinates for generic discretization schemes. This method conserves the magnetization modulus and ensures the approach to equilibrium under the expected conditions. We test the algorithm on a benchmark problem: the dynamics of a uniformly magnetized ellipsoid. We investigate the influence of various parameters, and in particular, we analyze the efficiency of the numerical integration, in terms of the number of steps needed to reach a chosen long time with a given accuracy. PMID:25215839

  9. Numerical evaluation of electron repulsion integrals for pseudoatomic orbitals and their derivatives.

    PubMed

    Toyoda, Masayuki; Ozaki, Taisuke

    2009-03-28

    A numerical method to calculate the four-center electron-repulsion integrals for strictly localized pseudoatomic orbital basis sets has been developed. Compared to the conventional Gaussian expansion method, this method has an advantage in the ease of combination with O(N) density functional calculations. Additional mathematical derivations are also presented including the analytic derivatives of the integrals with respect to atomic positions and spatial damping of the Coulomb interaction due to the screening effect. In the numerical test for a simple molecule, the convergence up to 10(-5) hartree in energy is successfully obtained with a feasible cost of computation. PMID:19334815

  10. Numerical solutions to ill-posed and well-posed impedance boundary condition integral equations

    NASA Astrophysics Data System (ADS)

    Rogers, J. R.

    1983-11-01

    Exterior scattering from a three-dimensional impedance body can be formulated in terms of various integral equations derived from the Leontovich impedance boundary condition (IBC). The electric and magnetic field integral equations are ill-posed because they theoretically admit spurious solutions at the frequencies of interior perfect conductor cavity resonances. A combined field formulation is well-posed because it does not allow the spurious solutions. This report outlines the derivation of IBC integral equations and describes a procedure for constructing moment-method solutions for bodies of revolution. Numerical results for scattering from impedance spheres are presented which contrast the stability and accuracy of solutions to the ill-posed equations with those of the well-posed equation. The results show that numerical solutions for exterior scattering to the electric and magnetic field integral equations can be severely contaminated by spurious resonant solutions regardless of whether the surface impedance of the body is lossy or lossless.

  11. Accurate path integral molecular dynamics simulation of ab-initio water at near-zero added cost

    NASA Astrophysics Data System (ADS)

    Elton, Daniel; Fritz, Michelle; Soler, José; Fernandez-Serra, Marivi

    It is now established that nuclear quantum motion plays an important role in determining water's structure and dynamics. These effects are important to consider when evaluating DFT functionals and attempting to develop better ones for water. The standard way of treating nuclear quantum effects, path integral molecular dynamics (PIMD), multiplies the number of energy/force calculations by the number of beads, which is typically 32. Here we introduce a method whereby PIMD can be incorporated into a DFT molecular dynamics simulation at virtually zero cost. The method is based on the cluster (many-body) expansion of the energy. We first subtract the DFT monomer energies, using a custom DFT-based monomer potential energy surface. The evolution of the PIMD beads is then performed using only the more accurate Partridge-Schwenke monomer energy surface. The DFT calculations are done using the centroid positions. Various bead thermostats can be employed to speed up the sampling of the quantum ensemble. The method bears some resemblance to multiple-timestep algorithms and other schemes used to speed up PIMD with classical force fields. We show that our method correctly captures some of the key effects of nuclear quantum motion on both the structure and dynamics of water. We acknowledge support from DOE Award No. DE-FG02-09ER16052 (D.E.) and DOE Early Career Award No. DE-SC0003871 (M.V.F.S.).

  12. Abstract Applets: A Method for Integrating Numerical Problem Solving into the Undergraduate Physics Curriculum

    SciTech Connect

    Peskin, Michael E

    2003-02-13

    In upper-division undergraduate physics courses, it is desirable to give numerical problem-solving exercises integrated naturally into weekly problem sets. I explain a method for doing this that makes use of the built-in class structure of the Java programming language. I also supply a Java class library that can assist instructors in writing programs of this type.

  13. NUMERICAL APPROXIMATION OF SEMI-INTEGRALS AND SEMIDERIVATIVES BY PRODUCT QUADRATURE RULES

    EPA Science Inventory

    This paper is concerned with the numerical calculation of the semi-integral and semiderivative of a function f, whose values f (xj) are known on a discrete set of abscissas 0 = x(1) < x(2) < ... < x(n). A family of product quadrature rules is developed to approximate the semi-int...

  14. Some numerical methods for integrating systems of first-order ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Clark, N. W.

    1969-01-01

    This report on numerical methods of integration includes the extrapolation methods of Bulirsch-Stoer and Neville. A comparison is made with the Runge-Kutta and Adams-Moulton methods, and circumstances are discussed under which the extrapolation method may be preferred.
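
    The extrapolation idea behind the Bulirsch-Stoer approach can be sketched as follows: advance with Gragg's modified midpoint rule for several substep counts, then use Neville's algorithm to extrapolate the results to zero step size (a simplified single-step sketch, not the report's implementation; the substep sequence 2, 4, 6, 8 is an arbitrary choice):

```python
import math

def modified_midpoint(f, t0, y0, H, n):
    """Gragg's modified midpoint rule over [t0, t0+H] with n substeps.
    Its error expands in even powers of h = H/n, which makes it a good
    base method for Richardson/Neville extrapolation."""
    h = H / n
    z0, z1 = y0, y0 + h * f(t0, y0)
    for m in range(1, n):
        z0, z1 = z1, z0 + 2 * h * f(t0 + m * h, z1)
    return 0.5 * (z0 + z1 + h * f(t0 + H, z1))

def neville_at_zero(xs, ys):
    """Neville's algorithm: value at x = 0 of the interpolating polynomial."""
    ys = list(ys)
    for k in range(1, len(xs)):
        for i in range(len(xs) - k):
            ys[i] = ys[i + 1] + (ys[i] - ys[i + 1]) * xs[i + k] / (xs[i + k] - xs[i])
    return ys[0]

def extrapolated_step(f, t0, y0, H, seq=(2, 4, 6, 8)):
    """One extrapolated step: drive the midpoint results to h -> 0."""
    xs = [(H / n) ** 2 for n in seq]   # the error is a series in h^2
    ys = [modified_midpoint(f, t0, y0, H, n) for n in seq]
    return neville_at_zero(xs, ys)

f = lambda t, y: y                     # y' = y, y(0) = 1  ->  y(1) = e
y1 = extrapolated_step(f, 0.0, 1.0, 1.0)
print(y1, abs(y1 - math.e))
```

    Four low-accuracy midpoint passes, the coarsest using only two substeps, combine into a result accurate to several more digits than any individual pass — the effect that can make extrapolation preferable to a fixed-order Runge-Kutta formula.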

  15. On the stability of numerical integration routines for ordinary differential equations.

    NASA Technical Reports Server (NTRS)

    Glover, K.; Willems, J. C.

    1973-01-01

    Numerical integration methods for the solution of initial value problems for ordinary vector differential equations may be modelled as discrete time feedback systems. The stability criteria discovered in modern control theory are applied to these systems and criteria involving the routine, the step size and the differential equation are derived. Linear multistep, Runge-Kutta, and predictor-corrector methods are all investigated.
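
    The feedback-system view of a routine's stability is easiest to see on the scalar test equation y' = λy (a textbook illustration of the criteria the paper derives, not code from the paper):

```python
def euler_amplification(h, lam):
    """Euler's method on the test equation y' = lam*y acts as the discrete
    feedback map y_{n+1} = (1 + h*lam) * y_n; the closed loop is stable
    iff the amplification factor has modulus <= 1."""
    return 1.0 + h * lam

def is_stable(h, lam):
    return abs(euler_amplification(h, lam)) <= 1.0

lam = -100.0                  # rapidly decaying mode of the differential equation
print(is_stable(0.001, lam))  # h inside the stability interval (0, 2/|lam|]
print(is_stable(0.05, lam))   # h outside: the routine diverges even though y decays
```

    The criterion couples the routine, the step size, and the differential equation exactly as the abstract describes: stability is a joint property of all three, not of the method alone.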

  16. Accurate multiscale finite element method for numerical simulation of two-phase flow in fractured media using discrete-fracture model

    NASA Astrophysics Data System (ADS)

    Zhang, Na; Yao, Jun; Huang, Zhaoqin; Wang, Yueying

    2013-06-01

    Numerical simulation in naturally fractured media is challenging because of the coexistence of porous media and fractures on multiple scales that need to be coupled. We present a new approach to reservoir simulation that gives accurate resolution of both large-scale and fine-scale flow patterns. Multiscale methods are suitable for this type of modeling because they capture the large-scale behavior of the solution without resolving all the small-scale features. Dual-porosity models, in view of their strength and simplicity, are mainly used for sugar-cube representations of fractured media. In such a representation, the transfer function between the fracture and the matrix block can be readily calculated for water-wet media. For a mixed-wet system, the evaluation of the transfer function becomes complicated due to the effect of gravity. In this work, we use a multiscale finite element method (MsFEM) for two-phase flow in fractured media using the discrete-fracture model. By combining MsFEM with the discrete-fracture model, we aim towards a numerical scheme that facilitates fractured reservoir simulation without upscaling. MsFEM uses a standard Darcy model to approximate the pressure and saturation on a coarse grid, whereas fine-scale effects are captured through basis functions constructed by solving local flow problems using the discrete-fracture model. The accuracy and robustness of MsFEM are shown through several examples. In the first example, we consider several small fractures in a matrix and compare with results obtained by the standard finite element method. Then, we use the MsFEM on more complex models. The results indicate that the MsFEM is a promising path toward direct simulation of highly resolved geomodels.

  17. The Fourier transform method and the SD-bar approach for the analytical and numerical treatment of multicenter overlap-like quantum similarity integrals

    SciTech Connect

    Safouhi, Hassan . E-mail: hassan.safouhi@ualberta.ca; Berlu, Lilian

    2006-07-20

    Molecular overlap-like quantum similarity measurements imply the evaluation of overlap integrals of two molecular electronic densities related by the Dirac delta function. When the electronic densities are expanded over atomic orbitals using the usual LCAO-MO approach (linear combination of atomic orbitals), overlap-like quantum similarity integrals can be expressed in terms of four-center overlap integrals. It is shown that by introducing the Fourier transform of the Dirac delta function into the integrals and using the Fourier transform approach combined with the so-called B functions, one can obtain analytic expressions for the integrals under consideration. These analytic expressions involve highly oscillatory semi-infinite spherical Bessel functions, which are the principal source of severe numerical and computational difficulties. In this work, we present a highly efficient algorithm for fast and accurate numerical evaluation of these multicenter overlap-like quantum similarity integrals over Slater-type functions. This algorithm is based on the SD-bar approach due to Safouhi. Recurrence formulae are used for better control of the degree of accuracy and for better stability of the algorithm. The numerical results section shows the efficiency of our algorithm compared with the alternatives: the one-center two-range expansion method, which leads to very complicated analytic expressions, the epsilon algorithm, and the nonlinear D-bar transformation.

  18. On numerical integration with high-order quadratures: with application to the Rayleigh-Sommerfeld integral

    NASA Astrophysics Data System (ADS)

    Evans, W. A. B.; Torre, A.

    2012-11-01

    The paper focusses on the advantages of using high-order Gauss-Legendre quadratures for the precise evaluation of integrals with both smooth and rapidly changing integrands. Aspects of their precision are analysed in the light of Gauss' error formula. Some "test examples" are considered and evaluated in multiple precision to ≈ 200 significant decimal digits with David Bailey's multiprecision package to eliminate truncation/rounding errors. The increase of precision on doubling the number of subintervals is analysed, the relevant quadrature attribute being the precision increment. In order to exemplify the advantages that high-order quadrature afford, the technique is then used to evaluate several plots of the Rayleigh-Sommerfeld diffraction integral for axi-symmetric source fields defined on a planar aperture. A comparison of the high-order quadrature method against various FFT-based methods is finally given.
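
    A self-contained sketch of an n-point Gauss-Legendre rule (the standard Newton iteration on the Legendre three-term recurrence, not the authors' multiprecision code) shows how quickly such quadratures converge for a smooth integrand:

```python
import math

def legendre_nodes_weights(n):
    """Nodes and weights of the n-point Gauss-Legendre rule on [-1, 1].
    Nodes are the roots of P_n, found by Newton iteration from the
    standard cosine initial guesses."""
    nodes, weights = [], []
    for i in range(1, n + 1):
        x = math.cos(math.pi * (i - 0.25) / (n + 0.5))   # initial guess
        for _ in range(100):
            # Evaluate P_n(x) and P_n'(x) via the three-term recurrence.
            p0, p1 = 1.0, x
            for k in range(2, n + 1):
                p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
            dp = n * (x * p1 - p0) / (x * x - 1.0)       # P_n'(x)
            dx = p1 / dp
            x -= dx
            if abs(dx) < 1e-15:
                break
        nodes.append(x)
        weights.append(2.0 / ((1.0 - x * x) * dp * dp))
    return nodes, weights

def gauss_legendre(f, a, b, n):
    """Integrate f over [a, b] with an n-point Gauss-Legendre rule."""
    xs, ws = legendre_nodes_weights(n)
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * sum(w * f(mid + half * x) for x, w in zip(xs, ws))

# A 20-point rule already integrates exp to near machine precision on [0, 1].
approx = gauss_legendre(math.exp, 0.0, 1.0, 20)
print(approx, abs(approx - (math.e - 1.0)))
```

    An n-point rule is exact for polynomials of degree 2n-1, which is why, for smooth integrands like those in Gauss' error formula, accuracy improves dramatically with order rather than with subdivision.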

  19. High-performance Integrated numerical methods for Two-phase Flow in Heterogeneous Porous Media

    NASA Astrophysics Data System (ADS)

    Chueh, Chih-Che; Djilali, Ned; Bangerth, Wolfgang

    2010-11-01

    Modelling of two-phase flow in heterogeneous porous media plays a decisive role in a variety of areas. However, how to solve the governing equations of flow in porous media efficiently and accurately remains a challenge. To ensure an accurate representation of the flow field while increasing computational efficiency, we incorporate a number of state-of-the-art techniques into a numerical framework on which more complicated models of multiphase flow in porous media will be based. The framework consists of an h-adaptive refinement method, an entropy-based artificial diffusive term, a new adaptive operator splitting method, and efficient preconditioners. In particular, we propose a new, efficient adaptive operator splitting that avoids solving the time-consuming pressure-velocity part at every saturation time step and, most importantly, we also provide a theoretical numerical analysis as well as a proof. A few benchmarks will be demonstrated in the presentation.

  20. Conservation properties of numerical integration methods for systems of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Rosenbaum, J. S.

    1976-01-01

    If a system of ordinary differential equations represents a property-conserving system whose conserved quantity can be expressed linearly (e.g., conservation of mass), it is desirable that the numerical integration method conserve the same quantity. It is shown that both linear multistep methods and Runge-Kutta methods are 'conservative' and that Newton-type methods used to solve the implicit equations preserve the inherent conservation of the numerical method. It is further shown that a method used by several authors is not conservative.
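
    The claim that Runge-Kutta methods preserve linearly expressed conserved quantities can be checked directly on a toy two-compartment mass-exchange system (an illustrative sketch; the rate constants and step size are arbitrary choices):

```python
# Two-compartment exchange: a' = -ka*a + kb*b,  b' = ka*a - kb*b.
# Total mass a + b is a linear invariant; a Runge-Kutta method preserves
# it up to rounding error, as the abstract states.

def f(state, ka=0.7, kb=0.3):
    a, b = state
    return (-ka * a + kb * b, ka * a - kb * b)

def rk4_step(state, h):
    def add(u, v, c):
        return tuple(ui + c * vi for ui, vi in zip(u, v))
    k1 = f(state)
    k2 = f(add(state, k1, h / 2))
    k3 = f(add(state, k2, h / 2))
    k4 = f(add(state, k3, h))
    incr = tuple(h * (a + 2 * b + 2 * c + d) / 6
                 for a, b, c, d in zip(k1, k2, k3, k4))
    return add(state, incr, 1.0)

state = (1.0, 0.0)
total0 = sum(state)
drift = 0.0
for _ in range(1000):
    state = rk4_step(state, 0.05)
    drift = max(drift, abs(sum(state) - total0))
print(drift)  # total mass is conserved to rounding error over all 1000 steps
```

    The key is that every Runge-Kutta stage evaluates the same f, whose components sum to zero, so each update moves mass between compartments without creating or destroying it.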

  1. Melt-rock reaction in the asthenospheric mantle: Perspectives from high-order accurate numerical simulations in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Tirupathi, S.; Schiemenz, A. R.; Liang, Y.; Parmentier, E.; Hesthaven, J.

    2013-12-01

    The style and mode of melt migration in the mantle are important to the interpretation of basalts erupted on the surface. Both grain-scale diffuse porous flow and channelized melt migration have been proposed. To better understand the mechanisms and consequences of melt migration in a heterogeneous mantle, we have undertaken a numerical study of reactive dissolution in an upwelling and viscously deformable mantle where the solubility of pyroxene increases upwards. Our setup is similar to that described in [1], except that we use a larger domain size in 2D and 3D and a new numerical method. To enable efficient simulations in 3D through parallel computing, we developed a high-order accurate numerical method for the magma dynamics problem using discontinuous Galerkin methods, and constructed the problem using the numerical library deal.II [2]. Linear stability analyses of the reactive dissolution problem reveal three dynamically distinct regimes [3], and the simulations reported in this study were run in the stable regime and in the unstable wave regime, where small perturbations in porosity grow periodically. The wave regime is more relevant to melt migration beneath the mid-ocean ridges but computationally more challenging. Extending the 2D simulations in the stable regime in [1] to 3D using various combinations of sustained perturbations in porosity at the base of the upwelling column (which may result from a veined mantle), we show that the geometry and distribution of dunite channels and high-porosity melt channels are highly correlated with the inflow perturbation through superposition. Strong nonlinear interactions among compaction, dissolution, and upwelling give rise to porosity waves and high-porosity melt channels in the wave regime. These compaction-dissolution waves have well organized but time-dependent structures in the lower part of the simulation domain. High-porosity melt channels nucleate along nodal lines of the porosity waves, growing downwards. The wavelength scales

  2. A comparison of the efficiency of numerical methods for integrating chemical kinetic rate equations

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    1984-01-01

    A comparison of the efficiency of several algorithms recently developed for the efficient numerical integration of stiff ordinary differential equations is presented. The methods examined include two general-purpose codes EPISODE and LSODE and three codes (CHEMEQ, CREK1D, and GCKP84) developed specifically to integrate chemical kinetic rate equations. The codes are applied to two test problems drawn from combustion kinetics. The comparisons show that LSODE is the fastest code currently available for the integration of combustion kinetic rate equations. An important finding is that an iterative solution of the algebraic energy conservation equation to compute the temperature can be more efficient than evaluating the temperature by integrating its time-derivative.

  3. Comparison of symbolic and numerical integration methods for an assumed-stress hybrid shell element

    NASA Technical Reports Server (NTRS)

    Rengarajan, Govind; Knight, Norman F., Jr.; Aminpour, Mohammad A.

    1993-01-01

    Hybrid shell elements have long been regarded with reserve by commercial finite element developers despite the high degree of reliability and accuracy associated with such formulations. The fundamental reason is the inherently higher computational cost of the hybrid approach as compared to displacement-based formulations. However, a noteworthy factor in favor of hybrid elements is that the numerical integration used to generate element matrices can be entirely avoided through symbolic integration. In this paper, the use of the symbolic computational approach is presented for an assumed-stress hybrid shell element with drilling degrees of freedom, and the significant time savings achieved are demonstrated through an example.

  4. NLOS UV channel modeling using numerical integration and an approximate closed-form path loss model

    NASA Astrophysics Data System (ADS)

    Gupta, Ankit; Noshad, Mohammad; Brandt-Pearce, Maïté

    2012-10-01

    In this paper we propose a simulation method using numerical integration, and develop a closed-form link loss model for physical layer channel characterization for non-line of sight (NLOS) ultraviolet (UV) communication systems. The impulse response of the channel is calculated by assuming both uniform and Gaussian profiles for transmitted beams and different geometries. The results are compared with previously published results. The accuracy of the integration approach is compared to the Monte Carlo simulation. Then the path loss using the simulation method and the suggested closed-form expression are presented for different link geometries. The accuracies are evaluated and compared to the results obtained using other methods.

  5. Numerical evaluation of two-center integrals over Slater type orbitals

    NASA Astrophysics Data System (ADS)

    Kurt, S. A.; Yükçü, N.

    2016-03-01

    Slater Type Orbitals (STOs), one of the families of exponential type orbitals (ETOs), are commonly used as basis functions in multicenter molecular integrals to better understand the physical and chemical properties of matter. In this work, we develop algorithms for two-center overlap and two-center two-electron hybrid and Coulomb integrals, which are calculated with the help of the translation method for STOs and some auxiliary functions introduced by V. Magnasco's group. We use the Mathematica programming language to implement these calculations. Numerical results for some quantum numbers are presented in tables. Finally, we compare our numerical results with other results known from the literature and discuss further details of the evaluation method.

  6. Extended RKN-type methods for numerical integration of perturbed oscillators

    NASA Astrophysics Data System (ADS)

    Yang, Hongli; Wu, Xinyuan; You, Xiong; Fang, Yonglei

    2009-10-01

    In this paper, extended Runge-Kutta-Nyström-type methods for the numerical integration of perturbed oscillators with low frequencies are presented, which inherit the framework of RKN methods and make full use of the special feature of the true flows for both the internal stages and the updates. Following the approach of J. Butcher, E. Hairer and G. Wanner, we develop a new kind of tree set to derive order conditions for the extended Runge-Kutta-Nyström-type methods. The numerical stability and phase properties of the new methods are analyzed. Numerical experiments are carried out to show the applicability and efficiency of our new methods in comparison with some well-known high-quality methods proposed in the scientific literature.

  7. Two step hybrid methods of 7th and 8th order for the numerical integration of second order IVPs

    NASA Astrophysics Data System (ADS)

    Kalogiratou, Z.; Monovasilis, Th.; Simos, T. E.

    2016-06-01

    In this work we consider the numerical integration of second order ODEs where the first derivative is missing. We construct two step hybrid methods with six and seven stages and seventh and eighth algebraic order. We apply the new methods on the numerical integration of several test problems.
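
As a concrete example of the problem class y'' = f(x, y) with the first derivative missing, here is the classical two-step Numerov scheme applied to the linear test equation (a standard textbook method, not the paper's seventh/eighth-order hybrid methods; the constants are illustrative):

```python
import math

# Numerov's two-step scheme for y'' = f(x, y) (4th order). On the
# linear test equation y'' = -w2*y the otherwise implicit step
# becomes explicit:
#   (1 + h^2 w2/12) y_{n+1} = 2(1 - 5 h^2 w2/12) y_n - (1 + h^2 w2/12) y_{n-1}
h, n = 0.1, 100
w2 = 1.0                                   # y'' = -y, exact solution cos(x)
a = 1 + h * h * w2 / 12
b = 2 * (1 - 5 * h * h * w2 / 12)
y0, y1 = 1.0, math.cos(h)                  # two exact starting values
for _ in range(n - 1):
    y0, y1 = y1, (b * y1 - a * y0) / a
print(abs(y1 - math.cos(h * n)))           # small: 4th-order accuracy
```

Like the paper's hybrid methods, the scheme advances using two previous values rather than internal derivative evaluations of y'.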

  8. Some remarks on the numerical computation of integrals on an unbounded interval

    NASA Astrophysics Data System (ADS)

    Capobianco, M.; Criscuolo, G.

    2007-08-01

    An account of the error and the convergence theory is given for Gauss-Laguerre and Gauss-Radau-Laguerre quadrature formulae. We also develop truncated models of the original Gauss rules to compute integrals extended over the positive real axis. Numerical examples confirming the theoretical results are given, comparing these rules among themselves and with different quadrature formulae proposed by other authors (Evans, Int. J. Comput. Math. 82:721-730, 2005; Gautschi, BIT 31:438-446, 1991).
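
For experimentation, Gauss-Laguerre nodes and weights are available in NumPy. A minimal sketch (the integrand is an illustrative choice) approximates an integral over the positive real axis:

```python
import numpy as np

# Gauss-Laguerre quadrature on [0, inf) with weight e^{-x}:
# integral of f(x) e^{-x} dx ~= sum_i w_i f(x_i). A 10-point rule is
# exact for polynomial f up to degree 19, so x^2 (exact integral 2)
# is reproduced to machine precision.
x, w = np.polynomial.laguerre.laggauss(10)
approx = np.sum(w * x**2)
print(approx)
```

For integrands that decay faster than the weight suggests, the truncated rules discussed in the paper drop the largest nodes to save function evaluations.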

  9. The Use of Phase-Lag Derivatives in the Numerical Integration of ODEs with Oscillating Solutions

    SciTech Connect

    Anastassi, Z. A.; Vlachos, D. S.; Simos, T. E.

    2008-09-01

    In this paper we consider fitting the coefficients of a numerical method not only to nullify the phase-lag, but also its derivatives. We show that the method gains efficiency with each derivative of the phase-lag nullified for various problems with oscillating solutions. The analysis of the local truncation error and of the stability of the methods shows the importance of zero phase-lag derivatives when integrating oscillatory differential equations.

  10. Analytical Solutions Using Integral Formulations and Their Coupling with Numerical Approaches.

    PubMed

    Morel-Seytoux, Hubert J

    2015-01-01

    Analytical and numerical approaches have their own distinct domains of merit and application. Unfortunately there has been a tendency to use either one or the other even when their domains overlap. Yet there is definite advantage in combining the two approaches. Being relatively new, this emerging technique of combining the approaches is, at this stage, more of an art than a science. In this article we suggest approaches for the combination through simple examples. We also suggest that the integral formulation of the analytical problems may have some advantages over the differential formulation. The differential formulation limits somewhat the range of linear system descriptions that can be applied to a variety of practical problems. On the other hand the integral approach tends to focus attention on the overall integrated behavior and properties of the system rather than on minute details. This is particularly useful in the coupling with a numerical model, which in practice also deals only with the integrated behavior of the system. The thesis of this article is illustrated with some simple stream-aquifer flow exchange examples. PMID:25213772

  11. DE 102 - A numerically integrated ephemeris of the moon and planets spanning forty-four centuries

    NASA Technical Reports Server (NTRS)

    Newhall, X. X.; Standish, E. M.; Williams, J. G.

    1983-01-01

    It is pointed out that the 1960's were the turning point for the generation of lunar and planetary ephemerides. All previous measurements of the positions of solar system bodies were optical angular measurements. New technological improvements leading to immense changes in observational accuracy are related to developments concerning radar, Viking landers on Mars, and laser ranges to lunar corner cube retroreflectors. Suitable numerical integration techniques and more comprehensive physical models were developed to match the accuracy of the modern data types. The present investigation is concerned with the first integrated ephemeris, DE 102, which covers the entire span of the historical astronomical observations of usable accuracy which are known. The fit is made to modern data. The integration spans the time period from 1411 BC to 3002 AD.

  12. Physical and numerical sources of computational inefficiency in integration of chemical kinetic rate equations: Etiology, treatment and prognosis

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.; Radhakrishnan, K.

    1986-01-01

    The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODE's) and its detection, choice of appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponential-fitted integration algorithms are inherently more accurate for kinetics modeling than classical, polynomial-interpolant methods for the same computational work. But current codes using the exponential-fitted method lack the sophisticated stepsize-control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.
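
The case for exponential-fitted algorithms can be made on the simplest decay model. The sketch below (toy constants; not CHEMEQ, CREK1D, or any code named above) compares an exponentially fitted step, which is exact for pure decay, with a polynomial-based explicit Euler step:

```python
import math

# Exponential fitting on y' = -lam*y, the asymptotic decay regime
# described above: the fitted step y_{n+1} = exp(-lam*h)*y_n is exact
# for any step size, while explicit Euler (a polynomial-interpolant
# method) is unstable once h*lam > 2. Constants are illustrative.
lam, h, steps = 200.0, 0.05, 20
y_fit, y_euler = 1.0, 1.0
for _ in range(steps):
    y_fit *= math.exp(-lam * h)        # exponentially fitted step: exact decay
    y_euler += h * (-lam * y_euler)    # explicit Euler with h*lam = 10
exact = math.exp(-lam * h * steps)
print(abs(y_fit - exact))   # zero up to rounding
print(abs(y_euler))         # diverges
```

This is the accuracy-per-work advantage the report describes; what the fitted codes lacked at the time was the step-size-control logic of EPISODE and LSODE.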

  13. Modified Chebyshev Picard Iteration for Efficient Numerical Integration of Ordinary Differential Equations

    NASA Astrophysics Data System (ADS)

    Macomber, B.; Woollands, R. M.; Probe, A.; Younes, A.; Bai, X.; Junkins, J.

    2013-09-01

    Modified Chebyshev Picard Iteration (MCPI) is an iterative numerical method for approximating solutions of linear or non-linear Ordinary Differential Equations (ODEs) to obtain time histories of system state trajectories. Unlike step-by-step differential equation solvers such as the Runge-Kutta family of numerical integrators, MCPI approximates long arcs of the state trajectory with an iterative path approximation approach, and is ideally suited to parallel computation. Orthogonal Chebyshev polynomials are used as basis functions during each path iteration; the integrations of the Picard iteration are then done analytically. Due to the orthogonality of the Chebyshev basis functions, the least-squares approximations are computed without matrix inversion; the coefficients are computed robustly from discrete inner products. As a consequence of the discrete sampling and weighting adopted for the inner product definition, Runge phenomenon errors are minimized near the ends of the approximation intervals. The MCPI algorithm utilizes a vector-matrix framework for computational efficiency. Additionally, all Chebyshev coefficients and integrand function evaluations are independent, meaning they can be computed simultaneously in parallel for further decreased computational cost. Over an order of magnitude speedup from traditional methods is achieved in serial processing, and an additional order of magnitude is achievable in parallel architectures. This paper presents a new MCPI library, a modular toolset designed to allow MCPI to be easily applied to a wide variety of ODE systems. Library users will not have to concern themselves with the underlying mathematics behind the MCPI method. Inputs are the boundary conditions of the dynamical system, the integrand function governing system behavior, and the desired time interval of integration, and the output is a time history of the system states over the interval of interest. Examples from the field of astrodynamics are
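
The core Picard-Chebyshev idea can be sketched in a few lines with NumPy's Chebyshev tools (an illustrative toy, not the MCPI library described in the paper): fit the integrand in a Chebyshev basis, integrate the series analytically, and iterate.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Minimal Picard-Chebyshev sketch: solve y' = y, y(0) = 1 on [0, 1].
# Each sweep fits f(t, y) = y with a Chebyshev series at cosine-spaced
# nodes, integrates the series analytically (the Picard integral), and
# updates the whole trajectory arc at once.
n = 32
t = 0.5 * (1.0 - np.cos(np.pi * np.arange(n + 1) / n))  # nodes in [0, 1]
y = np.ones_like(t)                                      # initial guess y(t) = 1
for _ in range(30):                                      # Picard sweeps
    series = C.Chebyshev.fit(t, y, deg=n, domain=[0, 1])   # fit the integrand
    y = 1.0 + series.integ(lbnd=0)(t)                      # y = y0 + integral of f
print(abs(y[-1] - np.e))  # converges toward machine precision at t = 1
```

Note that each sweep updates the entire arc, and all node evaluations within a sweep are independent, which is the source of the parallelism the paper exploits.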

  14. Numerical Solutions of Electromagnetic Problems by Integral Equation Methods and Finite-Difference Time-Domain Method.

    NASA Astrophysics Data System (ADS)

    Min, Xiaoyi

    This thesis first presents the study of the interaction of electromagnetic waves with three-dimensional heterogeneous, dielectric, magnetic, and lossy bodies by surface integral equation modeling. Based on the equivalence principle, a set of coupled surface integral equations is formulated and then solved numerically by the method of moments. Triangular elements are used to model the interfaces of the heterogeneous body, and vector basis functions are defined to expand the unknown current in the formulation. The validity of this formulation is verified by applying it to concentric spheres for which an exact solution exists. The potential applications of this formulation to a partially coated sphere and a homogeneous human body are discussed. Next, this thesis introduces an efficient new set of integral equations for treating the scattering problem of a perfectly conducting body coated with a thin magnetically lossy layer. These electric field integral equations and magnetic field integral equations are numerically solved by the method of moments (MoM). To validate the derived integral equations, an alternative method based on the eigenmode expansion has also been developed to solve the scattering problem of an infinite circular cylinder coated with a thin magnetic lossy layer. Results for the radar cross section and current densities via the MoM and the eigenmode expansion method are compared; the agreement is excellent. The finite-difference time-domain method is subsequently implemented for a metallic object coated with a magnetic thin layer, and the numerical results are compared with those obtained by the MoM. Finally, this thesis presents an application of the finite-difference time-domain approach to the problem of electromagnetic receiving and scattering by a cavity-backed antenna situated on an infinite conducting plane. This application involves modifications of Yee's model, which applies the difference approximations of field derivatives to differential

  15. CALL FOR PAPERS: Special Issue on `Geometric Numerical Integration of Differential Equations'

    NASA Astrophysics Data System (ADS)

    Quispel, G. R. W.; McLachlan, R. I.

    2005-02-01

    This is a call for contributions to a special issue of Journal of Physics A: Mathematical and General entitled `Geometric Numerical Integration of Differential Equations'. This issue should be a repository for high quality original work. We are interested in having the topic interpreted broadly, that is, to include contributions dealing with symplectic or multisymplectic integration; volume-preserving integration; symmetry-preserving integration; integrators that preserve first integrals, Lyapunov functions, or dissipation; exponential integrators; integrators for highly oscillatory systems; Lie-group integrators, etc. Papers on geometric integration of both ODEs and PDEs will be considered, as well as application to molecular-scale integration, celestial mechanics, particle accelerators, fluid flows, population models, epidemiological models and/or any other areas of science. We believe that this issue is timely, and hope that it will stimulate further development of this new and exciting field. The Editorial Board has invited G R W Quispel and R I McLachlan to serve as Guest Editors for the special issue. Their criteria for acceptance of contributions are the following: • The subject of the paper should relate to geometric numerical integration in the sense described above. • Contributions will be refereed and processed according to the usual procedure of the journal. • Papers should be original; reviews of a work published elsewhere will not be accepted. The guidelines for the preparation of contributions are as follows: • The DEADLINE for submission of contributions is 1 September 2005. This deadline will allow the special issue to appear in late 2005 or early 2006. • There is a strict page limit of 16 printed pages (approximately 9600 words) per contribution. For papers exceeding this limit, the Guest Editors reserve the right to request a reduction in length. Further advice on publishing your work in Journal of Physics A: Mathematical and General

  16. Sensitivity of inelastic response to numerical integration of strain energy. [for cantilever beam

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.

    1976-01-01

    The exact solution to the quasi-static, inelastic response of a cantilever beam of rectangular cross section subjected to a bending moment at the tip is obtained. The material of the beam is assumed to be linearly elastic-linearly strain-hardening. This solution is then compared with three different numerical solutions of the same problem obtained by minimizing the total potential energy using Gaussian quadratures of two different orders and a Newton-Cotes scheme for integrating the strain energy of deformation. Significant differences between the exact dissipative strain energy and its numerical counterpart are emphasized. The consequence of this on the nonlinear transient responses of a beam with solid cross section and that of a thin-walled beam on elastic supports under impulsive loads are examined.
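
The sensitivity reported above comes from quadrature of an integrand whose slope is discontinuous at the elastic-plastic interface. A one-dimensional sketch (an illustrative bilinear stress-strain curve, not the paper's beam problem) shows how standard low-order rules misjudge such an integral:

```python
# Strain-energy-like integral of a kinked (linearly elastic-linearly
# hardening) curve over [0, 1]: slope 1 up to e = 0.2, slope 0.1 after.
# Both a 2-point Gauss rule and Simpson's rule assume a smooth
# integrand and miss the kink by several percent.
def stress(e):
    return e if e <= 0.2 else 0.2 + 0.1 * (e - 0.2)

# exact energy: triangle + rectangle + hardening triangle
exact = 0.5 * 0.2**2 + 0.2 * 0.8 + 0.5 * 0.1 * 0.8**2   # = 0.212

r = 3 ** -0.5                                            # Gauss nodes 0.5 +- 0.5/sqrt(3)
gauss2 = 0.5 * (stress(0.5 - 0.5 * r) + stress(0.5 + 0.5 * r))
simpson = (stress(0.0) + 4 * stress(0.5) + stress(1.0)) / 6

print(abs(gauss2 - exact), abs(simpson - exact))  # both off by several percent
```

Splitting the integration at the kink (or raising the rule's order within each smooth piece) removes the error, which is the kind of difference the paper traces into the transient response.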

  17. IAS15: a fast, adaptive, high-order integrator for gravitational dynamics, accurate to machine precision over a billion orbits

    NASA Astrophysics Data System (ADS)

    Rein, Hanno; Spiegel, David S.

    2015-01-01

    We present IAS15, a 15th-order integrator to simulate gravitational dynamics. The integrator is based on a Gauß-Radau quadrature and can handle conservative as well as non-conservative forces. We develop a step-size control that can automatically choose an optimal timestep. The algorithm can handle close encounters and high-eccentricity orbits. The systematic errors are kept well below machine precision, and long-term orbit integrations over 10^9 orbits show that IAS15 is optimal in the sense that it follows Brouwer's law, i.e. the energy error behaves like a random walk. Our tests show that IAS15 is superior to a mixed-variable symplectic integrator and other popular integrators, including high-order ones, in both speed and accuracy. In fact, IAS15 preserves the symplecticity of Hamiltonian systems better than the commonly used nominally symplectic integrators to which we compared it. We provide an open-source implementation of IAS15. The package comes with several easy-to-extend examples involving resonant planetary systems, Kozai-Lidov cycles, close encounters, radiation pressure, quadrupole moment and generic damping functions that can, among other things, be used to simulate planet-disc interactions. Other non-conservative forces can be added easily.

  18. Numerical and analytical tests of quasi-integrability in modified sine-Gordon models

    NASA Astrophysics Data System (ADS)

    Ferreira, L. A.; Zakrzewski, Wojtek J.

    2014-01-01

    Following our attempts to define quasi-integrability in which we related this concept to a particular symmetry of the two-soliton function we check this condition in three classes of modified sine-Gordon models in (1 + 1) dimensions. We find that the numerical results seen in various scatterings of two solitons and in the time evolution of breather-like structures support our ideas about the symmetry of the field configurations and its effects on the anomalies of the conservation laws of the charges.

  19. Time transformations and Cowell's method. [for numerical integration of satellite motion equations

    NASA Technical Reports Server (NTRS)

    Velez, C. E.; Hilinski, S.

    1978-01-01

    The precise numerical integration of Cowell's equations of satellite motion is frequently performed with an independent variable s defined by an equation of the form dt = c r^n ds, where t represents time, r the radial distance from the center of attraction, c is a constant, and n is a parameter. This has been primarily motivated by the 'uniformizing' effects of such a transformation resulting in desirable 'analytic' stepsize control for elliptical orbits. This report discusses the 'proper' choice of the parameter n defining the independent variable s for various types of orbits and perturbation models, and develops a criterion for its selection.
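
A sketch of the transformation for the unperturbed two-body problem, with c = 1 and n = 1 (illustrative values; the report's analysis covers general n), shows the 'analytic stepsize control' effect: fixed steps in s automatically concentrate time steps near perihelion while energy stays well conserved on an eccentric orbit.

```python
import math

# Two-body problem (mu = 1) integrated in the transformed variable s
# with dt = r ds: the state is (x, y, vx, vy, t), and every physical
# derivative is scaled by f = r, so a fixed ds gives dt proportional
# to r. Plain RK4 in s serves as the integrator for this sketch.
def deriv(state):
    x, y, vx, vy, t = state
    r = math.hypot(x, y)
    f = r                              # dt/ds = c * r**n with c = 1, n = 1
    ax, ay = -x / r**3, -y / r**3
    return (f * vx, f * vy, f * ax, f * ay, f)

def rk4(state, h):
    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + h / 6 * (p + 2 * q + 2 * u + v)
                 for s, p, q, u, v in zip(state, k1, k2, k3, k4))

def energy(state):
    x, y, vx, vy, _ = state
    return 0.5 * (vx**2 + vy**2) - 1 / math.hypot(x, y)

state = (1.0, 0.0, 0.0, 1.3, 0.0)      # eccentricity ~0.69 ellipse
e0 = energy(state)
for _ in range(2000):
    state = rk4(state, 0.01)           # fixed step in s, variable step in t
print(abs(energy(state) - e0))         # stays small despite the eccentricity
```

With a fixed step in t instead, the same integrator would need a much smaller step to survive perihelion passages.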

  20. Numerical Modeling of Pressurization of Cryogenic Propellant Tank for Integrated Vehicle Fluid System

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok K.; LeClair, Andre C.; Hedayat, Ali

    2016-01-01

    This paper presents a numerical model of pressurization of a cryogenic propellant tank for the Integrated Vehicle Fluid (IVF) system using the Generalized Fluid System Simulation Program (GFSSP). The IVF propulsion system, being developed by United Launch Alliance, uses boiloff propellants to drive thrusters for the reaction control system as well as to run internal combustion engines to develop power and drive compressors to pressurize propellant tanks. NASA Marshall Space Flight Center (MSFC) has been running tests to verify the functioning of the IVF system using a flight tank. GFSSP, a finite volume based flow network analysis software developed at MSFC, has been used to develop an integrated model of the tank and the pressurization system. This paper presents an iterative algorithm for converging the interface boundary conditions between different component models of a large system model. The model results have been compared with test data.

  1. Three-dimensional numerical modeling of photonic integration with dielectric-loaded SPP waveguides

    NASA Astrophysics Data System (ADS)

    Krasavin, A. V.; Zayats, A. V.

    2008-07-01

    Using full three-dimensional numerical modeling, we demonstrate highly efficient passive and active photonic circuit elements based on dielectric-loaded surface plasmon polariton waveguides (DLSPPWs). The highly confined surface plasmon polariton (SPP) mode, having a subwavelength cross section, allows a high level of integration of DLSPPW circuitry. We demonstrate very efficient guiding and routing of SPP signals with passive waveguide elements such as bends, splitters, and Bragg reflectors, having a functional size of just a few microns at telecommunication wavelengths. Introducing gain in the dielectric, we have found the requirement for lossless waveguiding and estimated the performance of DLSPPW lossless and active elements. DLSPPW-based components have prospective implementations in photonic integrated chips, hybrid optical-electronic circuits, and lab-on-a-chip applications.

  2. Comparison of numerical techniques for integration of stiff ordinary differential equations arising in combustion chemistry

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    1984-01-01

    The efficiency and accuracy of several recently developed algorithms for the numerical integration of stiff ordinary differential equations are compared. The methods examined include two general-purpose codes, EPISODE and LSODE, and three codes (CHEMEQ, CREK1D, and GCKP84) developed specifically to integrate chemical kinetic rate equations. The codes are applied to two test problems drawn from combustion kinetics. The comparisons show that LSODE is the fastest code currently available for the integration of combustion kinetic rate equations. An important finding is that an iterative solution of the algebraic energy conservation equation to compute the temperature does not result in significant errors. In addition, this method is more efficient than evaluating the temperature by integrating its time derivative. Significant reductions in computational work are realized by updating the rate constants (k = A T^N exp(-E/RT)) only when the temperature change exceeds an amount delta T that is problem dependent. An approximate expression for the automatic evaluation of delta T is derived and is shown to result in increased efficiency.
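
The rate-constant update strategy can be sketched as a small cache around the Arrhenius expression; all numbers below are illustrative placeholders, not values from the paper:

```python
import math

# Recompute k = A * T**N * exp(-E/(R*T)) only when T has drifted more
# than dT from the temperature at the last evaluation; otherwise reuse
# the cached value. A, N, E, and dT here are illustrative placeholders.
R = 8.314

def make_rate(A, N, E, dT):
    cache = {"T": None, "k": None}
    def k(T):
        if cache["T"] is None or abs(T - cache["T"]) > dT:
            cache["T"], cache["k"] = T, A * T**N * math.exp(-E / (R * T))
        return cache["k"]
    return k

k = make_rate(A=1e9, N=0.5, E=8.0e4, dT=2.0)
k1 = k(1000.0)
k2 = k(1001.0)   # within dT of the last evaluation: cached value reused
k3 = k(1010.0)   # drift exceeds dT: the rate constant is recomputed
print(k1 == k2, k1 == k3)  # True False
```

Since the exponential is the dominant cost per species per step, skipping it while T barely moves is where the reported savings come from.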

  3. Finite element numerical integration for first order approximations on multi- and many-core architectures

    NASA Astrophysics Data System (ADS)

    Banaś, Krzysztof; Krużel, Filip; Bielański, Jan

    2016-06-01

    The paper presents investigations on the implementation and performance of the finite element numerical integration algorithm for first order approximations on three processor architectures popular in scientific computing: classical CPUs, the Intel Xeon Phi, and NVIDIA Kepler GPUs. A unifying programming model and a portable OpenCL implementation are considered for all architectures. Variations of the algorithm due to different problems solved and different element types are investigated, and several optimizations aimed at properly mapping the algorithm to the computer architectures are demonstrated. Performance models of execution are developed for the different processors and tested in practical experiments. The results show varying levels of performance for the different architectures, but indicate that the algorithm can be effectively ported to all of them. The general conclusion is that finite element numerical integration can achieve sufficient performance on different multi- and many-core architectures and should not become a performance bottleneck for finite element simulation codes. Specific observations lead to practical advice on how to optimize the kernels and what performance can be expected for the tested architectures.
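
The kernel in question is, at its core, a loop over Gauss points accumulating element matrices. A scalar 1D sketch (unit Jacobian, quadratic element; a toy stand-in for the paper's OpenCL kernels) shows the basic operation being ported:

```python
import numpy as np

# Finite element numerical integration in miniature: accumulate the
# stiffness matrix of a 1D quadratic element on [-1, 1] with a 2-point
# Gauss rule (exact here, since the integrand is quadratic). Real FE
# kernels run this per element, with Jacobians and material data.
gauss_pts = np.array([-1.0, 1.0]) / np.sqrt(3.0)
gauss_wts = np.array([1.0, 1.0])

def dN(xi):
    # derivatives of the quadratic shape functions at xi
    return np.array([xi - 0.5, -2.0 * xi, xi + 0.5])

K = np.zeros((3, 3))
for xi, w in zip(gauss_pts, gauss_wts):
    g = dN(xi)
    K += w * np.outer(g, g)   # w * dN_i(xi) * dN_j(xi)

print(K * 6)  # equals [[7, -8, 1], [-8, 16, -8], [1, -8, 7]] up to rounding
```

On GPUs the design question studied in the paper is how to distribute exactly this Gauss-point loop (and the per-element accumulation) across threads and memory levels.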

  4. Response sensitivity analysis of the dynamic milling process based on the numerical integration method

    NASA Astrophysics Data System (ADS)

    Ding, Ye; Zhu, Limin; Zhang, Xiaojian; Ding, Han

    2012-09-01

    As one of the bases of gradient-based optimization algorithms, sensitivity analysis is usually required to calculate the derivatives of the system response with respect to the machining parameters. The most widely used approaches for sensitivity analysis are based on time-consuming numerical methods, such as finite difference methods. This paper presents a semi-analytical method for calculation of the sensitivity of the stability boundary in milling. After transforming the delay-differential equation with time-periodic coefficients governing the dynamic milling process into the integral form, the Floquet transition matrix is constructed by using the numerical integration method. Then, the analytical expressions of derivatives of the Floquet transition matrix with respect to the machining parameters are obtained. Thereafter, the classical analytical expression of the sensitivity of matrix eigenvalues is employed to calculate the sensitivity of the stability lobe diagram. The two-degree-of-freedom milling example illustrates the accuracy and efficiency of the proposed method. Compared with the existing methods, the unique merit of the proposed method is that it can be used for analytically computing the sensitivity of the stability boundary in milling, without employing any finite difference methods. Therefore, the high accuracy and high efficiency are both achieved. The proposed method can serve as an effective tool for machining parameter optimization and uncertainty analysis in high-speed milling.
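
The classical eigenvalue-sensitivity expression used above, dlam/dp = w^H (dA/dp) v / (w^H v) for a simple eigenvalue with right and left eigenvectors v and w, can be checked numerically on a small toy matrix (illustrative, not the milling model):

```python
import numpy as np

# Analytical sensitivity of a simple eigenvalue of A(p) with respect to
# the parameter p, verified against a finite difference. A(p) below is
# a toy 2x2 parameter-dependent matrix, not the Floquet matrix of the
# milling model.
def A(p):
    return np.array([[0.0, 1.0], [-2.0 - p, -0.3]])

dA_dp = np.array([[0.0, 0.0], [-1.0, 0.0]])   # exact derivative of A(p)
p0 = 0.5

lam, V = np.linalg.eig(A(p0))
order = np.argsort(lam)                # fix a consistent eigenvalue ordering
lam, V = lam[order], V[:, order]
W = np.linalg.inv(V).conj().T          # columns are the left eigenvectors
v, w = V[:, 0], W[:, 0]
dlam = (w.conj() @ dA_dp @ v) / (w.conj() @ v)

eps = 1e-6                             # finite-difference check
fd = (np.sort(np.linalg.eig(A(p0 + eps))[0])[0] - lam[0]) / eps
print(abs(dlam - fd))                  # the two sensitivities agree
```

In the paper the same formula is applied to the Floquet transition matrix, whose derivative with respect to the machining parameters is available analytically, so no finite differencing is needed.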

  5. Application of Numerical Integration and Data Fusion in Unit Vector Method

    NASA Astrophysics Data System (ADS)

    Zhang, J.

    2012-01-01

    The Unit Vector Method (UVM) is a series of orbit determination methods which were designed by Purple Mountain Observatory (PMO) and have been applied extensively. It obtains the conditional equations for different kinds of data by projecting the basic equation onto different unit vectors, and it is well suited to weighting different kinds of data. High-precision data can play a major role in orbit determination, and the accuracy of orbit determination is improved obviously. The improved UVM (PUVM2) extended the UVM from initial orbit determination to orbit improvement, and unified initial orbit determination and orbit improvement dynamically; the precision and efficiency are improved further. In this thesis, further research work has been done based on the UVM. Firstly, with the improvement of observational methods and techniques, the types and precision of the observational data have improved substantially, which in turn demands higher precision in orbit determination. Analytical perturbation theory cannot meet this requirement, so numerical integration for calculating the perturbations has been introduced into the UVM. The accuracy of the dynamical model now matches the accuracy of the real data, and the condition equations of the UVM are modified accordingly; the accuracy of orbit determination is improved further. Secondly, a data fusion method has been introduced into the UVM. The convergence mechanism and the defect of the weighting strategy in the original UVM have been clarified. The problem has been solved in this method: the calculation of the approximate state transition matrix is simplified, and the weighting strategy has been improved for data with different dimensions and different precision. Results of orbit determination with simulated and real data show that the work of this thesis is effective: (1) After the numerical integration has been introduced into the UVM, the accuracy of orbit determination is improved obviously, and it is suitable for the high-accuracy data of

  6. Integrated Numerical Experiments (INEX) and the Free-Electron Laser Physical Process Code (FELPPC)

    SciTech Connect

    Thode, L.E.; Chan, K.C.D.; Schmitt, M.J.; McKee, J.; Ostic, J.; Elliott, C.J.; McVey, B.D.

    1990-01-01

    The strong coupling of subsystem elements, such as the accelerator, wiggler, and optics, greatly complicates the understanding and design of a free electron laser (FEL), even at the conceptual level. Given the requirements for high-performance FELs, the strong coupling between the laser subsystems must be included to obtain a realistic picture of the potential operational capability. To address the strong coupling character of the FEL the concept of an Integrated Numerical Experiment (INEX) was proposed. Unique features of the INEX approach are consistency and numerical equivalence of experimental diagnostics. The equivalent numerical diagnostics mitigates the major problem of misinterpretation that often occurs when theoretical and experimental data are compared. The INEX approach has been applied to a large number of accelerator and FEL experiments. Overall, the agreement between INEX and the experiments is very good. Despite the success of INEX, the approach is difficult to apply to trade-off and initial design studies because of the significant manpower and computational requirements. On the other hand, INEX provides a base from which realistic accelerator, wiggler, and optics models can be developed. The Free Electron Laser Physical Process Code (FELPPC) includes models developed from INEX, provides coupling between the subsystems models and incorporates application models relevant to a specific trade-off or design study.

  7. Integrated Numerical Experiments (INEX) and the Free-Electron Laser Physical Process Code (FELPPC)

    NASA Astrophysics Data System (ADS)

    Thode, L. E.; Chan, K. C. D.; Schmitt, M. J.; McKee, J.; Ostic, J.; Elliott, C. J.; McVey, B. D.

    The strong coupling of subsystem elements, such as the accelerator, wiggler, and optics, greatly complicates the understanding and design of a free electron laser (FEL), even at the conceptual level. Given the requirements for high-performance FELs, the strong coupling between the laser subsystems must be included to obtain a realistic picture of the potential operational capability. To address the strong coupling character of the FEL the concept of an Integrated Numerical Experiment (INEX) was proposed. Unique features of the INEX approach are consistency and numerical equivalence of experimental diagnostics. The equivalent numerical diagnostics mitigates the major problem of misinterpretation that often occurs when theoretical and experimental data are compared. The INEX approach has been applied to a large number of accelerator and FEL experiments. Overall, the agreement between INEX and the experiments is very good. Despite the success of INEX, the approach is difficult to apply to trade-off and initial design studies because of the significant manpower and computational requirements. On the other hand, INEX provides a base from which realistic accelerator, wiggler, and optics models can be developed. The Free Electron Laser Physical Process Code (FELPPC) includes models developed from INEX, provides coupling between the subsystems models and incorporates application models relevant to a specific trade-off or design study.

  8. Dynamics analysis of flexible mechanisms based on mixed numerical integration methods of Hilber-Hughes-Taylor and Rosenbrock-Wanner

    SciTech Connect

    Ianakiev, A.; Esat, I.I.

    1995-09-01

    Numerical solution of dynamical systems with widely varying motion characteristics, such as relatively slow motion coupled with high-frequency vibration as in flexible mechanisms, is likely to pose problems. In this paper the mathematical model of a flexible mechanism is solved by using a mixed integration method that attempts to deal with the complexity of the coupled differential equations of the rigid-body and elastic motion. The mixed integration method combines two integration methods (the Rosenbrock-Wanner and Hilber-Hughes-Taylor methods) in order to minimize the computational complexity required for the approximation of the real system. The Hilber-Hughes-Taylor method incorporates numerical damping that selectively affects only the higher modes of vibration. The improvement of the stability and the accuracy of the solution due to the numerical damping has been demonstrated via a numerical example that represents a stiff system. The example system was selected to contain a physically important low frequency and spurious high-frequency oscillations. The solution method filtered the high-frequency numerical oscillations from the response results. The Rosenbrock-Wanner integration technique was also presented; in this case it is shown that fine adjustment of the integration parameters can affect the degree of numerical damping. A mixed integration method, a combination of the two, was found to give the best performance and accuracy in the case of stiff problems.

  9. Evaluation of 3 numerical methods for propulsion integration studies on transonic transport configurations

    NASA Technical Reports Server (NTRS)

    Yaros, S. F.; Carlson, J. R.; Chandrasekaran, B.

    1986-01-01

    An effort has been undertaken at the NASA Langley Research Center to assess the capabilities of available computational methods for use in propulsion integration design studies of transonic transport aircraft, particularly of pylon/nacelle combinations which exhibit essentially no interference drag. The three computer codes selected represent state-of-the-art computational methods for analyzing complex configurations at subsonic and transonic flight conditions. These are: EULER, a finite volume solution of the Euler equations; VSAERO, a panel solution of the Laplace equation; and PPW, a finite difference solution of the small-disturbance transonic equations. In general, all three codes have certain capabilities that allow them to be of some value in predicting the flows about transport configurations, but all have limitations. Until more accurate methods are available, careful application and interpretation of the results of these codes are needed.

  10. Numerical simulation of Stokes flow around particles via a hybrid Finite Difference-Boundary Integral scheme

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Amitabh

    2013-11-01

    An efficient algorithm for simulating Stokes flow around particles is presented here, in which a second-order Finite Difference method (FDM) is coupled to a Boundary Integral method (BIM). This method utilizes the strong points of the FDM (i.e. a localized stencil) and the BIM (i.e. an accurate representation of the particle surface). Specifically, in each iteration, the flow field away from the particles is solved on a Cartesian FDM grid, while the traction on the particle surface (given the velocity of the particle) is solved using the BIM. The two schemes are coupled by matching the solution in an intermediate region between the particle and the surrounding fluid. We validate this method by solving for flow around an array of cylinders, and find good agreement with Hasimoto's (J. Fluid Mech., 1959) analytical results.

  11. Numerical analysis of composite STEEL-CONCRETE SECTIONS using integral equation of Volterra

    NASA Astrophysics Data System (ADS)

    Partov, Doncho; Kantchev, Vesselin

    2011-09-01

    The paper presents an analysis of the stress and deflection changes due to creep in a statically determinate composite steel-concrete beam. The mathematical model involves the equations of equilibrium, compatibility and the constitutive relationship, i.e. an elastic law for the steel part and an integral-type creep law of Boltzmann-Volterra for the concrete part. On the basis of the theory of the viscoelastic body of Arutyunian-Trost-Bažant for determining the redistribution of stresses in the beam section between the concrete plate and the steel beam with respect to time t, two independent Volterra integral equations of the second kind have been derived. A numerical method based on linear approximation of the singular kernel function in the integral equation is presented. An example with the proposed model is investigated. The creep functions are those suggested by the CEB MC90-99 model and the ACI 209R-92 model. The elastic modulus of the concrete Ec(t) is assumed to be constant in time t. The results obtained from the two models are compared.
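The second-kind Volterra machinery the abstract relies on can be sketched generically. The solver below uses trapezoidal (piecewise-linear) quadrature in the spirit of the "linear approximation of the kernel" mentioned above; the creep kernels of the CEB and ACI models are not reproduced here, and the test kernel is a simple one with a known closed-form solution:

```python
import math

def solve_volterra2(f, K, t_end, n):
    """Solve the second-kind Volterra equation
        u(t) = f(t) + integral_0^t K(t, s) u(s) ds
    on [0, t_end] by marching forward with the trapezoidal rule."""
    h = t_end / n
    t = [i * h for i in range(n + 1)]
    u = [f(t[0])]                       # at t = 0 the integral vanishes
    for i in range(1, n + 1):
        acc = f(t[i]) + 0.5 * h * K(t[i], t[0]) * u[0]
        for j in range(1, i):
            acc += h * K(t[i], t[j]) * u[j]
        # the unknown u_i also appears inside the quadrature;
        # solve the resulting scalar linear equation for it
        u.append(acc / (1.0 - 0.5 * h * K(t[i], t[i])))
    return t, u

# u(t) = 1 - integral_0^t u(s) ds has the exact solution u(t) = exp(-t)
t, u = solve_volterra2(lambda t: 1.0, lambda t, s: -1.0, 1.0, 200)
print(u[-1], math.exp(-1.0))   # the two values agree to ~1e-6
```

Because the kernel enters linearly, each step costs one scalar division; a production creep analysis would substitute the CEB/ACI relaxation kernel for the toy `K` used here.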

  12. Numerical analysis of composite STEEL-CONCRETE SECTIONS using integral equation of Volterra

    NASA Astrophysics Data System (ADS)

    Partov, Doncho; Kantchev, Vesselin

    2011-09-01

    The paper presents an analysis of the stress and deflection changes due to creep in a statically determinate composite steel-concrete beam. The mathematical model involves the equations of equilibrium, compatibility and the constitutive relationship, i.e. an elastic law for the steel part and an integral-type creep law of Boltzmann-Volterra for the concrete part. On the basis of the theory of the viscoelastic body of Arutyunian-Trost-Bažant for determining the redistribution of stresses in the beam section between the concrete plate and the steel beam with respect to time t, two independent Volterra integral equations of the second kind have been derived. A numerical method based on linear approximation of the singular kernel function in the integral equation is presented. An example with the proposed model is investigated. The creep functions are those suggested by the CEB MC90-99 model and the ACI 209R-92 model. The elastic modulus of the concrete Ec(t) is assumed to be constant in time t. The results obtained from the two models are compared.

  13. Integration of a silicon-based microprobe into a gear measuring instrument for accurate measurement of micro gears

    NASA Astrophysics Data System (ADS)

    Ferreira, N.; Krah, T.; Jeong, D. C.; Metz, D.; Kniel, K.; Dietzel, A.; Büttgenbach, S.; Härtig, F.

    2014-06-01

    The integration of silicon micro probing systems into conventional gear measuring instruments (GMIs) allows fully automated measurements of external involute micro spur gears of normal modules smaller than 1 mm. This system, based on a silicon microprobe, has been developed and manufactured at the Institute for Microtechnology of the Technische Universität Braunschweig. The microprobe consists of a silicon sensor element and a stylus which is oriented perpendicularly to the sensor. The sensor is fabricated by means of silicon bulk micromachining. Its small dimensions of 6.5 mm × 6.5 mm allow compact mounting in a cartridge to facilitate the integration into a GMI. In this way, tactile measurements of 3D microstructures can be realized. To enable three-dimensional measurements with marginal forces, four Wheatstone bridges are built with diffused piezoresistors on the membrane of the sensor. On the reverse of the membrane, the stylus is glued perpendicularly to the sensor on a boss to transmit the probing forces to the sensor element during measurements. Sphere diameters smaller than 300 µm and shaft lengths of 5 mm as well as measurement forces from 10 µN enable the measurements of 3D microstructures. Such micro probing systems can be integrated into universal coordinate measuring machines and also into GMIs to extend their field of application. Practical measurements were carried out at the Physikalisch-Technische Bundesanstalt by qualifying the microprobes on a calibrated reference sphere to determine their sensitivity and their physical dimensions in volume. Following that, profile and helix measurements were carried out on a gear measurement standard with a module of 1 mm. The comparison of the measurements shows good agreement between the measurement values and the calibrated values. This result is a promising basis for the realization of smaller probe diameters for the tactile measurement of micro gears with smaller modules.

  14. Direct hot slumping and accurate integration process to manufacture prototypal x-ray optical units made of glass

    NASA Astrophysics Data System (ADS)

    Civitani, M.; Ghigo, M.; Basso, S.; Proserpio, L.; Spiga, D.; Salmaso, B.; Pareschi, G.; Tagliaferri, G.; Burwitz, V.; Hartner, G.; Menz, B.; Bavdaz, M.; Wille, E.

    2013-09-01

    X-ray telescopes with very large collecting area, like the proposed International X-ray Observatory (IXO, with around 3 m2 at 1 keV), need to be composed of a large number of high-quality mirror segments, aiming at an angular resolution better than 5 arcsec HEW (Half-Energy Width). A possible technology to manufacture the modular elements that will compose the entire optical module, named X-ray Optical Units (XOUs), consists of stacking in Wolter-I configuration several layers of thin borosilicate glass foils, previously formed by hot slumping. The XOUs are subsequently assembled to form complete multi-shell optics with Wolter-I geometry. The achievable global angular resolution of the optic relies on the required surface shape accuracy of the slumped foils, on the smoothness of the mirror surfaces, and on the correct integration and co-alignment of the mirror segments. The Brera Astronomical Observatory (INAF-OAB) is leading a study, supported by ESA, concerning the implementation of the IXO telescopes based on thin slumped glass foils. In addition to the opto-mechanical design, the study foresees the development of a direct hot slumping production technology for thin glass foils. Moreover, an innovative assembly concept making use of Wolter-I counter-form moulds and glass reinforcing ribs is under development. The ribs connect pairs of consecutive foils in an XOU stack, playing both a structural and a functional role: as they constrain the foil profile to the correct shape during bonding, they damp the low-frequency profile errors still present on the foil after slumping. A dedicated semi-robotic Integration MAchine (IMA) has been realized for this purpose and used to build a few integrated prototypes made of several layers of slumped plates. In this paper we provide an overview of the project and report the results achieved so far, including full-illumination intra-focus X-ray tests of the last integrated prototype that are compliant with a HEW of

  15. Erratum: ``Orbital Advection by Interpolation: A Fast and Accurate Numerical Scheme for Super-Fast MHD Flows'' (ApJ, 177, 373 [2008])

    SciTech Connect

    Johnson, B M; Guan, X; Gammie, C F

    2008-06-24

    The descriptions of some of the numerical tests in our original paper are incomplete, making reproduction of the results difficult. We provide the missing details here. The relevant tests are described in section 4 of the original paper (Figures 8-11).

  16. Simulation of Accurate Vibrationally Resolved Electronic Spectra: the Integrated Time-Dependent and Time-Independent Framework

    NASA Astrophysics Data System (ADS)

    Baiardi, Alberto; Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien

    2014-06-01

    Two parallel theories including Franck-Condon, Herzberg-Teller and Duschinsky (i.e., mode-mixing) effects, and allowing different approximations for the description of the excited-state PES, have been developed in order to simulate realistic, asymmetric electronic spectral line-shapes that take into account the vibrational structure: the so-called sum-over-states or time-independent (TI) method and the alternative time-dependent (TD) approach, which exploits the properties of the Fourier transform. The integrated TI-TD procedure, included within a general-purpose QM code [1,2], allows the computation of one-photon absorption, fluorescence, phosphorescence, electronic circular dichroism, circularly polarized luminescence and resonance Raman spectra. Combining both approaches, which use a single set of starting data, makes it possible to profit from their respective advantages and minimize their respective limits: the time-dependent route automatically includes all vibrational states and, possibly, temperature effects, while the time-independent route makes it possible to identify and assign individual vibronic transitions. Interpretation, analysis and assignment of experimental spectra based on integrated TI-TD vibronic computations will be illustrated for challenging cases of medium-sized open-shell systems in the gas and condensed phases with inclusion of leading anharmonic effects. 1. V. Barone, A. Baiardi, M. Biczysko, J. Bloino, C. Cappelli, F. Lipparini, Phys. Chem. Chem. Phys., 14, 12404 (2012). 2. A. Baiardi, V. Barone, J. Bloino, J. Chem. Theory Comput., 9, 4097-4115 (2013).

  17. Numerical evaluation of the Feynman integral-over-paths in real and imaginary-time

    NASA Astrophysics Data System (ADS)

    Register, L. F.; Stroscio, M. A.; Littlejohn, M. A.

    New techniques are described for Monte Carlo evaluation of the propagation of quantum mechanical systems in both real and imaginary time using the Feynman integral-over-paths formulation of quantum mechanics. For imaginary-time calculations, path translation is used to augment the technique of Lawande et al. This simple yet powerful technique allows the equilibrium probability density to be accurately evaluated in the presence of multiple potential wells. It is shown that path translation permits the calculation of the unknown ground-state energy of one confining potential by comparison with the known ground-state energy of another. A double finite-square-well potential and a finite-square-well/parabolic-well pair are presented as examples. For real-time calculations, a weighted analytical averaging of the exponential of the classical action is performed over a region of paths. This "windowed action" has both real and imaginary components. The imaginary component yields an exponentially decaying probability for selecting paths, thereby providing a basis for the Monte Carlo evaluation of the real-time integral-over-paths. Examples of a wave-packet in a parabolic well and a wave-packet impinging upon a potential barrier are considered.
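A deterministic toy version of the imaginary-time idea underlying the abstract (grid-based propagation rather than the Monte Carlo path sampling described above) shows how repeatedly applying the imaginary-time propagator exp(-tau H) filters out excited states and yields the ground-state energy; the grid sizes and the harmonic test potential are illustrative choices, not the authors' setup:

```python
import math

def ground_state_energy(V, L=5.0, n=201, dtau=1e-3, steps=8000):
    """Project out the ground state of H = -(1/2) d^2/dx^2 + V(x) by
    repeated application of (1 - dtau*H), a first-order approximation of
    the imaginary-time propagator exp(-dtau*H) (hbar = m = 1).  Excited
    states decay as exp(-tau*(E_n - E_0)), leaving the ground state."""
    dx = 2.0 * L / (n - 1)
    x = [-L + i * dx for i in range(n)]
    psi = [math.exp(-xi * xi) for xi in x]   # any trial state with overlap

    def H(p, i):
        # central-difference kinetic term plus the local potential
        kin = -0.5 * (p[i - 1] - 2.0 * p[i] + p[i + 1]) / dx ** 2
        return kin + V(x[i]) * p[i]

    for _ in range(steps):
        new = [0.0] * n                      # Dirichlet walls at +/- L
        for i in range(1, n - 1):
            new[i] = psi[i] - dtau * H(psi, i)
        norm = math.sqrt(sum(p * p for p in new) * dx)
        psi = [p / norm for p in new]        # renormalize each step
    # energy expectation value <psi|H|psi>
    return sum(psi[i] * H(psi, i) * dx for i in range(1, n - 1))

E = ground_state_energy(lambda x: 0.5 * x * x)   # harmonic well
print(E)   # close to the exact ground-state energy 1/2
```

The same projection principle drives imaginary-time path-integral Monte Carlo; the paper's path-translation trick then compares two such potentials so that one known ground-state energy calibrates the other.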

  18. Intra-Auditory Integration Improves Motor Performance and Synergy in an Accurate Multi-Finger Pressing Task

    PubMed Central

    Koh, Kyung; Kwon, Hyun Joon; Park, Yang Sun; Kiemel, Tim; Miller, Ross H.; Kim, Yoon Hyuk; Shin, Joon-Ho; Shim, Jae Kun

    2016-01-01

    Humans detect changes in air pressure and understand their surroundings through the auditory system. The sound humans perceive is composed of two distinct physical properties, frequency and intensity. However, our knowledge of how the brain perceives and combines these two properties simultaneously (i.e., intra-auditory integration) is limited, especially in relation to motor behaviors. Here, we investigated the effect of intra-auditory integration between the frequency and intensity components of auditory feedback on motor outputs in a constant finger-force production task. The previously developed hierarchical variability decomposition model was used to decompose motor performance into mathematically independent components, each of which quantifies a distinct motor behavior such as consistency, repeatability, systematic error, within-trial synergy, or between-trial synergy. We hypothesized that feedback on two components of sound as a function of motor performance (frequency and intensity) would improve motor performance and multi-finger synergy compared to feedback on just one component (frequency or intensity). Subjects were instructed to match the reference force of 18 N with the sum of all finger forces (virtual finger or VF force) while listening to auditory feedback of their accuracy. Three experimental conditions were used: (i) condition F, where frequency changed; (ii) condition I, where intensity changed; and (iii) condition FI, where both frequency and intensity changed. Motor performance was enhanced for the FI condition as compared to either the F or I condition alone. The enhancement of motor performance was achieved mainly by improved consistency and repeatability, while the systematic error remained unchanged across conditions. Within- and between-trial synergies were also improved for the FI condition as compared to either the F or I condition alone. However, variability of individual finger forces for the FI condition was not significantly

  19. Intra-Auditory Integration Improves Motor Performance and Synergy in an Accurate Multi-Finger Pressing Task.

    PubMed

    Koh, Kyung; Kwon, Hyun Joon; Park, Yang Sun; Kiemel, Tim; Miller, Ross H; Kim, Yoon Hyuk; Shin, Joon-Ho; Shim, Jae Kun

    2016-01-01

    Humans detect changes in air pressure and understand their surroundings through the auditory system. The sound humans perceive is composed of two distinct physical properties, frequency and intensity. However, our knowledge of how the brain perceives and combines these two properties simultaneously (i.e., intra-auditory integration) is limited, especially in relation to motor behaviors. Here, we investigated the effect of intra-auditory integration between the frequency and intensity components of auditory feedback on motor outputs in a constant finger-force production task. The previously developed hierarchical variability decomposition model was used to decompose motor performance into mathematically independent components, each of which quantifies a distinct motor behavior such as consistency, repeatability, systematic error, within-trial synergy, or between-trial synergy. We hypothesized that feedback on two components of sound as a function of motor performance (frequency and intensity) would improve motor performance and multi-finger synergy compared to feedback on just one component (frequency or intensity). Subjects were instructed to match the reference force of 18 N with the sum of all finger forces (virtual finger or VF force) while listening to auditory feedback of their accuracy. Three experimental conditions were used: (i) condition F, where frequency changed; (ii) condition I, where intensity changed; and (iii) condition FI, where both frequency and intensity changed. Motor performance was enhanced for the FI condition as compared to either the F or I condition alone. The enhancement of motor performance was achieved mainly by improved consistency and repeatability, while the systematic error remained unchanged across conditions. Within- and between-trial synergies were also improved for the FI condition as compared to either the F or I condition alone. However, variability of individual finger forces for the FI condition was not significantly

  20. Numerical computation of unsteady laminar boundary layers with separation using two-parameter integral method

    NASA Astrophysics Data System (ADS)

    Akamatsu, T.; Matsushita, M.; Murata, S.

    1985-11-01

    A two-parameter integral method is presented which is applicable even to separated boundary layers. The governing equation system, which consists of three moment equations of the boundary layer equation, is shown to be classifiable as a quasi-linear hyperbolic system under the assumed velocity profile function. The governing system is numerically solved by a dissipative finite difference scheme in order to capture a discontinuous solution associated with the singularity of unsteady separation. The spontaneous generation of singularity associated with unsteady separation is confirmed as the focusing of characteristics. The starting flows of a circular and an elliptic cylinder are considered as definite examples. This method is found to give excellent results in comparison with exact methods, not only for practically important boundary layer quantities such as displacement thickness or skin friction coefficient, but also for generation of separation singularity.

  1. An integrated data-directed numerical method for estimating the undiscovered mineral endowment in a region

    USGS Publications Warehouse

    McCammon, R.B.; Finch, W.I.; Kork, J.O.; Bridges, N.J.

    1994-01-01

    An integrated data-directed numerical method has been developed to estimate the undiscovered mineral endowment within a given area. The method has been used to estimate the undiscovered uranium endowment in the San Juan Basin, New Mexico, U.S.A. The favorability of uranium concentration was evaluated in each of 2,068 cells defined within the Basin. Favorability was based on the correlated similarity of the geologic characteristics of each cell to the geologic characteristics of five area-related deposit models. Estimates of the undiscovered endowment for each cell were categorized according to deposit type, depth, and cutoff grade. The method can be applied to any mineral or energy commodity provided that the data collected reflect discovered endowment. © 1994 Oxford University Press.

  2. A prefiltering version of the Kalman filter with new numerical integration formulas for Riccati equations

    NASA Technical Reports Server (NTRS)

    Womble, M. E.; Potter, J. E.

    1975-01-01

    A prefiltering version of the Kalman filter is derived for both discrete and continuous measurements. The derivation consists of determining a single discrete measurement that is equivalent to either a time segment of continuous measurements or a set of discrete measurements. This prefiltering version of the Kalman filter easily handles numerical problems associated with rapid transients and ill-conditioned Riccati matrices. Therefore, the derived technique for extrapolating the Riccati matrix from one time to the next constitutes a new set of integration formulas which alleviate ill-conditioning problems associated with continuous Riccati equations. Furthermore, since a time segment of continuous measurements is converted into a single discrete measurement, Potter's square root formulas can be used to update the state estimate and its error covariance matrix. Therefore, if having the state estimate and its error covariance matrix at discrete times is acceptable, the prefilter extends square root filtering with all its advantages, to continuous measurement problems.
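Potter's square-root formulas mentioned above have a compact form for a single scalar measurement; the sketch below is the standard textbook formulation applied to illustrative numbers, not the authors' prefilter or their Riccati extrapolation:

```python
def potter_update(S, h, R):
    """Potter square-root update for a scalar measurement z = h^T x + v.

    S is any square root of the covariance (P = S S^T) and R is the
    measurement noise variance.  Returns the Kalman gain K and the
    updated root S_new, so the covariance is never formed explicitly,
    which avoids the ill-conditioning of the direct Riccati update."""
    n = len(S)
    a = [sum(S[i][k] * h[i] for i in range(n)) for k in range(n)]   # a = S^T h
    b = R + sum(ak * ak for ak in a)                                # innovation variance
    Sa = [sum(S[i][k] * a[k] for k in range(n)) for i in range(n)]  # S a = P h
    K = [sa / b for sa in Sa]                                       # Kalman gain
    gamma = 1.0 / (b + (R * b) ** 0.5)
    # S_new = S (I - gamma a a^T); one can verify S_new S_new^T = P - K b K^T
    S_new = [[S[i][k] - gamma * Sa[i] * a[k] for k in range(n)] for i in range(n)]
    return K, S_new

S = [[1.0, 0.0], [0.5, 2.0]]   # illustrative covariance root
h = [1.0, 0.0]                 # measure the first state component
R = 0.04
K, S_new = potter_update(S, h, R)
# reconstruct the updated covariance from its root for inspection
P_new = [[sum(S_new[i][k] * S_new[j][k] for k in range(2)) for j in range(2)]
         for i in range(2)]
print(K, P_new)
```

In the prefiltering scheme of the abstract, a time segment of continuous measurements is first collapsed into one equivalent discrete measurement, after which exactly this kind of discrete square-root update applies.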

  3. Inelastic, nonlinear analysis of stiffened shells of revolution by numerical integration

    NASA Technical Reports Server (NTRS)

    Levine, H. S.; Svalbonas, V.

    1974-01-01

    This paper describes the latest addition to the STARS system of computer programs, STARS-2P, for the plastic, large deflection analysis of axisymmetrically loaded shells of revolution. The STARS system uses a numerical integration scheme to solve the governing differential equations. Several unique features for shell of revolution programs that are included in the STARS-2P program are described. These include orthotropic nonlinear kinematic hardening theory, a variety of shell wall cross sections and discrete ring stiffeners, cyclic and nonproportional mechanical and thermal loading capability, the coupled axisymmetric large deflection elasto-plastic torsion problem, an extensive restart option, arbitrary branching capability, and the provision for the inelastic treatment of smeared stiffeners, isogrid, and waffle wall constructions. To affirm the validity of the results, comparisons with available theoretical and experimental data are presented.

  4. Balancing an accurate representation of the molecular surface in generalized born formalisms with integrator stability in molecular dynamics simulations.

    PubMed

    Chocholousová, Jana; Feig, Michael

    2006-04-30

    Different integrator time steps in NVT and NVE simulations of protein and nucleic acid systems are tested with the GBMV (Generalized Born using Molecular Volume) and GBSW (Generalized Born with simple SWitching) methods. Simulation stability and energy conservation are investigated in relation to the agreement with the Poisson theory. It is found that very close agreement between generalized Born methods and the Poisson theory, based on the commonly used sharp molecular surface definition, results in energy drift and simulation artifacts in molecular dynamics protocols with standard 2-fs time steps. New parameters are proposed for the GBMV method that maintain very good agreement with the Poisson theory while providing energy conservation and stable simulations at time steps of 1 to 1.5 fs. PMID:16518883

  5. Orbit determination based on meteor observations using numerical integration of equations of motion

    NASA Astrophysics Data System (ADS)

    Dmitriev, V.; Lupovka, V.; Gritsevich, M.

    2014-07-01

    We review the definitions and approaches to orbital-characteristics analysis applied to photographic or video ground-based observations of meteors. A number of camera networks dedicated to meteor registration have been established all over the world, including the USA, Canada, Central Europe, Australia, Spain, Finland and Poland. Many of these networks are currently operational. The meteor observations are conducted from different locations hosting the network stations. Each station is equipped with at least one camera for continuous monitoring of the firmament (except during possible weather restrictions). For registered multi-station meteors, it is possible to accurately determine the direction and absolute value of the meteor velocity and thus obtain the topocentric radiant. Based on the topocentric radiant, one further determines the heliocentric meteor orbit. We aim to reduce the total uncertainty of our orbit-determination technique, keeping it below the accuracy of the observations. Additional corrections for the zenith attraction are widely used and are implemented, for example, in [1]. We propose a technique for meteor-orbit determination with higher accuracy. We transform the topocentric radiant into the inertial (J2000) coordinate system using the model recommended by the IAU [2]. The main difference compared with existing orbit-determination techniques is the integration of the ordinary differential equations of motion instead of an additional correction to the visible velocity for zenith attraction. The attraction of the central body (the Sun), the perturbations by the Earth, the Moon and other planets of the Solar System, the Earth's flattening (important at the initial moment of integration, i.e. at the moment when a meteoroid enters the atmosphere), and atmospheric drag may optionally be included in the equations. In addition, reverse integration of the same equations can be performed to analyze the orbital evolution preceding the meteoroid's collision with the Earth. To demonstrate the developed
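The core of such a scheme, direct numerical integration of the equations of motion, can be sketched with a two-body Runge-Kutta integrator. Units are normalized and all perturbations (planets, flattening, drag) and frame transformations are omitted, so this is only a minimal stand-in for the full model described above:

```python
import math

MU = 1.0  # gravitational parameter of the central body (normalized units)

def accel(r):
    """Point-mass two-body acceleration; in a full model the planetary
    perturbations, Earth flattening and drag terms would be added here."""
    d3 = (r[0] * r[0] + r[1] * r[1]) ** 1.5
    return [-MU * r[0] / d3, -MU * r[1] / d3]

def rk4_step(r, v, h):
    """One classical 4th-order Runge-Kutta step for r' = v, v' = a(r)."""
    def add(a, b, s):                      # a + s*b, componentwise
        return [a[i] + s * b[i] for i in range(2)]
    k1r, k1v = v, accel(r)
    k2r, k2v = add(v, k1v, h / 2), accel(add(r, k1r, h / 2))
    k3r, k3v = add(v, k2v, h / 2), accel(add(r, k2r, h / 2))
    k4r, k4v = add(v, k3v, h), accel(add(r, k3r, h))
    r_new = [r[i] + h / 6 * (k1r[i] + 2 * k2r[i] + 2 * k3r[i] + k4r[i])
             for i in range(2)]
    v_new = [v[i] + h / 6 * (k1v[i] + 2 * k2v[i] + 2 * k3v[i] + k4v[i])
             for i in range(2)]
    return r_new, v_new

# Circular orbit of radius 1: after one period 2*pi the state should return.
r, v = [1.0, 0.0], [0.0, 1.0]
n = 2000
h = 2 * math.pi / n
for _ in range(n):
    r, v = rk4_step(r, v, h)
print(r)   # back near [1, 0]
```

Running the same integrator with a negative step size propagates the state backward in time, which is the "reverse integration" used to study the orbital evolution preceding the collision with the Earth.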

  6. A multiple hypotheses uncertainty analysis in hydrological modelling: about model structure, landscape parameterization, and numerical integration

    NASA Astrophysics Data System (ADS)

    Pilz, Tobias; Francke, Till; Bronstert, Axel

    2016-04-01

    Until today, a large number of competing computer models have been developed to understand hydrological processes and to simulate and predict the streamflow dynamics of rivers. This is primarily the result of the lack of a unified theory in catchment hydrology, due to insufficient process understanding and to uncertainties related to model development and application. The goal of this study is therefore to analyze the uncertainty structure of a process-based hydrological catchment model employing a multiple-hypotheses approach. The study focuses on three major problems that have received only little attention in previous investigations: first, to estimate the impact of model structural uncertainty by employing several alternative representations for each simulated process; second, to explore the influence of landscape discretization and parameterization from multiple datasets and user decisions; and third, to employ several numerical solvers for the integration of the governing ordinary differential equations and study their effect on simulation results. The generated ensemble of model hypotheses is then analyzed and the three sources of uncertainty are compared against each other. To ensure consistency and comparability, all model structures and numerical solvers are implemented within a single simulation environment. First results suggest that the selection of a sophisticated numerical solver for the differential equations positively affects simulation outcomes. However, some simple and easy-to-implement explicit methods already perform surprisingly well and need less computational effort than more advanced but time-consuming implicit techniques. There is general evidence that ambiguous and subjective user decisions form a major source of uncertainty and can greatly influence model development and application at all stages.
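The explicit-versus-implicit trade-off noted in the results can be reproduced on a one-line stiff test equation; this is an illustrative toy problem (fast linear relaxation toward a slow forcing), not the catchment model itself:

```python
import math

LAM = 1000.0  # stiffness: fast relaxation rate toward the slow forcing

def f(t, y):
    return LAM * (math.cos(t) - y)

def explicit_euler(y0, t_end, h):
    """Forward Euler: cheap per step, but stable only for h < 2/LAM."""
    t, y, steps = 0.0, y0, 0
    while t < t_end - 1e-12:
        y += h * f(t, y)
        t += h
        steps += 1
    return y, steps

def implicit_euler(y0, t_end, h):
    """Backward Euler: unconditionally stable; here the implicit
    equation is linear and solved in closed form."""
    t, y, steps = 0.0, y0, 0
    while t < t_end - 1e-12:
        t += h
        y = (y + h * LAM * math.cos(t)) / (1.0 + h * LAM)
        steps += 1
    return y, steps

# Both track the slow solution (~cos t), but the explicit method needs a
# step bounded by the fast time scale, i.e. many more steps.
ye, ne = explicit_euler(0.0, 1.0, 1.0e-3)   # stable: h = 1/LAM
yi, ni = implicit_euler(0.0, 1.0, 1.0e-2)   # 10x larger step, still stable
print(ye, yi, math.cos(1.0))
print(ne, "explicit steps vs", ni, "implicit steps")
```

Whether the many cheap explicit steps or the fewer expensive implicit ones win depends on the cost of each implicit solve, which is exactly the kind of trade-off the ensemble study above quantifies.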

  7. Toward Fast and Accurate Evaluation of Charge On-Site Energies and Transfer Integrals in Supramolecular Architectures Using Linear Constrained Density Functional Theory (CDFT)-Based Methods.

    PubMed

    Ratcliff, Laura E; Grisanti, Luca; Genovese, Luigi; Deutsch, Thierry; Neumann, Tobias; Danilov, Denis; Wenzel, Wolfgang; Beljonne, David; Cornil, Jérôme

    2015-05-12

    A fast and accurate scheme has been developed to evaluate two key molecular parameters (on-site energies and transfer integrals) that govern charge transport in organic supramolecular architecture devices. The scheme is based on a constrained density functional theory (CDFT) approach implemented in the linear-scaling BigDFT code that exploits a wavelet basis set. The method has been applied to model disordered structures generated by force-field simulations. The role of the environment on the transport parameters has been taken into account by building large clusters around the active molecules involved in the charge transfer. PMID:26574411

  8. Numerical optimization of integrating cavities for diffraction-limited millimeter-wave bolometer arrays.

    PubMed

    Glenn, Jason; Chattopadhyay, Goutam; Edgington, Samantha F; Lange, Andrew E; Bock, James J; Mauskopf, Philip D; Lee, Adrian T

    2002-01-01

    Far-infrared to millimeter-wave bolometers designed to make astronomical observations are typically encased in integrating cavities at the termination of feedhorns or Winston cones. This photometer combination maximizes absorption of radiation, enables the absorber area to be minimized, and controls the directivity of absorption, thereby reducing susceptibility to stray light. In the next decade, arrays of hundreds of silicon nitride micromesh bolometers with planar architectures will be used in ground-based, suborbital, and orbital platforms for astronomy. The optimization of integrating cavity designs is required for achieving the highest possible sensitivity for these arrays. We report numerical simulations of the electromagnetic fields in integrating cavities with an infinite plane-parallel geometry formed by a solid reflecting backshort and the back surface of a feedhorn array block. Performance of this architecture for the bolometer array camera (Bolocam) for cosmology at a frequency of 214 GHz is investigated. We explore the sensitivity of absorption efficiency to absorber impedance and backshort location and the magnitude of leakage from cavities. The simulations are compared with experimental data from a room-temperature scale model and with the performance of Bolocam at a temperature of 300 mK. The main results of the simulations for Bolocam-type cavities are that (1) monochromatic absorptions as high as 95% are achievable with <1% cross talk between neighboring cavities, (2) the optimum absorber impedances are 400 ohms/sq, but with a broad maximum from approximately 150 to approximately 700 ohms/sq, and (3) maximum absorption is achieved with absorber diameters > or = 1.5 lambda. Good general agreement between the simulations and the experiments was found. PMID:11900429

  9. Numerical Modeling of the Chilldown of Cryogenic Transfer Lines Using a Sinda/GFSSP Integrated Solver

    NASA Technical Reports Server (NTRS)

    LeClair, Andre

    2011-01-01

    An important first step in cryogenic propellant loading is the chilldown of transfer lines. During the chilldown of the transfer line, the flow is two-phase and unsteady, with solid to fluid heat transfer and therefore a coupled thermo-fluid analysis is necessary to model the system. This paper describes a numerical model of pipe chilldown that utilizes the Sinda/GFSSP Conjugate Integrator (SGCI). SGCI is a new analysis tool developed at NASA's Marshall Space Flight Center (MSFC). SGCI facilitates the solution of thermofluid problems in interconnected solid-fluid systems. The solid component of the system is modeled in MSC Patran and translated into an MSC Sinda thermal network model. The fluid component is modeled in GFSSP, the Generalized Fluid System Simulation Program. GFSSP is a general network flow solver developed at NASA/MSFC. GFSSP uses a finite-volume approach to model fluid systems that can include phase change, multiple species, fluid transients, and heat transfer to simple solid networks. SGCI combines the GFSSP Fortran code with the Sinda input file and compiles the integrated model. Sinda solves for the temperatures of the solid network, while GFSSP simultaneously solves the fluid network for pressure, temperature, and flow rate. The two networks are coupled by convection heat transfer from the solid wall to the cryogenic fluid. The model presented here is based on a series of experiments conducted in 1966 by the National Bureau of Standards (NBS). A vacuum-jacketed, 200 ft copper transfer line was chilled by liquid nitrogen and liquid hydrogen. The predictions of transient temperature profiles and chilldown time of the integrated Sinda/GFSSP model will be compared to the experimental measurements.

  10. Evidence that bisphenol A (BPA) can be accurately measured without contamination in human serum and urine, and that BPA causes numerous hazards from multiple routes of exposure

    PubMed Central

    vom Saal, Frederick S.; Welshons, Wade V.

    2016-01-01

    There is extensive evidence that bisphenol A (BPA) is related to a wide range of adverse health effects based on both human and experimental animal studies. However, a number of regulatory agencies have ignored all hazard findings. Reports of high levels of unconjugated (bioactive) serum BPA in dozens of human biomonitoring studies have also been rejected based on the prediction that the findings are due to assay contamination and that virtually all ingested BPA is rapidly converted to inactive metabolites. NIH and industry-sponsored round robin studies have demonstrated that serum BPA can be accurately assayed without contamination, while the FDA lab has acknowledged uncontrolled assay contamination. In reviewing the published BPA biomonitoring data, we find that assay contamination is, in fact, well controlled in most labs, and cannot be used as the basis for discounting evidence that significant and virtually continuous exposure to BPA must be occurring from multiple sources. PMID:25304273

  11. Evidence that bisphenol A (BPA) can be accurately measured without contamination in human serum and urine, and that BPA causes numerous hazards from multiple routes of exposure.

    PubMed

    vom Saal, Frederick S; Welshons, Wade V

    2014-12-01

    There is extensive evidence that bisphenol A (BPA) is related to a wide range of adverse health effects based on both human and experimental animal studies. However, a number of regulatory agencies have ignored all hazard findings. Reports of high levels of unconjugated (bioactive) serum BPA in dozens of human biomonitoring studies have also been rejected based on the prediction that the findings are due to assay contamination and that virtually all ingested BPA is rapidly converted to inactive metabolites. NIH and industry-sponsored round robin studies have demonstrated that serum BPA can be accurately assayed without contamination, while the FDA lab has acknowledged uncontrolled assay contamination. In reviewing the published BPA biomonitoring data, we find that assay contamination is, in fact, well controlled in most labs, and cannot be used as the basis for discounting evidence that significant and virtually continuous exposure to BPA must be occurring from multiple sources. PMID:25304273

  12. A computationally efficient and accurate numerical representation of thermodynamic properties of steam and water for computations of non-equilibrium condensing steam flow in steam turbines

    NASA Astrophysics Data System (ADS)

    Hrubý, Jan

    2012-04-01

    Mathematical modeling of the non-equilibrium condensing transonic steam flow in the complex 3D geometry of a steam turbine is a demanding problem both concerning the physical concepts and the required computational power. Available accurate formulations of steam properties IAPWS-95 and IAPWS-IF97 require much computation time. For this reason, the modelers often accept the unrealistic ideal-gas behavior. Here we present a computation scheme based on a piecewise, thermodynamically consistent representation of the IAPWS-95 formulation. Density and internal energy are chosen as independent variables to avoid variable transformations and iterations. In contrast to the previous Tabular Taylor Series Expansion Method, the pressure and temperature are continuous functions of the independent variables, which is a desirable property for the solution of the differential equations of the mass, energy, and momentum conservation for both phases.
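
    The advantage of a representation that stays continuous across cells can be seen with a toy equation of state in the same independent variables (density and internal energy). The ideal-gas relation below is only a placeholder for IAPWS-95, and bilinear interpolation is merely the simplest continuous piecewise scheme, not the paper's actual formulation:

```python
from bisect import bisect_right

def p_exact(rho, u, gamma=1.3):
    # Ideal-gas placeholder for a real equation of state: p = (gamma - 1) * rho * u
    return (gamma - 1.0) * rho * u

def build_table(rho_grid, u_grid):
    # Precompute pressure on a rectangular (rho, u) grid.
    return [[p_exact(r, u) for u in u_grid] for r in rho_grid]

def p_interp(rho, u, rho_grid, u_grid, table):
    """Bilinear interpolation in (rho, u): continuous across cell edges,
    unlike a truncated per-cell Taylor expansion."""
    i = min(max(bisect_right(rho_grid, rho) - 1, 0), len(rho_grid) - 2)
    j = min(max(bisect_right(u_grid, u) - 1, 0), len(u_grid) - 2)
    tr = (rho - rho_grid[i]) / (rho_grid[i + 1] - rho_grid[i])
    tu = (u - u_grid[j]) / (u_grid[j + 1] - u_grid[j])
    return ((1 - tr) * (1 - tu) * table[i][j] + tr * (1 - tu) * table[i + 1][j]
            + (1 - tr) * tu * table[i][j + 1] + tr * tu * table[i + 1][j + 1])

RHO = [1.0, 2.0, 4.0, 8.0, 16.0]   # kg/m^3 (illustrative grid)
U = [1e5, 2e5, 3e5, 4e5, 5e5]      # J/kg
TABLE = build_table(RHO, U)
```

    Because evaluation involves no iteration on (rho, u), lookup cost is constant per call, and values agree across shared cell edges, which matters when the interpolated pressure feeds a conservation-law solver.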

  13. Integrative structural annotation of de novo RNA-Seq provides an accurate reference gene set of the enormous genome of the onion (Allium cepa L.)

    PubMed Central

    Kim, Seungill; Kim, Myung-Shin; Kim, Yong-Min; Yeom, Seon-In; Cheong, Kyeongchae; Kim, Ki-Tae; Jeon, Jongbum; Kim, Sunggil; Kim, Do-Sun; Sohn, Seong-Han; Lee, Yong-Hwan; Choi, Doil

    2015-01-01

    The onion (Allium cepa L.) is one of the most widely cultivated and consumed vegetable crops in the world. Although a considerable amount of onion transcriptome data has been deposited into public databases, the sequences of the protein-coding genes are not accurate enough to be used, owing to non-coding sequences intermixed with the coding sequences. We generated a high-quality, annotated onion transcriptome from de novo sequence assembly and intensive structural annotation using the integrated structural gene annotation pipeline (ISGAP), which identified 54,165 protein-coding genes among 165,179 assembled transcripts totalling 203.0 Mb by eliminating the intron sequences. ISGAP performed reliable annotation, recognizing accurate gene structures based on reference proteins, and ab initio gene models of the assembled transcripts. Integrative functional annotation and gene-based SNP analysis revealed a whole biological repertoire of genes and transcriptomic variation in the onion. The method developed in this study provides a powerful tool for the construction of reference gene sets for organisms based solely on de novo transcriptome data. Furthermore, the reference genes and their variation described here for the onion represent essential tools for molecular breeding and gene cloning in Allium spp. PMID:25362073

  14. Method for more accurate transmittance measurements of low-angle scattering samples using an integrating sphere with an entry port beam diffuser

    SciTech Connect

    Nilsson, Annica M.; Jonsson, Andreas; Jonsson, Jacob C.; Roos, Arne

    2011-03-01

    For most integrating sphere measurements, the difference in light distribution between a specular reference beam and a diffused sample beam can result in significant errors. The problem becomes especially pronounced in integrating spheres that include a port for reflectance or diffuse transmittance measurements. The port is included in many standard spectrophotometers to facilitate a multipurpose instrument; however, absorption around the port edge can result in a detected signal that is too low. The absorption effect is especially apparent for low-angle scattering samples, because a significant portion of the light is scattered directly onto that edge. In this paper, a method for more accurate transmittance measurements of low-angle light-scattering samples is presented. The method uses a standard integrating sphere spectrophotometer, and the problem with increased absorption around the port edge is addressed by introducing a diffuser between the sample and the integrating sphere during both the reference and sample scans. This reduces the discrepancy between the two scans and spreads the scattered light over a greater portion of the sphere wall. The problem with multiple reflections between the sample and diffuser is successfully addressed using a correction factor. The method is tested for two patterned glass samples with low-angle scattering, and in both cases the transmittance accuracy is significantly improved.
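
    The inter-reflection correction can be motivated by a simplified two-surface model: light bouncing back and forth between sample and diffuser forms a geometric series, so a single closed-form factor restores the sample transmittance. The coefficients and function names below are invented for illustration; the paper's actual correction factor may differ:

```python
def measured_ratio(t_s, r_s, t_d, r_d, n_terms=50):
    """Sample-scan over reference-scan signal ratio in a toy model where light
    crosses the sample (transmittance t_s, reflectance r_s) and then a diffuser
    (t_d, r_d); multiple reflections between them add a geometric series."""
    sample_scan = sum(t_s * t_d * (r_s * r_d) ** k for k in range(n_terms))
    reference_scan = t_d  # diffuser alone in the reference scan
    return sample_scan / reference_scan

def corrected_transmittance(t_meas, r_s, r_d):
    # Closed form of the series: measured = t_s / (1 - r_s * r_d),
    # so multiplying by (1 - r_s * r_d) recovers the true transmittance.
    return t_meas * (1.0 - r_s * r_d)
```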

  15. Application of variational principles and adjoint integrating factors for constructing numerical GFD models

    NASA Astrophysics Data System (ADS)

    Penenko, Vladimir; Tsvetova, Elena; Penenko, Alexey

    2015-04-01

    The proposed method is considered on an example of hydrothermodynamics and atmospheric chemistry models [1,2]. In the development of the existing methods for constructing numerical schemes possessing the properties of total approximation for operators of multiscale process models, we have developed a new variational technique, which uses the concept of adjoint integrating factors. The technique is as follows. First, a basic functional of the variational principle (the integral identity that unites the model equations, initial and boundary conditions) is transformed using Lagrange's identity and the second Green's formula. As a result, the action of the operators of the main problem in the space of state functions is transferred to the adjoint operators defined in the space of sufficiently smooth adjoint functions. By the choice of adjoint functions, the order of the derivatives is reduced by one relative to the original equations. We obtain a set of new balance relationships that take into account the sources and boundary conditions. Next, we introduce the decomposition of the model domain into a set of finite volumes. For multi-dimensional non-stationary problems, this technique is applied in the framework of the variational principle and schemes of decomposition and splitting on the set of physical processes for each coordinate direction successively at each time step. For each direction within the finite volume, the analytical solutions of one-dimensional homogeneous adjoint equations are constructed. In this case, the solutions of adjoint equations serve as integrating factors. The results are the hybrid discrete-analytical schemes. They have the properties of stability, approximation and unconditional monotony for convection-diffusion operators. These schemes are discrete in time and analytic in the spatial variables. They are exact in case of piecewise-constant coefficients within the finite volume and along the coordinate lines of the grid area in each

  16. Integrating laboratory creep compaction data with numerical fault models: A Bayesian framework

    USGS Publications Warehouse

    Fitzenz, D.D.; Jalobeanu, A.; Hickman, S.H.

    2007-01-01

    We developed a robust Bayesian inversion scheme to plan and analyze laboratory creep compaction experiments. We chose a simple creep law that features the main parameters of interest when trying to identify rate-controlling mechanisms from experimental data. By integrating the chosen creep law or an approximation thereof, one can use all the data, either simultaneously or in overlapping subsets, thus making more complete use of the experimental data and propagating statistical variations in the data through to the final rate constants. Despite the nonlinearity of the problem, with this technique one can retrieve accurate estimates of both the stress exponent and the activation energy, even when the porosity time series data are noisy. Whereas adding observation points and/or experiments reduces the uncertainty on all parameters, enlarging the range of temperature or effective stress significantly reduces the covariance between stress exponent and activation energy. We apply this methodology to hydrothermal creep compaction data on quartz to obtain a quantitative, semiempirical law for fault zone compaction in the interseismic period. Incorporating this law into a simple direct rupture model, we find marginal distributions of the time to failure that are robust with respect to errors in the initial fault zone porosity. Copyright 2007 by the American Geophysical Union.
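
    The two rate constants of interest, the stress exponent n and the activation energy Q, enter a creep law of the generic form rate = A·sigma^n·exp(-Q/RT). As a hedged illustration of why wide stress and temperature ranges decorrelate the two parameters, the sketch below recovers n and Q from synthetic noisy data by ordinary log-linear least squares rather than the paper's full Bayesian machinery; all numerical values are invented:

```python
import math
import random

R_GAS = 8.314  # gas constant, J/(mol K)

def synthetic_rates(a, n, q, conditions, noise=0.05, seed=1):
    """Creep rates A*sigma^n*exp(-Q/RT) at (stress, temperature) points,
    perturbed by multiplicative Gaussian noise."""
    rng = random.Random(seed)
    return [a * s ** n * math.exp(-q / (R_GAS * t)) * (1.0 + noise * rng.gauss(0.0, 1.0))
            for s, t in conditions]

def _solve3(m, v):
    # Gaussian elimination with partial pivoting for a 3x3 system.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        v[i], v[p] = v[p], v[i]
        for r in range(i + 1, 3):
            f = m[r][i] / m[i][i]
            for c in range(i, 3):
                m[r][c] -= f * m[i][c]
            v[r] -= f * v[i]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (v[i] - sum(m[i][c] * x[c] for c in range(i + 1, 3))) / m[i][i]
    return x

def fit_creep_law(conditions, rates):
    """Least squares for ln(rate) = ln A + n*ln(sigma) - (Q/R)*(1/T)
    via the normal equations; returns (A, n, Q)."""
    rows = [[1.0, math.log(s), -1.0 / t] for s, t in conditions]
    ys = [math.log(r) for r in rates]
    ata = [[sum(row[i] * row[j] for row in rows) for j in range(3)] for i in range(3)]
    aty = [sum(row[i] * y for row, y in zip(rows, ys)) for i in range(3)]
    c = _solve3(ata, aty)
    return math.exp(c[0]), c[1], c[2] * R_GAS
```

    With stresses spanning a decade and temperatures spanning 600-900 K, the fit recovers n and Q well despite 5% noise; shrinking either range inflates the covariance between the two, as the abstract notes.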

  17. Integrating a Gravity Simulation and Groundwater Numerical Modeling on the Calibration of Specific Yield for Choshui Alluvial Fan

    NASA Astrophysics Data System (ADS)

    Hsu, C. Y.

    2014-12-01

    In Taiwan, groundwater resources play a vital role in regional supply management. Because groundwater resources have been used without proper management for decades, several kinds of natural hazards, such as land subsidence, have occurred. The Choshui alluvial fan is one of the hot spots in Taiwan. For sustainable management, accurate estimation of recharge is the most important information. The accuracy is highly related to the uncertainty of the specific yield (Sy). Moreover, because the value of Sy must be determined via a multi-well pumping test, the installation cost of the multi-well system limits the number of field tests. The resulting low spatial density of field tests for Sy makes the estimation of recharge highly uncertain. The proposed method combines MODFLOW with a numerical integration procedure that calculates the gravity variations. Heterogeneous parameters (Sy) can be assigned to MODFLOW cells. An inverse procedure is then applied to interpret and identify the Sy value around the gravity station. The proposed methodology is applied to the Choshui alluvial fan, one of the most important groundwater basins in Taiwan. Three gravity measurement stations, "GS01", "GS02" and "GS03", were established. The location of GS01 is in the neighborhood of a groundwater observation well where pumping test data are available. The Sy value estimated from the gravity measurements collected at GS01 compares favorably with that obtained from the traditional pumping test. The comparison verifies the correctness and accuracy of the proposed method. We then use the gravity measurements collected at GS02 and GS03 to estimate the Sy values in areas where no pumping test data exist. Using the estimated values obtained from gravity measurements, the spatial distribution of the specific yield for the aquifer can be further refined. The proposed method is a cost-saving and accurate alternative for the estimation of specific yield in
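
    The link between a gravity change and specific yield can be illustrated with the standard infinite-slab (Bouguer) approximation: a water-table rise Δh in an aquifer of specific yield Sy adds a water slab that changes surface gravity by 2πGρwSyΔh, roughly 41.9 microGal per metre of added water. This is only a back-of-envelope check, not the paper's method, which integrates gravity numerically over heterogeneous MODFLOW cells:

```python
import math

def specific_yield_from_gravity(delta_g_microgal, delta_h_m):
    """Invert the infinite-slab relation delta_g = 2*pi*G*rho_w*Sy*delta_h
    for the specific yield Sy (delta_g in microGal, delta_h in metres)."""
    g_const = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    rho_w = 1000.0             # density of water, kg m^-3
    slab = 2.0 * math.pi * g_const * rho_w   # (m s^-2) per metre of water
    per_metre_microgal = slab / 1e-8         # 1 microGal = 1e-8 m s^-2
    return delta_g_microgal / (per_metre_microgal * delta_h_m)
```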

  18. Integration of Genetic and Phenotypic Data in 48 Lineages of Philippine Birds Shows Heterogeneous Divergence Processes and Numerous Cryptic Species

    PubMed Central

    Campbell, Kyle K.; Braile, Thomas

    2016-01-01

    The Philippine Islands are one of the most biologically diverse archipelagoes in the world. Current taxonomy, however, may underestimate levels of avian diversity and endemism in these islands. Although species limits can be difficult to determine among allopatric populations, quantitative methods for comparing phenotypic and genotypic data can provide useful metrics of divergence among populations and identify those that merit consideration for elevation to full species status. Using a conceptual approach that integrates genetic and phenotypic data, we compared populations among 48 species, estimating genetic divergence (p-distance) using the mtDNA marker ND2 and comparing plumage and morphometrics of museum study skins. Using conservative speciation thresholds, pairwise comparisons of genetic and phenotypic divergence suggested possible species-level divergences in more than half of the species studied (25 out of 48). In speciation process space, divergence routes were heterogeneous among taxa. Nearly all populations that surpassed high genotypic divergence thresholds were Passeriformes, and non-Passeriformes populations surpassed high phenotypic divergence thresholds more commonly than expected by chance. Overall, there was an apparent logarithmic increase in phenotypic divergence with respect to genetic divergence, suggesting the possibility that divergence among these lineages may initially be driven by divergent selection in this allopatric system. Also, genetic endemism was high among sampled islands. Higher taxonomy affected divergence in genotype and phenotype. Although broader lineage, genetic, phenotypic, and numeric sampling is needed to further explore heterogeneity among divergence processes and to accurately assess species-level diversity in these taxa, our results support the need for substantial taxonomic revisions among Philippine birds. The conservation implications are profound. PMID:27442510
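
    The genetic divergence metric used here, the uncorrected p-distance, is simply the proportion of aligned sites that differ between two sequences. A minimal sketch (toy sequences, not real ND2 data):

```python
def p_distance(seq1, seq2):
    """Uncorrected p-distance between two aligned sequences: the fraction of
    comparable sites that differ. Gaps ('-') and ambiguous bases ('N') are skipped."""
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a not in "-N" and b not in "-N"]
    if not pairs:
        raise ValueError("no comparable sites")
    return sum(a != b for a, b in pairs) / len(pairs)
```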

  19. Integration of Genetic and Phenotypic Data in 48 Lineages of Philippine Birds Shows Heterogeneous Divergence Processes and Numerous Cryptic Species.

    PubMed

    Campbell, Kyle K; Braile, Thomas; Winker, Kevin

    2016-01-01

    The Philippine Islands are one of the most biologically diverse archipelagoes in the world. Current taxonomy, however, may underestimate levels of avian diversity and endemism in these islands. Although species limits can be difficult to determine among allopatric populations, quantitative methods for comparing phenotypic and genotypic data can provide useful metrics of divergence among populations and identify those that merit consideration for elevation to full species status. Using a conceptual approach that integrates genetic and phenotypic data, we compared populations among 48 species, estimating genetic divergence (p-distance) using the mtDNA marker ND2 and comparing plumage and morphometrics of museum study skins. Using conservative speciation thresholds, pairwise comparisons of genetic and phenotypic divergence suggested possible species-level divergences in more than half of the species studied (25 out of 48). In speciation process space, divergence routes were heterogeneous among taxa. Nearly all populations that surpassed high genotypic divergence thresholds were Passeriformes, and non-Passeriformes populations surpassed high phenotypic divergence thresholds more commonly than expected by chance. Overall, there was an apparent logarithmic increase in phenotypic divergence with respect to genetic divergence, suggesting the possibility that divergence among these lineages may initially be driven by divergent selection in this allopatric system. Also, genetic endemism was high among sampled islands. Higher taxonomy affected divergence in genotype and phenotype. Although broader lineage, genetic, phenotypic, and numeric sampling is needed to further explore heterogeneity among divergence processes and to accurately assess species-level diversity in these taxa, our results support the need for substantial taxonomic revisions among Philippine birds. The conservation implications are profound. PMID:27442510

  20. Science-Based Approach for Advancing Marine and Hydrokinetic Energy: Integrating Numerical Simulations with Experiments

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, F.; Kang, S.; Chamorro, L. P.; Hill, C.

    2011-12-01

    The field of MHK energy is still in its infancy, lagging approximately a decade or more behind the technology and development progress made in wind energy engineering. Marine environments are characterized by complex topography and three-dimensional (3D) turbulent flows, which can greatly affect the performance and structural integrity of MHK devices and impact the Levelized Cost of Energy (LCoE). Since the deployment of multi-turbine arrays is envisioned for field applications, turbine-to-turbine interactions and turbine-bathymetry interactions need to be understood and properly modeled so that MHK arrays can be optimized on a site-specific basis. Furthermore, turbulence induced by MHK turbines alters and interacts with the nearby ecosystem and could potentially impact aquatic habitats. Increased turbulence in the wake of MHK devices can also change the shear stress imposed on the bed, ultimately affecting the sediment transport and suspension processes in the wake of these structures. Such effects, however, remain today largely unexplored. In this work a science-based approach integrating state-of-the-art experimentation with high-resolution computational fluid dynamics is proposed as a powerful strategy for optimizing the performance of MHK devices and assessing environmental impacts. A novel numerical framework is developed for carrying out Large-Eddy Simulation (LES) in arbitrarily complex domains with embedded MHK devices. The model is able to resolve the geometrical complexity of real-life MHK devices using the Curvilinear Immersed Boundary (CURVIB) method along with a wall model for handling the flow near solid surfaces. Calculations are carried out for an axial flow hydrokinetic turbine mounted on the bed of a rectangular open channel on a grid with nearly 200 million grid nodes. The approach flow corresponds to fully developed turbulent open channel flow and is obtained from a separate LES calculation. The specific case corresponds to that studied

  1. Numerical Simulation of Natural Convection of a Nanofluid in an Inclined Heated Enclosure Using Two-Phase Lattice Boltzmann Method: Accurate Effects of Thermophoresis and Brownian Forces

    NASA Astrophysics Data System (ADS)

    Ahmed, Mahmoud; Eslamian, Morteza

    2015-07-01

    Laminar natural convection in differentially heated ( β = 0°, where β is the inclination angle), inclined ( β = 30° and 60°), and bottom-heated ( β = 90°) square enclosures filled with a nanofluid is investigated, using a two-phase lattice Boltzmann simulation approach. The effects of the inclination angle on Nu number and convection heat transfer coefficient are studied. The effects of thermophoresis and Brownian forces which create a relative drift or slip velocity between the particles and the base fluid are included in the simulation. The effect of thermophoresis is considered using an accurate and quantitative formula proposed by the authors. Some of the existing results on natural convection are erroneous due to using wrong thermophoresis models or simply ignoring the effect. Here we show that thermophoresis has a considerable effect on heat transfer augmentation in laminar natural convection. Our non-homogenous modeling approach shows that heat transfer in nanofluids is a function of the inclination angle and Ra number. It also reveals some details of flow behavior which cannot be captured by single-phase models. The minimum heat transfer rate is associated with β = 90° (bottom-heated) and the maximum heat transfer rate occurs in an inclination angle which varies with the Ra number.

  2. Numerical Simulation of Natural Convection of a Nanofluid in an Inclined Heated Enclosure Using Two-Phase Lattice Boltzmann Method: Accurate Effects of Thermophoresis and Brownian Forces.

    PubMed

    Ahmed, Mahmoud; Eslamian, Morteza

    2015-12-01

    Laminar natural convection in differentially heated (β = 0°, where β is the inclination angle), inclined (β = 30° and 60°), and bottom-heated (β = 90°) square enclosures filled with a nanofluid is investigated, using a two-phase lattice Boltzmann simulation approach. The effects of the inclination angle on Nu number and convection heat transfer coefficient are studied. The effects of thermophoresis and Brownian forces which create a relative drift or slip velocity between the particles and the base fluid are included in the simulation. The effect of thermophoresis is considered using an accurate and quantitative formula proposed by the authors. Some of the existing results on natural convection are erroneous due to using wrong thermophoresis models or simply ignoring the effect. Here we show that thermophoresis has a considerable effect on heat transfer augmentation in laminar natural convection. Our non-homogenous modeling approach shows that heat transfer in nanofluids is a function of the inclination angle and Ra number. It also reveals some details of flow behavior which cannot be captured by single-phase models. The minimum heat transfer rate is associated with β = 90° (bottom-heated) and the maximum heat transfer rate occurs in an inclination angle which varies with the Ra number. PMID:26183389

  3. Numerical Modeling of 3-D Dynamics of Ultrasound Contrast Agent Microbubbles Using the Boundary Integral Method

    NASA Astrophysics Data System (ADS)

    Calvisi, Michael; Manmi, Kawa; Wang, Qianxi

    2014-11-01

    Ultrasound contrast agents (UCAs) are microbubbles stabilized with a shell typically of lipid, polymer, or protein and are emerging as a unique tool for noninvasive therapies ranging from gene delivery to tumor ablation. The nonspherical dynamics of contrast agents are thought to play an important role in both diagnostic and therapeutic applications, for example, causing the emission of subharmonic frequency components and enhancing the uptake of therapeutic agents across cell membranes and tissue interfaces. A three-dimensional model for nonspherical contrast agent dynamics based on the boundary integral method is presented. The effects of the encapsulating shell are approximated by adapting Hoff's model for thin-shell, spherical contrast agents to the nonspherical case. A high-quality mesh of the bubble surface is maintained by implementing a hybrid approach of the Lagrangian method and elastic mesh technique. Numerical analyses for the dynamics of UCAs in an infinite liquid and near a rigid wall are performed in parameter regimes of clinical relevance. The results show that the presence of a coating significantly reduces the oscillation amplitude and period, increases the ultrasound pressure amplitude required to incite jetting, and reduces the jet width and velocity.

  4. Predicting geomorphic evolution through integration of numerical-model scenarios and topographic/bathymetric-survey updates

    NASA Astrophysics Data System (ADS)

    Plant, N. G.; Long, J.; Dalyander, S.; Thompson, D.; Miselis, J. L.

    2013-12-01

    Natural resource and hazard management of barrier islands requires an understanding of geomorphic changes associated with long-term processes and storms. Uncertainty exists in understanding how long-term processes interact with the geomorphic changes caused by storms and the resulting perturbations of the long-term evolution trajectories. We use high-resolution data sets to initialize and correct high-fidelity numerical simulations of oceanographic forcing and resulting barrier island evolution. We simulate two years of observed storms to determine the individual and cumulative impacts of these events. Results are separated into cross-shore and alongshore components of sediment transport and compared with observed topographic and bathymetric changes during these time periods. The discrete island change induced by these storms is integrated with previous knowledge of long-term net alongshore sediment transport to project island evolution. The approach has been developed and tested using data collected at the Chandeleur Island chain off the coast of Louisiana (USA). The simulation time period included impacts from tropical and winter storms, as well as a human-induced perturbation associated with construction of a sand berm along the island shoreline. The predictions and observations indicated that storm and long-term processes both contribute to the migration, lowering, and disintegration of the artificial berm and natural island. Further analysis will determine the relative importance of cross-shore and alongshore sediment transport processes and the dominant time scales that drive each of these processes and subsequent island morphologic response.

  5. Morphology and dynamics of piercement structures: an integrated laboratory and numerical study

    NASA Astrophysics Data System (ADS)

    Galland, Olivier; Gisler, Galen R.; Haug, Øystein T.

    2013-04-01

    Piercement structures are numerous in many geological settings, including pockmarks, mud volcanoes, hydrothermal vents, maar-diatreme volcanoes, volcanic conduits in stratovolcanoes, and kimberlite volcanoes. These piercement structures exhibit various shapes, from sub-vertical pipes piercing through the country rock to open and wide conduits, such as volcanic craters resulting from volcanic explosions (e.g., Mount Pinatubo). In this contribution, we present an integrated laboratory/numerical study to constrain the dynamics of piercement structures and unravel the processes that control their morphology. The laboratory experiments consist of a Hele-Shaw cell filled with a pack of cohesive fine-grained granular material, at the bottom of which a volume Vt of pressurized air is injected at high velocity. As a result of air injection, a piercement structure develops through the medium, and its morphology and evolution are monitored with an ultra-fast camera. We systematically varied the thickness of the model h and the injection pressure P, and show that two morphologies of piercement structures develop: vertical and V-shaped conduits. In a phase diagram with h and P as horizontal and vertical axes, respectively, the two morphologies group into two distinct domains separated by a transition line of critical slope P-h. This phase diagram shows that vertical conduits form for high P/low h, whereas V-shaped conduits form for low P/high h. 2D numerical simulations are performed using Sage, a finite volume hydrocode developed at the Los Alamos National Laboratory. We ran simulations and systematically varied the input pressure P and the strength of the country rock T. Our simulations produced three types of piercement structures: vertical, sub-horizontal and V-shaped conduits. In a phase diagram with T and P as horizontal and vertical axes, respectively, the three morphologies group into distinct domains separated by transition lines of critical slopes P-T. 
Vertical

  6. Numerical methods for the simulation of complex multi-body flows with applications for the integrated Space Shuttle vehicle

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    1992-01-01

    The following papers are presented: (1) numerical methods for the simulation of complex multi-body flows with applications for the Integrated Space Shuttle vehicle; (2) a generalized scheme for 3-D hyperbolic grid generation; (3) collar grids for intersecting geometric components within the Chimera overlapped grid scheme; and (4) application of the Chimera overlapped grid scheme to simulation of Space Shuttle ascent flows.

  7. New methods for the numerical integration of ordinary differential equations and their application to the equations of motion of spacecraft

    NASA Technical Reports Server (NTRS)

    Banyukevich, A.; Ziolkovski, K.

    1975-01-01

    A number of hybrid methods for solving Cauchy problems are described on the basis of an evaluation of advantages of single and multiple-point numerical integration methods. The selection criterion is the principle of minimizing computer time. The methods discussed include the Nordsieck method, the Bulirsch-Stoer extrapolation method, and the method of recursive Taylor-Steffensen power series.
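
    One idea behind the Bulirsch-Stoer extrapolation method mentioned above is that Gragg's modified midpoint rule has an error series in even powers of the substep size, so results at two substep counts can be combined to cancel the leading error term. A minimal sketch (scalar ODEs only, a single extrapolation stage):

```python
import math

def modified_midpoint(f, t0, y0, big_h, n):
    """Gragg's modified midpoint rule over one macro-step of size big_h with
    n substeps; its error expands in even powers of the substep size."""
    h = big_h / n
    z_prev, z_curr = y0, y0 + h * f(t0, y0)
    for k in range(1, n):
        z_prev, z_curr = z_curr, z_prev + 2.0 * h * f(t0 + k * h, z_curr)
    return 0.5 * (z_prev + z_curr + h * f(t0 + big_h, z_curr))

def extrapolated_step(f, t0, y0, big_h):
    # Richardson extrapolation with n = 2 and n = 4: since the error series is
    # in h^2, the combination b + (b - a)/3 cancels the leading term.
    a = modified_midpoint(f, t0, y0, big_h, 2)
    b = modified_midpoint(f, t0, y0, big_h, 4)
    return b + (b - a) / 3.0
```

    For y' = y over one unit step, the extrapolated value is noticeably closer to e than either midpoint result alone; production Bulirsch-Stoer codes iterate this idea over a whole sequence of substep counts.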

  8. Sull'Integrazione delle Strutture Numeriche nella Scuola dell'Obbligo (Integrating Numerical Structures in Mandatory School).

    ERIC Educational Resources Information Center

    Bonotto, C.

    1995-01-01

    Attempted to verify knowledge regarding decimal and rational numbers in children ages 10-14. Discusses how pupils can receive and assimilate extensions of the number system from natural numbers to decimals and fractions and later can integrate this extension into a single and coherent numerical structure. (Author/MKR)

  9. Improving the numerical integration solution of satellite orbits in the presence of solar radiation pressure using modified back differences

    NASA Technical Reports Server (NTRS)

    Lundberg, J. B.; Feulner, M. R.; Abusali, P. A. M.; Ho, C. S.

    1991-01-01

    The method of modified back differences, a technique that significantly reduces the numerical integration errors associated with crossing shadow boundaries using a fixed-mesh multistep integrator without a significant increase in computer run time, is presented. While Hubbard's integral approach can produce significant improvements to the trajectory solution, the interpolation method provides the best overall results. It is demonstrated that iterating on the point mass term correction is also important for achieving the best overall results. It is also shown that the method of modified back differences can be implemented with only a small increase in execution time.
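
    The benefit of treating the shadow crossing explicitly can be seen in a drastically simplified quadrature analogy (not the paper's modified-back-differences scheme): integrating a force that switches off at time ts with a fixed step either ignores the discontinuity or splits the affected step at the boundary, in the spirit of the interpolation method:

```python
def integrate_accel(ts, h=0.1, t_end=1.0, split=False):
    """Trapezoid-rule integral of a(t) = 1 for t < ts, 0 afterwards, over
    [0, t_end] (exact value: ts). With split=True the step containing the
    discontinuity is divided at ts, mimicking an interpolation-based
    shadow-boundary treatment."""
    def a(t):
        return 1.0 if t < ts else 0.0
    total, t = 0.0, 0.0
    while t < t_end - 1e-12:
        t1 = min(t + h, t_end)
        if split and t < ts < t1:
            total += 0.5 * (a(t) + 1.0) * (ts - t)  # up to the boundary
            # the contribution after the boundary is zero, since a(t) = 0 there
        else:
            total += 0.5 * (a(t) + a(t1)) * (t1 - t)
        t = t1
    return total
```

    The naive fixed-step result carries an O(h) error from the step straddling the boundary, while the split version is exact for this piecewise-constant forcing.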

  10. Building a Framework Earthquake Cycle Deformational Model for Subduction Megathrust Zones: Integrating Observations with Numerical Models

    NASA Astrophysics Data System (ADS)

    Furlong, Kevin P.; Govers, Rob; Herman, Matthew

    2016-04-01

    last for decades after a major event (e.g., Alaska 1964). We have integrated the observed patterns of upper-plate displacements (and deformation) with models of subduction zone evolution that allow us to incorporate both the transient behavior associated with post-earthquake viscous re-equilibration and the underlying long term, relatively constant elastic strain accumulation. Modeling the earthquake cycle through the use of a visco-elastic numerical model over numerous earthquake cycles, we have developed a framework model for the megathrust cycle that is constrained by observations made at a variety of plate boundary zones at different stages in their earthquake cycle (see paper by Govers et al., this meeting). Our results indicate that the observed patterns of co-, post- and inter-seismic deformation are largely controlled by interplay between elastic and viscous processes. Observed displacements represent the competition between steady elastic-strain accumulation driven by plate boundary coupling, and post-earthquake viscous behavior in response to the coseismic loading of the system by the rapid elastic rebound. The application of this framework model to observations from subduction zone observatories points up the dangers of simply extrapolating current deformation observations to the overall strain accumulation state of the subduction zone, and allows us to develop improved assessments of the slip deficit accumulating within the seismogenic zone, and the near-future earthquake potential of different segments of the subduction plate boundary.

  11. Numerical evaluation of multi-loop integrals for arbitrary kinematics with SecDec 2.0

    NASA Astrophysics Data System (ADS)

    Borowka, Sophia; Carter, Jonathon; Heinrich, Gudrun

    2013-02-01

    We present the program SecDec 2.0, which contains various new features. First, it allows the numerical evaluation of multi-loop integrals with no restriction on the kinematics. Dimensionally regulated ultraviolet and infrared singularities are isolated via sector decomposition, while threshold singularities are handled by a deformation of the integration contour in the complex plane. As an application, we present numerical results for various massive two-loop four-point diagrams. SecDec 2.0 also contains new useful features for the calculation of more general parameter integrals, related for example to phase space integrals.
    Program summary:
    Program title: SecDec 2.0
    Catalogue identifier: AEIR_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIR_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 156829
    No. of bytes in distributed program, including test data, etc.: 2137907
    Distribution format: tar.gz
    Programming language: Wolfram Mathematica, Perl, Fortran/C++.
    Computer: From a single PC to a cluster, depending on the problem.
    Operating system: Unix, Linux.
    RAM: Depending on the complexity of the problem.
    Classification: 4.4, 5, 11.1.
    Catalogue identifier of previous version: AEIR_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 1566
    Does the new version supersede the previous version?: Yes
    Nature of problem: Extraction of ultraviolet and infrared singularities from parametric integrals appearing in higher order perturbative calculations in gauge theories. Numerical integration in the presence of integrable singularities (e.g., kinematic thresholds).
    Solution method: Algebraic extraction of singularities in dimensional regularization using iterated sector decomposition. This leads to a Laurent series in the dimensional regularization

  12. Integrated field and numerical modeling investigation of crustal flow mechanisms and trajectories in migmatite domes

    NASA Astrophysics Data System (ADS)

    Whitney, Donna; Teyssier, Christian; Rey, Patrice

    2016-04-01

    Integrated field-based and modeling studies provide information about the driving mechanisms and internal dynamics of migmatite domes, which are important structures for understanding the rheology of the lithosphere in orogens. Dome-forming processes range from extension-driven (isostatic) flow to density-driven (buoyant) systems. Vertical flow (up or down) is on the scale of tens of km. End-member buoyancy-driven domes are typically Archean (e.g., Pilbara, Australia). Extension-driven systems include the migmatite domes in metamorphic core complexes of the northern North American Cordillera, as well as some domes in Variscan core complexes. The Entia dome of central Australia is a possible hybrid dome in which extension and density inversion were both involved in dome formation. The Entia is a "double dome", composed of a steep high-strain zone bordered by high melt-fraction migmatite (subdomes). Field and numerical modeling studies show that these are characteristics of extension-driven domes, which form when flowing deep crust ascends beneath normal faults in the upper crust. Entia dome migmatite shows abundant evidence for extension, in addition to sequences of cascading, cuspate folds (well displayed in amphibolite) that are not present in the carapace of the dome, that do not have a consistent axial planar fabric, and that developed primarily at subsolidus conditions. We propose that these folds developed in mafic layers that had a density contrast with the granodioritic migmatite, and that they formed during sinking of a denser layer above the rising migmatite subdomes. Extension-driven flow of partially molten (granodioritic) crust was therefore accompanied by sinking of a dense, mafic, mid-crustal layer, resulting in complex P-T-d paths for different lithologic units within the dome.
This scenario is consistent with field and 2D modeling results, which together show how a combination of structural geology, metamorphic petrology, and modeling can illuminate the

  13. SINDA'85/FLUINT - SYSTEMS IMPROVED NUMERICAL DIFFERENCING ANALYZER AND FLUID INTEGRATOR (CONVEX VERSION)

    NASA Technical Reports Server (NTRS)

    Cullimore, B.

    1994-01-01

    SINDA, the Systems Improved Numerical Differencing Analyzer, is a software system for solving lumped parameter representations of physical problems governed by diffusion-type equations. SINDA was originally designed for analyzing thermal systems represented in electrical analog, lumped parameter form, although its use may be extended to other classes of physical systems which can be modeled in this form. As a thermal analyzer, SINDA can handle such interrelated phenomena as sublimation, diffuse radiation within enclosures, transport delay effects, and sensitivity analysis. FLUINT, the FLUid INTegrator, is an advanced one-dimensional fluid analysis program that solves arbitrary fluid flow networks. The working fluids can be single-phase vapor, single-phase liquid, or two-phase. The SINDA'85/FLUINT system permits the mutual influences of thermal and fluid problems to be analyzed. The SINDA system consists of a programming language, a preprocessor, and a subroutine library. The SINDA language is designed for working with lumped parameter representations and finite difference solution techniques. The preprocessor accepts programs written in the SINDA language and converts them into standard FORTRAN. The SINDA library consists of a large number of FORTRAN subroutines that perform a variety of commonly needed actions. The use of these subroutines can greatly reduce the programming effort required to solve many problems. A complete run of a SINDA'85/FLUINT model is a four-step process. First, the user's desired model is run through the preprocessor, which writes out data files for the processor to read and translates the user's program code. Second, the translated code is compiled. The third step requires linking the user's code with the processor library. Finally, the processor is executed. SINDA'85/FLUINT program features include 20,000 nodes, 100,000 conductors, 100 thermal submodels, and 10 fluid submodels. SINDA'85/FLUINT can also model two-phase flow
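
    The lumped-parameter idea behind SINDA-style thermal networks can be sketched in a few lines: nodes carry heat capacitance, conductors exchange heat between nodes, and an explicit finite-difference step advances the node temperatures. The network below (two interior nodes between fixed boundary temperatures, all values invented for illustration) is a minimal sketch, not SINDA itself.

```python
import numpy as np

# Minimal lumped-parameter thermal network in the spirit of a SINDA model:
# two diffusion nodes between fixed boundary temperatures, linked by three
# equal conductors. All values are invented for illustration.
T_left, T_right = 100.0, 0.0   # fixed boundary node temperatures
G = 1.0                         # conductance of each conductor
C = 1.0                         # heat capacitance of each diffusion node
T = np.array([50.0, 50.0])      # initial interior node temperatures

dt = 0.1                        # explicit step, well below the stability limit
for _ in range(500):
    q1 = G * (T_left - T[0]) + G * (T[1] - T[0])   # net heat flow into node 1
    q2 = G * (T[0] - T[1]) + G * (T_right - T[1])  # net heat flow into node 2
    T = T + dt * np.array([q1, q2]) / C
```

    For three equal conductors in series, the steady state is a linear temperature drop, so the two node temperatures approach 200/3 and 100/3.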

  14. Brain Structural Integrity and Intrinsic Functional Connectivity Forecast 6 Year Longitudinal Growth in Children's Numerical Abilities

    PubMed Central

    Kochalka, John; Ngoon, Tricia J.; Wu, Sarah S.; Qin, Shaozheng; Battista, Christian

    2015-01-01

    Early numerical proficiency lays the foundation for acquiring quantitative skills essential in today's technological society. Identification of cognitive and brain markers associated with long-term growth of children's basic numerical computation abilities is therefore of utmost importance. Previous attempts to relate brain structure and function to numerical competency have focused on behavioral measures from a single time point. Thus, little is known about the brain predictors of individual differences in growth trajectories of numerical abilities. Using a longitudinal design, with multimodal imaging and machine-learning algorithms, we investigated whether brain structure and intrinsic connectivity in early childhood are predictive of 6 year outcomes in numerical abilities spanning childhood and adolescence. Gray matter volume at age 8 in distributed brain regions, including the ventrotemporal occipital cortex (VTOC), the posterior parietal cortex, and the prefrontal cortex, predicted longitudinal gains in numerical, but not reading, abilities. Remarkably, intrinsic connectivity analysis revealed that the strength of functional coupling among these regions also predicted gains in numerical abilities, providing novel evidence for a network of brain regions that works in concert to promote numerical skill acquisition. VTOC connectivity with posterior parietal, anterior temporal, and dorsolateral prefrontal cortices emerged as the most extensive network predicting individual gains in numerical abilities. Crucially, behavioral measures of mathematics, IQ, working memory, and reading did not predict children's gains in numerical abilities. Our study identifies, for the first time, functional circuits in the human brain that scaffold the development of numerical skills, and highlights potential biomarkers for identifying children at risk for learning difficulties. 
SIGNIFICANCE STATEMENT Children show substantial individual differences in math abilities and ease of math

  15. The determination of molecular structures by density functional theory. The evaluation of analytical energy gradients by numerical integration

    NASA Astrophysics Data System (ADS)

    Versluis, Louis; Ziegler, Tom

    1988-01-01

    An algorithm, based on numerical integration, has been proposed for the evaluation of analytical energy gradients within the Hartree-Fock-Slater (HFS) method. The utility of this algorithm in connection with molecular structure optimization is demonstrated by calculations on organic molecules, main-group molecules, and transition metal complexes. The structural parameters obtained from HFS calculations are in at least as good agreement with experiment as structures obtained from ab initio HF calculations. The time required to evaluate the energy gradient by numerical integration constitutes only a fraction (25%-40%) of the elapsed time of a full HFS-SCF calculation. The algorithm is also suitable for density functional methods with an exchange-correlation potential different from that employed in the HFS method.

  16. On testing a subroutine for the numerical integration of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Krogh, F. T.

    1973-01-01

    This paper discusses how to numerically test a subroutine for the solution of ordinary differential equations. Results obtained with a variable-order Adams method are given for eleven simple test cases.
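
    One common way to test an ODE subroutine on such simple cases is to integrate a problem with a known solution and check the observed order of convergence. The sketch below does this for a fixed-step second-order Adams-Bashforth method, a simpler relative of the variable-order Adams code tested in the paper; the test problem and tolerances are illustrative.

```python
import math

def adams_bashforth2(f, y0, t0, t1, n):
    """Fixed-step 2nd-order Adams-Bashforth; one Heun (RK2) step bootstraps it."""
    h = (t1 - t0) / n
    f0 = f(t0, y0)
    y1 = y0 + h / 2 * (f0 + f(t0 + h, y0 + h * f0))   # RK2 starting step
    ys, fs = [y0, y1], [f0, f(t0 + h, y1)]
    for i in range(1, n):
        y_next = ys[-1] + h * (1.5 * fs[-1] - 0.5 * fs[-2])
        ys.append(y_next)
        fs.append(f(t0 + (i + 1) * h, y_next))
    return ys[-1]

# Test problem y' = -y, y(0) = 1, exact solution e^{-t}
f = lambda t, y: -y
err_h  = abs(adams_bashforth2(f, 1.0, 0.0, 1.0, 100) - math.exp(-1))
err_h2 = abs(adams_bashforth2(f, 1.0, 0.0, 1.0, 200) - math.exp(-1))
print(err_h / err_h2)   # close to 4, confirming second-order convergence
```

    Halving the step size should reduce the error by roughly a factor of four; a test harness can assert exactly that, independent of the problem chosen.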

  17. Assessing the bio-mitigation effect of integrated multi-trophic aquaculture on marine environment by a numerical approach.

    PubMed

    Zhang, Junbo; Kitazawa, Daisuke

    2016-09-15

    With increasing concern over the aquatic environment in marine culture, integrated multi-trophic aquaculture (IMTA) has received extensive attention in recent years. A three-dimensional numerical ocean model is developed to explore the negative impacts of aquaculture wastes and to assess the bio-mitigation effect of IMTA systems on marine environments. Numerical results showed that the concentration of surface phytoplankton could be controlled by planting seaweed (a maximum reduction of 30%), and that the bottom dissolved oxygen concentration improved by up to 35% owing to the ingestion of organic wastes by sea cucumbers. Numerical simulations indicate that seaweeds need to be harvested in a timely manner for maximal absorption of nutrients, and that an initial stocking density of sea cucumbers above 3.9 individuals m^-2 is preferred to further eliminate the organic wastes sinking to the sea bottom. PMID:27368928

  18. Numerical integral methods to study plasmonic modes in a photonic crystal waveguide with circular inclusions that involve a metamaterial

    NASA Astrophysics Data System (ADS)

    Mendoza-Suárez, A.; Pérez-Aguilar, H.

    2016-09-01

    We present several numerical integral methods for the study of a photonic crystal waveguide formed by two parallel conducting plates and an array of circular inclusions involving a conducting material and a metamaterial. Band structures and reflectances were calculated for infinite and finite photonic crystal waveguides, respectively. The numerical results obtained show that the methods applied provide good accuracy and efficiency. An interesting detail that emerged from this study was the appearance of a propagating mode in a band gap due to defects in the middle of the photonic crystal waveguide. This is analogous to doping a semiconductor to introduce allowed energy states within a band gap. Our main interest in this work is to model photonic crystal waveguides that involve left-handed materials (LHMs). For the specific LHM considered, a surface plasmon mode on the vacuum-LHM interface was found.

  19. A numerical solution for two-dimensional Fredholm integral equations of the second kind with kernels of the logarithmic potential form

    NASA Technical Reports Server (NTRS)

    Gabrielsen, R. E.; Uenal, A.

    1981-01-01

    Two-dimensional Fredholm integral equations with logarithmic potential kernels are numerically solved. The convergence of these numerical solutions to the true solutions is demonstrated explicitly. The results are based on a previous work in which numerical solutions were obtained for Fredholm integral equations of the second kind with continuous kernels.
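
    For the simpler one-dimensional case, a Fredholm equation of the second kind can be solved numerically with the Nyström method: replace the integral by a quadrature rule and solve the resulting linear system at the quadrature nodes. The kernel and right-hand side below are chosen so the exact solution is known in closed form; this is an illustrative sketch, not the authors' logarithmic-kernel scheme.

```python
import numpy as np

# Nyström (quadrature) discretization of a Fredholm equation of the second kind:
#   u(x) = f(x) + ∫_0^1 K(x,t) u(t) dt,  with K(x,t) = x t and f(x) = x.
# The kernel is separable, so the exact solution is u(x) = 3x/2.
n = 201
x = np.linspace(0.0, 1.0, n)
w = np.full(n, x[1] - x[0])          # trapezoidal quadrature weights
w[0] *= 0.5
w[-1] *= 0.5

K = np.outer(x, x)                   # K(x_i, t_j) = x_i * t_j
f = x.copy()

u = np.linalg.solve(np.eye(n) - K * w, f)    # (I - K W) u = f at the nodes
print(np.max(np.abs(u - 1.5 * x)))           # small discretization error
```

    The error here is governed by the trapezoidal rule; higher-order quadrature (or singularity-adapted rules, as needed for logarithmic kernels) improves it directly.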

  20. On time discretizations for spectral methods. [numerical integration of Fourier and Chebyshev methods for dynamic partial differential equations

    NASA Technical Reports Server (NTRS)

    Gottlieb, D.; Turkel, E.

    1980-01-01

    New methods are introduced for the time integration of the Fourier and Chebyshev methods of solution for dynamic differential equations. These methods are unconditionally stable, even though no matrix inversions are required. Time steps are chosen by accuracy requirements alone. For the Fourier method both leapfrog and Runge-Kutta methods are considered. For the Chebyshev method only Runge-Kutta schemes are tested. Numerical calculations are presented to verify the analytic results. Applications to the shallow water equations are presented.
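
    A minimal example of the Fourier-in-space, Runge-Kutta-in-time combination (here classical RK4 rather than the specific schemes analyzed in the paper) for the periodic advection equation u_t + u_x = 0:

```python
import numpy as np

# Fourier spectral method in space with classical RK4 in time for the
# periodic advection equation u_t + u_x = 0 on [0, 2*pi).
# Grid size and time step are illustrative choices.
N = 64
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N) * 1j          # ik multipliers for d/dx

def rhs(u):
    return -np.real(np.fft.ifft(k * np.fft.fft(u)))   # -u_x via the FFT

u = np.sin(x)
dt, steps = 0.01, 100                          # integrate to t = 1
for _ in range(steps):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print(np.max(np.abs(u - np.sin(x - 1.0))))     # error dominated by the time step
```

    Spatial differentiation is exact for resolved modes, so the residual error comes from the O(dt^4) time discretization, illustrating why time steps can be chosen by accuracy requirements alone once the scheme is stable.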

  1. An integrated approach for non-periodic dynamic response prediction of complex structures: Numerical and experimental analysis

    NASA Astrophysics Data System (ADS)

    Rahneshin, Vahid; Chierichetti, Maria

    2016-09-01

    In this paper, a combined numerical and experimental method, called the Extended Load Confluence Algorithm, is presented to accurately predict the dynamic response of non-periodic structures when little or no information about the applied loads is available. This approach, which falls into the category of Shape Sensing methods, feeds limited experimental information acquired from sensors into a mapping algorithm that predicts the response at unmeasured locations. The proposed algorithm consists of three major cores: an experimental core for data acquisition, a numerical core based on the Finite Element Method for modeling the structure, and a mapping algorithm that improves the numerical model based on a modal approach in the frequency domain. The robustness and precision of the proposed algorithm are verified through numerical and experimental examples. The results of this paper demonstrate that, without precise knowledge of the loads acting on the structure, the dynamic behavior of the system can be predicted in an effective and precise manner after just a few iterations.

  2. Multiple piezo-patch energy harvesters integrated to a thin plate with AC-DC conversion: analytical modeling and numerical validation

    NASA Astrophysics Data System (ADS)

    Aghakhani, Amirreza; Basdogan, Ipek; Erturk, Alper

    2016-04-01

    Plate-like components are widely used in numerous automotive, marine, and aerospace applications, where they can be employed as host structures for vibration-based energy harvesting. Piezoelectric patch harvesters can easily be attached to these structures to convert vibrational energy into electrical energy. Power output investigations of these harvesters require accurate models for energy harvesting performance evaluation and optimization. Equivalent circuit modeling of cantilever-based vibration energy harvesters for estimating the electrical response has been proposed in recent years. However, equivalent circuit formulation and analytical modeling of multiple piezo-patch energy harvesters integrated to thin plates, including nonlinear circuits, has not been studied. In this study, an equivalent circuit model of multiple parallel piezoelectric patch harvesters together with a resistive load is built in the electronic circuit simulation software SPICE, and voltage frequency response functions (FRFs) are validated using the analytical distributed-parameter model. An analytical formulation of the piezoelectric patches in parallel configuration for the DC voltage output is derived while the patches are connected to a standard AC-DC circuit. The analytical model is based on the equivalent load impedance approach for the piezoelectric capacitance and AC-DC circuit elements. The analytical results are validated numerically via SPICE simulations. Finally, DC power outputs of the harvesters are computed and compared with the peak power amplitudes in the AC output case.

  3. Integration of numerical modeling and observations for the Gulf of Naples monitoring network

    NASA Astrophysics Data System (ADS)

    Iermano, I.; Uttieri, M.; Zambianchi, E.; Buonocore, B.; Cianelli, D.; Falco, P.; Zambardino, G.

    2012-04-01

    Lethal effects of mineral oils on fragile marine and coastal ecosystems are now well known. Risks and damages caused by a maritime accident can be reduced with the help of better forecasts and efficient monitoring systems. The MED project TOSCA (Tracking Oil Spills and Coastal Awareness Network), which gathers 13 partners from 4 Mediterranean countries, has been designed to help create a better response system to maritime accidents. Through the construction of an observational network based on state-of-the-art technology (HF radars and drifters), TOSCA provides real-time observations and forecasts of Mediterranean coastal marine environmental conditions. The system is installed and assessed at five test sites, in coastal areas near oil spill outlets (Eastern Mediterranean) and in high-traffic areas (Western Mediterranean). The Gulf of Naples, a small semi-enclosed basin opening onto the Tyrrhenian Sea, is one of the five test sites. It is of particular interest both from the environmental point of view, due to the peculiar ecosystem properties of the area, and because it sustains important touristic and commercial activities. Currently, the Gulf of Naples monitoring network comprises five automatic weather stations distributed along the coasts of the Gulf, one weather radar, two tide gauges, one waverider buoy, and moored physical, chemical and bio-optical instrumentation. In addition, a CODAR-SeaSonde HF coastal radar system composed of three antennas is located at Portici, Massa Lubrense and Castellammare. The system provides hourly data of surface currents over the entire Gulf with a 1 km spatial resolution. A numerical modeling implementation based on the Regional Ocean Modeling System (ROMS) is currently being integrated into the Gulf of Naples monitoring network. ROMS is a 3-D, free-surface, hydrostatic, primitive-equation, finite-difference ocean model. In our configuration, the model has high horizontal resolution (250 m) and 30 sigma levels in the vertical.
Thanks

  4. Numerical simulation of installation process and uplift resistance for an integrated suction foundation in deep ocean

    NASA Astrophysics Data System (ADS)

    Li, Ying; Yang, Shu-geng; Yu, Shu-ming

    2016-03-01

    A concept design, named the integrated suction foundation, is proposed for a tension leg platform (TLP) in the deep ocean. The most important improvement compared with the traditional design is a pressure-resistant storage module, which utilizes the high hydrostatic pressure in the deep ocean to drive water into the module and generate negative pressure for bucket suction. This work aims to further establish the feasibility of the concept design with respect to penetration installation and the in-place uplift force. Seepage is generated during suction penetration and can have both positive and negative effects on the penetration process. To study the effect of seepage on the penetration process of the integrated suction foundation, finite element analysis (FEA) is carried out in this work. In particular, an improved methodology to calculate the penetration resistance is proposed for the integrated suction foundation with respect to the reduction factor of penetration resistance. The maximum allowable negative pressure during suction penetration is calculated with the critical hydraulic gradient method through FEA. The simulation results of the penetration process show that the integrated suction foundation can be installed safely. Moreover, the uplift resistance of the integrated suction foundation is calculated, and the feasibility of the integrated suction foundation working on-site is verified. Overall, the analysis in this work further confirms the feasibility of the integrated suction foundation for TLPs in deep ocean applications.

  5. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    NASA Astrophysics Data System (ADS)

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; Beerli, Peter; Zeng, Xiankui; Lu, Dan; Tao, Yuezan

    2016-02-01

    Evaluating the marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, thermodynamic integration, which has not previously been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through path sampling, which conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated on three analytical functions by comparing it with two variants of the Laplace approximation method and three MC methods, including the nested sampling method recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. The thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
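
    The path-sampling identity log Z = ∫_0^1 E_beta[log L] dbeta, where the expectation is taken under the power posterior p_beta ∝ p(theta) L(theta)^beta, can be checked on a one-dimensional toy model. The sketch below replaces MCMC with quadrature over the single parameter so that it is self-contained; the model and grid sizes are illustrative, not from the study.

```python
import numpy as np

# Thermodynamic integration for the marginal likelihood of a 1-D toy model:
# theta ~ N(0, 1), y | theta ~ N(theta, 1), observed y = 0.5.
def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

theta = np.linspace(-10.0, 10.0, 4001)
prior = np.exp(-0.5 * theta**2) / np.sqrt(2 * np.pi)
y_obs = 0.5
loglike = -0.5 * np.log(2 * np.pi) - 0.5 * (y_obs - theta) ** 2

# Path sampling: E_beta[log L] under the power posterior p_beta ∝ prior * L^beta
betas = np.linspace(0.0, 1.0, 51)
expect = []
for b in betas:
    w = prior * np.exp(b * loglike)
    w /= trapz(w, theta)                       # normalized power posterior
    expect.append(trapz(w * loglike, theta))

log_z_ti = trapz(np.array(expect), betas)      # integrate E_beta[log L] over beta
log_z_direct = np.log(trapz(prior * np.exp(loglike), theta))
print(log_z_ti, log_z_direct)                  # both close to log N(y; 0, 2)
```

    In realistic models the expectations at each beta come from MCMC samples rather than quadrature, but the trapezoidal integration over the power coefficient is the same.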

  6. cuSwift --- a suite of numerical integration methods for modelling planetary systems implemented in C/CUDA

    NASA Astrophysics Data System (ADS)

    Hellmich, S.; Mottola, S.; Hahn, G.; Kührt, E.; Hlawitschka, M.

    2014-07-01

    Simulations of dynamical processes in planetary systems represent an important tool for studying the orbital evolution of the systems [1--3]. Using modern numerical integration methods, it is possible to model systems containing many thousands of objects over timescales of several hundred million years. However, in general, supercomputers are needed to get reasonable simulation results in acceptable execution times [3]. To exploit the ever-growing computation power of Graphics Processing Units (GPUs) in modern desktop computers, we implemented cuSwift, a library of numerical integration methods for studying long-term dynamical processes in planetary systems. cuSwift can be seen as a re-implementation of the famous SWIFT integrator package written by Hal Levison and Martin Duncan. cuSwift is written in C/CUDA and contains different integration methods for various purposes. So far, we have implemented three algorithms: a 15th-order Radau integrator [4], the Wisdom-Holman Mapping (WHM) integrator [5], and the Regularized Mixed Variable Symplectic (RMVS) Method [6]. These algorithms treat only the planets as mutually gravitationally interacting bodies whereas asteroids and comets (or other minor bodies of interest) are treated as massless test particles which are gravitationally influenced by the massive bodies but do not affect each other or the massive bodies. The main focus of this work is on the symplectic methods (WHM and RMVS) which use a larger time step and thus are capable of integrating many particles over a large time span. As an additional feature, we implemented the non-gravitational Yarkovsky effect as described by M. Brož [7]. With cuSwift, we show that the use of modern GPUs makes it possible to speed up these methods by more than one order of magnitude compared to the single-core CPU implementation, thereby enabling modest workstation computers to perform long-term dynamical simulations. We use these methods to study the influence of the Yarkovsky
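
    The symplectic core of Wisdom-Holman-type integrators is the kick-drift-kick leapfrog step, whose energy error stays bounded over long integrations instead of drifting. A minimal single-particle sketch in plain Python/NumPy (not the C/CUDA implementation of cuSwift; units with GM = 1 are an illustrative choice):

```python
import numpy as np

# Kick-drift-kick leapfrog for a circular Kepler orbit with GM = 1.
def accel(r):
    return -r / np.linalg.norm(r) ** 3

def energy(r, v):
    return 0.5 * v @ v - 1.0 / np.linalg.norm(r)

r = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])                     # circular orbit, period 2*pi
dt = 0.01
e0 = energy(r, v)

for _ in range(int(10 * 2 * np.pi / dt)):    # ten orbits
    v = v + 0.5 * dt * accel(r)              # kick
    r = r + dt * v                           # drift
    v = v + 0.5 * dt * accel(r)              # kick

print(abs((energy(r, v) - e0) / e0))         # bounded energy error, ~dt^2
```

    This bounded-error property is what lets symplectic methods such as WHM and RMVS take comparatively large time steps over hundreds of millions of years.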

  7. Mixing-to-eruption timescales: an integrated model combining numerical simulations and high-temperature experiments with natural melts

    NASA Astrophysics Data System (ADS)

    Montagna, Chiara; Perugini, Diego; De Campos, Christina; Longo, Antonella; Dingwell, Donald Bruce; Papale, Paolo

    2015-04-01

    Arrival of magma from depth into shallow reservoirs and the associated mixing processes have been documented as possible triggers of explosive eruptions. Quantifying the time from the beginning of mixing to eruption is of fundamental importance in volcanology in order to place constraints on the possible onset of a new eruption. Here we integrate numerical simulations and high-temperature experiments performed with natural melts with the aim of identifying mixing-to-eruption timescales. We performed two-dimensional numerical simulations of the arrival of gas-rich magmas into shallow reservoirs, solving the fluid dynamics of the two interacting magmas and evaluating the space-time evolution of the physical properties of the mixture. Convection and mingling develop quickly in the chamber and the feeding conduit/dyke. Over time scales of hours, the magmas in the reservoir appear to have mingled throughout, and convective patterns become harder to identify. High-temperature magma mixing experiments have been performed using a centrifuge, with basaltic and phonolitic melts from Campi Flegrei (Italy) as initial end-members. Concentration Variance Decay (CVD), an inevitable consequence of magma mixing, is exponential with time. The rate of CVD is a powerful new geochronometer for the time from mixing to eruption/quenching. The mingling-to-eruption times of three explosive volcanic eruptions from Campi Flegrei (Italy) yield durations on the order of tens of minutes. These results are in excellent agreement with the numerical simulations, which suggest a maximum mixing time of a few hours to obtain a hybrid mixture. We show that the integration of numerical simulations and high-temperature experiments can provide unprecedented results about mixing processes in volcanic systems. The combined application of numerical simulations and the CVD geochronometer to the eruptive products of active volcanoes could be decisive for hazard mitigation planning during volcanic unrest.

  8. Promoting the Development of an Integrated Numerical Representation through the Coordination of Physical Materials

    ERIC Educational Resources Information Center

    Vitale, Jonathan

    2012-01-01

    How do children use physical and virtual tools to develop new numerical knowledge? While concrete instructional materials may support the delivery of novel information to learners, they may also over-simplify the task, unintentionally reducing learners' performance in recall and transfer tasks. This reduction in testing performance may be…

  9. ICM: an Integrated Compartment Method for numerically solving partial differential equations

    SciTech Connect

    Yeh, G.T.

    1981-05-01

    An integrated compartment method (ICM) is proposed to construct a set of algebraic equations from a system of partial differential equations. The ICM combines the utility of the integral formulation of the finite element approach, the simplicity of interpolation of the finite difference approximation, and the flexibility of compartment analyses. The integral formulation eases the treatment of boundary conditions, in particular Neumann-type boundary conditions. The simplicity of interpolation provides great economy in computation. The flexibility of discretization with irregular compartments of various shapes and sizes offers advantages in resolving complex boundaries enclosing compound regions of interest. The basic procedure of ICM is first to discretize the region of interest into compartments, then to apply the three integral theorems of vectors to transform the volume integrals into surface integrals, and finally to use interpolation to relate the interfacial values to the compartment values to close the system. The Navier-Stokes equations are used as an example of how to derive the corresponding ICM algorithm for a given set of partial differential equations. Because of the structure of the algorithm, the basic computer program remains the same for one-, two-, or three-dimensional problems.
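
    The compartment procedure (integrate the PDE over each compartment, convert the volume integral to surface fluxes, then interpolate interfacial values from compartment values) can be illustrated on the one-dimensional Poisson problem -u'' = q with homogeneous Dirichlet boundaries. This is a toy case, not the Navier-Stokes derivation of the report.

```python
import numpy as np

# Compartment (finite-volume) discretization of -u'' = q on (0, 1) with
# u(0) = u(1) = 0: integrating over each compartment turns the volume
# integral into a difference of surface fluxes at the compartment faces.
n = 50
h = 1.0 / n
q = 1.0
x = (np.arange(n) + 0.5) * h        # compartment centers
A = np.zeros((n, n))
b = np.full(n, q * h)               # source integrated over each compartment

for i in range(n):
    A[i, i] += 2.0 / h              # fluxes through the two faces
    if i > 0:
        A[i, i - 1] -= 1.0 / h
    if i < n - 1:
        A[i, i + 1] -= 1.0 / h
# A Dirichlet wall sits half a compartment from the first/last center,
# so the wall flux adds an extra 1/h on the diagonal there.
A[0, 0] += 1.0 / h
A[-1, -1] += 1.0 / h

u = np.linalg.solve(A, b)
print(np.max(np.abs(u - x * (1 - x) / 2)))   # close to the exact parabola
```

    The same assembly-by-fluxes pattern carries over to irregular compartments in two or three dimensions, which is where the flexibility of the method lies.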

  10. Numerical simulation of small perturbation transonic flows

    NASA Technical Reports Server (NTRS)

    Seebass, A. R.; Yu, N. J.

    1976-01-01

    The results of a systematic study of small perturbation transonic flows are presented. Both the flow over thin airfoils and the flow over wedges were investigated. Various numerical schemes were employed in the study. The prime goal of the research was to determine the efficiency of various numerical procedures by accurately evaluating the wave drag, both by computing the pressure integral around the body and by integrating the momentum loss across the shock. Numerical errors involved in the computations that affect the accuracy of drag evaluations were analyzed. The factors that affect numerical stability and the rate of convergence of the iterative schemes were also systematically studied.

  11. Physical and mathematical justification of the numerical Brillouin zone integration of the Boltzmann rate equation by Gaussian smearing

    NASA Astrophysics Data System (ADS)

    Illg, Christian; Haag, Michael; Teeny, Nicolas; Wirth, Jens; Fähnle, Manfred

    2016-03-01

    Scatterings of electrons at quasiparticles or photons are very important for many topics in solid-state physics, e.g., spintronics, magnonics or photonics, and therefore a correct numerical treatment of these scatterings is very important. For a quantum-mechanical description of these scatterings, Fermi's golden rule is used to calculate the transition rate from an initial state to a final state in first-order time-dependent perturbation theory. One can calculate the total transition rate from all initial states to all final states with Boltzmann rate equations involving Brillouin zone integrations. The numerical treatment of these integrations on a finite grid is often done by replacing the Dirac delta distribution with a Gaussian. The Dirac delta distribution appears in Fermi's golden rule, where it describes the energy conservation among the interacting particles. Since the Dirac delta distribution is not a function, it is not clear from a mathematical point of view that this procedure is justified. We show with physical and mathematical arguments that this numerical procedure is in general correct, and we comment on critical points.
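
    The smearing procedure can be demonstrated on a one-dimensional tight-binding band: each delta function in the density of states is replaced by a normalized Gaussian, and since every Gaussian integrates to one, the smeared density of states still integrates to the number of states. A minimal sketch (the band, grid, and smearing width are illustrative choices):

```python
import numpy as np

# Gaussian smearing of the Dirac delta in a Brillouin-zone sum, illustrated
# on the density of states of a 1-D tight-binding band E(k) = -cos k.
def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

nk = 2000
k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
Ek = -np.cos(k)

sigma = 0.02                                   # smearing width
E = np.linspace(-1.5, 1.5, 601)
gauss = lambda x: np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# DOS(E) = (1/N_k) * sum_k delta(E - E_k), with delta replaced by the Gaussian
dos = np.array([np.mean(gauss(e - Ek)) for e in E])

# Every normalized Gaussian integrates to one, so the smeared DOS still
# integrates to one state per k-point:
total = trapz(dos, E)
print(total)   # close to 1
```

    The width sigma trades resolution against k-grid noise, which is exactly the convergence question the paper's arguments address.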

  12. Photocapacitance study at p-i-n photodiode by numerical C-V integration

    NASA Astrophysics Data System (ADS)

    Kavasoglu, A. Sertap; Kavasoglu, Nese; Oktik, Sener

    2009-02-01

    This paper describes a different numerical approach to estimate the impurity profile in a typical p-i-n device by using measured capacitance-voltage (C-V) characteristics. The constructed numerical model has been found to provide an impurity profile which is almost consistent with those reported in the literature. Until now, no study of the anomalous capacitance response of the silicon p-i-n device induced by the space charge effects due to photo-generated carriers has been reported. In this study, we unveiled this anomalous behaviour through illuminated C-V characteristics. The illuminated C-V result of BPW34 exhibits capacitance oscillations. This behaviour could be augmented by the density of states discontinuities in intrinsic silicon [Biswajit Das. Observation of capacitance-voltage oscillations in porous silicon. Physica E: Low-dimens Syst Nanostruct 2004;23(1-2):141-46].

  13. A stochastic regulator for integrated communication and control systems. I - Formulation of control law. II - Numerical analysis and simulation

    NASA Technical Reports Server (NTRS)

    Liou, Luen-Woei; Ray, Asok

    1991-01-01

    A state feedback control law for integrated communication and control systems (ICCS) is formulated by using the dynamic programming and optimality principle on a finite-time horizon. The control law is derived on the basis of a stochastic model of the plant which is augmented in state space to allow for the effects of randomly varying delays in the feedback loop. A numerical procedure for synthesizing the control parameters is then presented, and the performance of the control law is evaluated by simulating the flight dynamics model of an advanced aircraft. Finally, recommendations for future work are made.

  14. Integrating Laboratory and Numerical Decompression Experiments to Investigate Fluid Dynamics into the Conduit

    NASA Astrophysics Data System (ADS)

    Spina, Laura; Colucci, Simone; De'Michieli Vitturi, Mattia; Scheu, Bettina; Dingwell, Donald Bruce

    2015-04-01

    The study of the fluid dynamics of magmatic melts within the conduit, where direct observations are unattainable, has been shown to benefit strongly from multiparametric approaches. Among them, the coupling of numerical modeling with laboratory experiments represents a fundamental tool of investigation. Indeed, the experimental approach provides invaluable data to validate complex multiphase codes. We performed decompression experiments in a shock tube system, using pure silicone oil as a proxy for the basaltic melt. Viscosities between 1 and 1000 Pa s were investigated. The samples were saturated with argon for 72 h at 10 MPa before being slowly decompressed to atmospheric pressure. The evolution of the analogue magmatic system was monitored with a high-speed camera and pressure sensors located in the analogue conduit. The experimental decompressions were then reproduced numerically using a multiphase solver based on the OpenFOAM framework. The original compressible multiphase OpenFOAM solver twoPhaseEulerFoam was extended to take into account the multicomponent nature of the fluid mixtures (liquid and gas) and the phase transition. Matching the experimental conditions, the simulations were run with fluid viscosities ranging from 1 to 1000 Pa s. The sensitivity of the model was tested for different values of the parameters t and D, representing, respectively, the relaxation time for gas exsolution and the average bubble diameter required by the Gidaspow drag model. Plausible ranges of values for both parameters are provided by experimental observations, i.e., bubble nucleation time and bubble size distribution at a given pressure. The comparison of video images with the outcomes of the numerical models was performed by tracking the evolution of the gas volume fraction through time. We were thus able to calibrate the model parameters against laboratory results and to track the fluid dynamics of the experimental decompressions.

  15. Numerical integration of nearly-Hamiltonian systems. [Van der Pol oscillator and perturbed Keplerian motion

    NASA Technical Reports Server (NTRS)

    Bond, V. R.

    1978-01-01

    The reported investigation is concerned with the solution of systems of differential equations which are derived from a Hamiltonian function in the extended phase space. The problem selected involves a one-dimensional perturbed harmonic oscillator. The van der Pol equation considered has an exact asymptotic value for its amplitude. Comparisons are made between a numerical solution and a known analytical solution. In addition to the van der Pol problem, known solutions regarding the restricted problem of three bodies are used as examples for perturbed Keplerian motion. The extended phase space Hamiltonian discussed by Stiefel and Scheifele (1971) is considered. A description is presented of two canonical formulations of the perturbed harmonic oscillator.
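
The comparison the abstract describes, a numerical solution checked against the known asymptotic amplitude of the van der Pol oscillator (which tends to 2), can be sketched with a classical fourth-order Runge-Kutta integrator; the step size, damping parameter, and integration span below are illustrative choices, not the paper's.

```python
import numpy as np

def vdp(state, mu):
    # van der Pol oscillator: x'' - mu*(1 - x^2)*x' + x = 0
    x, v = state
    return np.array([v, mu * (1.0 - x**2) * v - x])

def rk4_step(f, y, dt, mu):
    # One classical fourth-order Runge-Kutta step.
    k1 = f(y, mu)
    k2 = f(y + 0.5 * dt * k1, mu)
    k3 = f(y + 0.5 * dt * k2, mu)
    k4 = f(y + dt * k3, mu)
    return y + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

mu, dt = 0.1, 0.01
y = np.array([0.5, 0.0])        # start well inside the limit cycle
xs = []
for i in range(40000):          # integrate to t = 400 (many relaxation times)
    y = rk4_step(vdp, y, dt, mu)
    if i >= 39000:              # keep the last 10 time units (> one period)
        xs.append(y[0])
amplitude = max(abs(min(xs)), max(xs))
```

For small mu the limit-cycle amplitude is 2 up to O(mu^2) corrections, so the computed amplitude provides a direct accuracy check on the integrator.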

  16. On the accuracy and convergence of implicit numerical integration of finite element generated ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Soliman, M. O.

    1978-01-01

    A study of the accuracy and convergence of the linear functional finite element solution to linear parabolic and hyperbolic partial differential equations is presented. A variable-implicit integration procedure is employed for the resultant system of ordinary differential equations. Accuracy and convergence are compared for the consistent and two lumped assembly procedures for the identified initial-value matrix structure. Truncation error estimation is accomplished using Richardson extrapolation.
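
Richardson extrapolation estimates truncation error from two computations at different step sizes. As a minimal sketch (using the composite trapezoidal rule on a model integral, not the paper's finite element ODE system), the code below both sharpens the answer and produces a computable error estimate without knowing the exact solution.

```python
import numpy as np

def trapezoid(f, a, b, n):
    # Composite trapezoidal rule with n uniform panels.
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return float((b - a) / n * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1]))

exact = np.e - 1.0                       # integral of exp(x) on [0, 1]
T_h  = trapezoid(np.exp, 0.0, 1.0, 8)    # step h
T_h2 = trapezoid(np.exp, 0.0, 1.0, 16)   # step h/2

# Richardson extrapolation: the error is C*h^2 + O(h^4), so combining
# the two results eliminates the leading term ...
richardson = (4.0 * T_h2 - T_h) / 3.0
# ... and the same two results give a computable truncation-error
# estimate for T_h2 without knowing the exact answer:
err_estimate = (T_h2 - T_h) / 3.0
```

The same idea carries over to time integration: halve the step, combine, and read off the leading truncation error.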

  17. What is the diffraction limit? From Airy to Abbe using direct numerical integration

    NASA Astrophysics Data System (ADS)

    Calm, Y. M.; Merlo, J. M.; Burns, M. J.; Kempa, K.; Naughton, M. J.

    The resolution of a conventional optical microscope is sometimes taken from Airy's point spread function (PSF), 0.61λ/NA, and sometimes from Abbe, λ/2NA, where NA is the numerical aperture; however, modern fluorescence and near-field optical microscopies achieve spatial resolution far better than either of these limits. There is a new category of 2D metamaterials called planar optical elements (POEs), which have a microscopic thickness (<λ), macroscopic transverse dimensions (>100λ), and are composed of an array of nanostructured light scatterers. POEs are found in a range of micro- and nano-photonic technologies and will influence the future of optical nanoscopy. In this context, we shed some light on the 'diffraction limit' by numerically evaluating Kirchhoff's scalar formulae (in their exact form) and identifying the features of highly non-paraxial, 3D PSFs. We show that the Airy and Abbe criteria are connected, and we comment on the design rules for a particular type of POE: the flat lens. This work is supported by the W. M. Keck Foundation.

  18. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems

    PubMed Central

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-01-01

    Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted “useful” data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247
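
The abstract's core idea, exploiting matrix sparseness/symmetry so that block operations replace full dense products, can be sketched on a hypothetical 6-state constant-velocity model (not the paper's actual SINS/GPS state vector). The covariance propagation P' = F P F^T is formed block-wise, skipping every multiplication by the structural zeros of F.

```python
import numpy as np

# Hypothetical 6-state model (position/velocity pairs for three axes): the
# transition matrix is block-diagonal, so P' = F P F^T can be assembled from
# 2x2 blocks instead of full 6x6 products.
dt = 0.1
Fb = np.array([[1.0, dt],
               [0.0, 1.0]])                 # one 2x2 constant-velocity block
F = np.kron(np.eye(3), Fb)                  # dense 6x6 transition matrix

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
P = A @ A.T                                 # a symmetric covariance matrix

# Reference: the dense computation a textbook Kalman filter would perform.
P_dense = F @ P @ F.T

# Structure-exploiting version: each 2x2 sub-block is updated independently,
# avoiding all operations on the off-block zeros of F.
P_block = np.zeros_like(P)
for i in range(3):
    for j in range(3):
        blk = P[2*i:2*i+2, 2*j:2*j+2]
        P_block[2*i:2*i+2, 2*j:2*j+2] = Fb @ blk @ Fb.T
```

The block update costs 9 products of 2x2 matrices rather than two 6x6 products, which is the flavor of operation-count saving the offline block-matrix derivation delivers at full SINS/GPS scale.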

  19. Gocad2OGS: Workflow to Integrate Geo-structural Information into Numerical Simulation Models

    NASA Astrophysics Data System (ADS)

    Fischer, Thomas; Walther, Marc; Naumov, Dmitri; Sattler, Sabine; Kolditz, Olaf

    2015-04-01

    The investigation of fluid circulation in the Thuringian syncline is one of the INFLUINS project's targets. A 3D geo-structural model including 12 stratigraphic layers and 54 fault zones is created by geologists in the first step using the Gocad software. Within the INFLUINS project a ground-water flow simulation is used to check existing hypotheses and to gain new insight into subsurface fluid flow behaviour. We used the scientific, platform-independent, open-source software OpenGeoSys, which implements the finite element method to solve the governing equations describing fluid flow in porous media. The geo-structural Gocad model is not suitable for FEM-based numerical analysis. Therefore it is converted into an unstructured grid satisfying all mesh quality criteria required for the ground-water flow simulation. The resulting grid is stored in an open data format given by the Visualization Toolkit (vtk). In this work we present a workflow to convert geological structural models, created using the Gocad software, into a simulation model that can be used directly by numerical simulation software. We tested our workflow with the 3D geo-structural model of the Thuringian syncline and were able to set up and evaluate a hydrogeological simulation model successfully.

  20. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    PubMed

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-01-01

    Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247

  1. On the formulation, parameter identification and numerical integration of the EMMI model: plasticity and isotropic damage.

    SciTech Connect

    Bammann, Douglas J.; Johnson, G. C. (University of California, Berkeley, CA); Marin, Esteban B.; Regueiro, Richard A.

    2006-01-01

    In this report we present the formulation of the physically-based Evolving Microstructural Model of Inelasticity (EMMI). The specific version of the model treated here describes the plasticity and isotropic damage of metals and is currently being applied to model the ductile failure process in structural components of the W80 program. The formulation of the EMMI constitutive equations is framed in the context of the large deformation kinematics of solids and the thermodynamics of internal state variables. This formulation focuses first on developing the plasticity equations in both the relaxed (unloaded) and current configurations. The equations in the current configuration, expressed in non-dimensional form, are used to devise the identification procedure for the plasticity parameters. The model is then extended to include a porosity-based isotropic damage state variable to describe the progressive deterioration of the strength and mechanical properties of metals induced by deformation. The numerical treatment of these coupled plasticity-damage constitutive equations is explained in detail. A number of examples are solved to validate the numerical implementation of the model.

  2. Model coupling methodology for thermo-hydro-mechanical-chemical numerical simulations in integrated assessment of long-term site behaviour

    NASA Astrophysics Data System (ADS)

    Kempka, Thomas; De Lucia, Marco; Kühn, Michael

    2015-04-01

    The integrated assessment of long-term site behaviour taking into account a high spatial resolution at reservoir scale requires a sophisticated methodology to represent coupled thermal, hydraulic, mechanical and chemical processes of relevance. Our coupling methodology considers the time-dependent occurrence and significance of multi-phase flow processes, mechanical effects and geochemical reactions (Kempka et al., 2014). To this end, a simplified hydro-chemical coupling procedure was developed (Klein et al., 2013) and validated against fully coupled hydro-chemical simulations (De Lucia et al., 2015). The numerical simulation results elaborated for the pilot site Ketzin demonstrate that mechanical reservoir, caprock and fault integrity are maintained during the time of operation and that after 10,000 years CO2 dissolution is the dominating trapping mechanism and mineralization occurs on the order of 10 % to 25 % with negligible changes to porosity and permeability. De Lucia, M., Kempka, T., Kühn, M. A coupling alternative to reactive transport simulations for long-term prediction of chemical reactions in heterogeneous CO2 storage systems (2014) Geosci Model Dev Discuss 7:6217-6261. doi:10.5194/gmdd-7-6217-2014. Kempka, T., De Lucia, M., Kühn, M. Geomechanical integrity verification and mineral trapping quantification for the Ketzin CO2 storage pilot site by coupled numerical simulations (2014) Energy Procedia 63:3330-3338, doi:10.1016/j.egypro.2014.11.361. Klein E, De Lucia M, Kempka T, Kühn M. Evaluation of long-term mineral trapping at the Ketzin pilot site for CO2 storage: an integrative approach using geo-chemical modelling and reservoir simulation. Int J Greenh Gas Con 2013; 19:720-730. doi:10.1016/j.ijggc.2013.05.014.

  3. golem95: A numerical program to calculate one-loop tensor integrals with up to six external legs

    NASA Astrophysics Data System (ADS)

    Binoth, T.; Guillet, J.-Ph.; Heinrich, G.; Pilon, E.; Reiter, T.

    2009-11-01

    We present a program for the numerical evaluation of form factors entering the calculation of one-loop amplitudes with up to six external legs. The program is written in Fortran95 and performs the reduction to a certain set of basis integrals numerically, using a formalism where inverse Gram determinants can be avoided. It can be used to calculate one-loop amplitudes with massless internal particles in a fast and numerically stable way. Catalogue identifier: AEEO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEO_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 50 105 No. of bytes in distributed program, including test data, etc.: 241 657 Distribution format: tar.gz Programming language: Fortran95 Computer: Any computer with a Fortran95 compiler Operating system: Linux, Unix RAM: RAM used per form factor is insignificant, even for a rank six six-point form factor Classification: 4.4, 11.1 External routines: Perl programming language (http://www.perl.com/) Nature of problem: Evaluation of one-loop multi-leg tensor integrals occurring in the calculation of next-to-leading order corrections to scattering amplitudes in elementary particle physics. Solution method: Tensor integrals are represented in terms of form factors and a set of basic building blocks ("basis integrals"). The reduction to the basis integrals is

  4. Integrated Numerical Simulation of Thermo-Hydro-Chemical Phenomena Associated with Geologic Disposal of High-Level Radioactive Waste

    NASA Astrophysics Data System (ADS)

    Park, Sang-Uk; Kim, Jun-Mo; Kihm, Jung-Hwi

    2014-05-01

    A series of numerical simulations was performed using a multiphase thermo-hydro-chemical numerical model to provide integrated predictions and quantitative evaluations of thermo-hydro-chemical phenomena driven by heat generation associated with geologic disposal of high-level radioactive waste. The average mineralogical composition of fifteen unweathered igneous rock bodies, which were classified as granite, in the Republic of Korea was adopted as the initial (primary) mineralogical composition of the host rock of the repository of high-level radioactive waste in the numerical simulations. The numerical simulation results show that temperature rises and thus convective groundwater flow occurs near the repository due to heat generation associated with geologic disposal of high-level radioactive waste. Under these circumstances, a series of water-rock interactions take place. As a result, among the primary minerals, quartz, plagioclase (albite), biotite (annite), and muscovite are dissolved. However, orthoclase is initially precipitated and is then dissolved, whereas microcline is initially dissolved and is then precipitated. On the other hand, the secondary minerals such as kaolinite, Na-smectite, chlorite, and hematite are precipitated and are then partly dissolved. In addition, such dissolution and precipitation of the primary and secondary minerals change groundwater chemistry (quality) and induce reactive chemical transport. As a result, in groundwater, Na+, Fe2+, and HCO3- concentrations initially decrease, whereas K+, AlO2-, and aqueous SiO2 concentrations initially increase. On the other hand, H+ concentration initially increases and thus pH initially decreases due to dissociation of groundwater in order to provide OH-, which is essential in precipitation of Na-smectite and chlorite.
Thus, the above-mentioned numerical simulation results suggest that thermo-hydro-chemical numerical simulation can provide a better understanding of heat transport, groundwater flow, and reactive

  5. Long-Time Numerical Integration of the Three-Dimensional Wave Equation in the Vicinity of a Moving Source

    NASA Technical Reports Server (NTRS)

    Ryabenkii, V. S.; Turchaninov, V. I.; Tsynkov, S. V.

    1999-01-01

    We propose a family of algorithms for solving numerically a Cauchy problem for the three-dimensional wave equation. The sources that drive the equation (i.e., the right-hand side) are compactly supported in space for any given time; they, however, may actually move in space with a subsonic speed. The solution is calculated inside a finite domain (e.g., sphere) that also moves with a subsonic speed and always contains the support of the right-hand side. The algorithms employ a standard consistent and stable explicit finite-difference scheme for the wave equation. They allow one to calculate the solution for arbitrarily long time intervals without error accumulation and with a fixed, non-growing amount of CPU time and memory required for advancing one time step. The algorithms are inherently three-dimensional; they rely on the presence of lacunae in the solutions of the wave equation in oddly dimensional spaces. The methodology presented in the paper is, in fact, a building block for constructing the nonlocal highly accurate unsteady artificial boundary conditions to be used for the numerical simulation of waves propagating with finite speed over unbounded domains.
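
The building block the abstract mentions, a standard consistent and stable explicit finite-difference scheme for the wave equation, can be illustrated in one spatial dimension (a simplification; the paper's lacunae-based construction is inherently 3D). The sketch below advances the familiar leapfrog scheme for one full period of a standing wave and checks it against the exact solution.

```python
import numpy as np

# Explicit (leapfrog) scheme for u_tt = c^2 u_xx on [0, 1] with fixed ends,
# verified against the standing-wave solution sin(pi x) cos(pi c t).
c, dx = 1.0, 0.01
r = 0.5                            # Courant number c*dt/dx (stable for r <= 1)
dt = r * dx / c
x = np.linspace(0.0, 1.0, 101)

u_prev = np.sin(np.pi * x)         # u(x, 0); initial velocity is zero
u = u_prev.copy()                  # first step from a Taylor expansion
u[1:-1] = u_prev[1:-1] + 0.5 * r**2 * (u_prev[2:] - 2.0*u_prev[1:-1] + u_prev[:-2])

n_steps = int(round(2.0 / dt))     # one full period: t = 2/c
for _ in range(n_steps - 1):
    u_next = np.zeros_like(u)      # endpoints stay pinned at zero
    u_next[1:-1] = (2.0*u[1:-1] - u_prev[1:-1]
                    + r**2 * (u[2:] - 2.0*u[1:-1] + u[:-2]))
    u_prev, u = u, u_next

exact = np.sin(np.pi * x) * np.cos(np.pi * c * n_steps * dt)
max_err = float(np.max(np.abs(u - exact)))
```

Only three time levels are stored at any moment, which is the same fixed, non-growing per-step cost the abstract emphasizes.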

  6. Numerical comparison of spectral properties of volume-integral-equation formulations

    NASA Astrophysics Data System (ADS)

    Markkanen, Johannes; Ylä-Oijala, Pasi

    2016-07-01

    We study and compare spectral properties of various volume-integral-equation formulations. The equations are written for the electric flux, current, field, and potentials, and discretized with basis functions spanning the appropriate function spaces. Each formulation leads to an eigenvalue distribution of a different kind due to the effects of the discretization procedure, namely, the choice of basis and testing functions. The discrete spectrum of the potential formulation reproduces the theoretically predicted spectrum almost exactly, while the spectra of the other formulations deviate from the ideal one. It is shown that the potential formulation has the spectral properties desired from the preconditioning perspective.

  7. Elementary Techniques of Numerical Integration and Their Computer Implementation. Applications of Elementary Calculus to Computer Science. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Unit 379.

    ERIC Educational Resources Information Center

    Motter, Wendell L.

    It is noted that there are some integrals which cannot be evaluated by determining an antiderivative, and these integrals must be subjected to other techniques. Numerical integration is one such method; it provides a sum that is an approximate value for some integral types. This module's purpose is to introduce methods of numerical integration and…
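
The methods the module introduces are the standard midpoint, trapezoidal, and Simpson's rules. A minimal sketch (standard textbook formulas, tested on an integral with a known answer) shows their relative accuracy at the same panel count:

```python
import math

def midpoint(f, a, b, n):
    # Composite midpoint rule: sample at panel centers.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def trapezoidal(f, a, b, n):
    # Composite trapezoidal rule: average of panel endpoints.
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def simpson(f, a, b, n):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2.0 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3.0

# Test problem with a known answer: the integral of sin(x) over [0, pi] is 2.
exact = 2.0
m = midpoint(math.sin, 0.0, math.pi, 32)
t = trapezoidal(math.sin, 0.0, math.pi, 32)
s = simpson(math.sin, 0.0, math.pi, 32)
```

With 32 panels, midpoint and trapezoidal are second-order accurate (midpoint's error is about half the trapezoidal rule's, with opposite sign), while Simpson's rule is fourth-order and several orders of magnitude more accurate here.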

  8. Near-field dispersion of produced formation water (PFW) in the Adriatic Sea: an integrated numerical-chemical approach.

    PubMed

    Cianelli, D; Manfra, L; Zambianchi, E; Maggi, C; Cappiello, A; Famiglini, G; Mannozzi, M; Cicero, A M

    2008-05-01

    Produced formation waters (PFWs), a by-product of both oil and gas extraction, are separated from hydrocarbons onboard oil platforms and then discharged into the sea through submarine outfalls. The dispersion of PFWs into the environment may have a potential impact on marine ecosystems. We reproduce the initial PFW-seawater mixing process by means of the UM3 model applied to offshore natural gas platforms currently active in the Northern Adriatic Sea (Mediterranean Sea). Chemical analyses lead to the identification of a chemical tracer (diethylene glycol) which enables us to follow the fate of PFWs into receiving waters. The numerical simulations are realized in different seasonal conditions using both measured oceanographic data and tracer concentrations. The numerical results show the spatial and temporal plume development in different stratification and ambient current conditions. The analytical approach measures concentrations of diethylene glycol at a maximum sampling distance of 25 m. The results show a good agreement between field observations and model predictions in the near-field area. The integration of numerical results with chemical analyses also provides new insight to plan and optimize PFW monitoring and discharge. PMID:18289661

  9. Numerical methods for the simulation of complex multi-body flows with applications for the integrated Space Shuttle vehicle

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    1992-01-01

    This project forms part of the long term computational effort to simulate the time dependent flow over the integrated Space Shuttle vehicle (orbiter, solid rocket boosters (SRB's), external tank (ET), and attach hardware) during its ascent mode for various nominal and abort flight conditions. Due to the limitations of experimental data such as wind tunnel wall effects and the difficulty of safely obtaining valid flight data, numerical simulations are undertaken to supplement the existing data base. This data can then be used to predict the aerodynamic behavior over a wide range of flight conditions. Existing computational results show relatively good overall comparison with experiments but further refinement is required to reduce numerical errors and to obtain finer agreements over a larger parameter space. One of the important goals of this project is to obtain better comparisons between numerical simulations and experiments. In the simulations performed so far, the geometry has been simplified in various ways to reduce the complexity so that useful results can be obtained in a reasonable time frame due to limitations in computer resources. In this project, the finer details of the major components of the Space Shuttle are modeled better by including more complexity in the geometry definition. Smaller components not included in early Space Shuttle simulations will now be modeled and gridded.

  10. Computational and numerical aspects of using the integral equation method for adhesive layer fracture mechanics analysis

    SciTech Connect

    Giurgiutiu, V.; Ionita, A.; Dillard, D.A.; Graffeo, J.K.

    1996-12-31

    Fracture mechanics analysis of adhesively bonded joints has attracted considerable attention in recent years. A possible approach to the analysis of adhesive layer cracks is to study a brittle adhesive between two elastic half-planes representing the substrates. A 2-material, 3-region elasticity problem is set up and has to be solved. A modeling technique based on the work of Fleck, Hutchinson, and Suo is used. Two complex potential problems using Muskhelishvili's formulation are set up for the 3-region, 2-material model: (a) a distribution of edge dislocations is employed to simulate the crack and its near field; and (b) a crack-free problem is used to simulate the effect of the external loading applied in the far field. Superposition of the two problems is followed by matching tractions and displacements at the bimaterial boundaries. The Cauchy principal value integral is used to treat the singularities. Imposing the traction-free boundary conditions over the entire crack length yields a linear system of two integral equations. The parameters of the problem are Dundurs' elastic mismatch coefficients, α and β, and the ratio c/H representing the geometric position of the crack in the adhesive layer.

  11. Integrating experimental and numerical methods for a scenario-based quantitative assessment of subsurface energy storage options

    NASA Astrophysics Data System (ADS)

    Kabuth, Alina; Dahmke, Andreas; Hagrey, Said Attia al; Berta, Márton; Dörr, Cordula; Koproch, Nicolas; Köber, Ralf; Köhn, Daniel; Nolde, Michael; Tilmann Pfeiffer, Wolf; Popp, Steffi; Schwanebeck, Malte; Bauer, Sebastian

    2016-04-01

    Within the framework of the transition to renewable energy sources ("Energiewende"), the German government defined the target of producing 60 % of the final energy consumption from renewable energy sources by the year 2050. However, renewable energies are subject to natural fluctuations. Energy storage can help to buffer the resulting time shifts between production and demand. Subsurface geological structures provide large potential capacities for energy stored in the form of heat or gas on daily to seasonal time scales. In order to explore this potential sustainably, the possible induced effects of energy storage operations have to be quantified for both specified normal operation and events of failure. The ANGUS+ project therefore integrates experimental laboratory studies with numerical approaches to assess subsurface energy storage scenarios and monitoring methods. Subsurface storage options for gas, i.e. hydrogen, synthetic methane and compressed air in salt caverns or porous structures, as well as subsurface heat storage are investigated with respect to site prerequisites, storage dimensions, induced effects, monitoring methods and integration into spatial planning schemes. The conceptual interdisciplinary approach of the ANGUS+ project towards the integration of subsurface energy storage into a sustainable subsurface planning scheme is presented here, and this approach is then demonstrated using the examples of two selected energy storage options: Firstly, the option of seasonal heat storage in a shallow aquifer is presented. Coupled thermal and hydraulic processes induced by periodic heat injection and extraction were simulated in the open-source numerical modelling package OpenGeoSys. Situations of specified normal operation as well as cases of failure in operational storage with leaking heat transfer fluid are considered. Bench-scale experiments provided parameterisations of temperature dependent changes in shallow groundwater hydrogeochemistry. As a

  12. Mosaic-skeleton method as applied to the numerical solution of three-dimensional Dirichlet problems for the Helmholtz equation in integral form

    NASA Astrophysics Data System (ADS)

    Kashirin, A. A.; Smagin, S. I.; Taltykina, M. Yu.

    2016-04-01

    Interior and exterior three-dimensional Dirichlet problems for the Helmholtz equation are solved numerically. They are formulated as equivalent boundary Fredholm integral equations of the first kind and are approximated by systems of linear algebraic equations, which are then solved numerically by applying an iteration method. The mosaic-skeleton method is used to speed up the solution procedure.

  13. Numerical Modeling for Integrated Design of a DNAPL Partitioning Tracer Test

    NASA Astrophysics Data System (ADS)

    McCray, J. E.; Divine, C. E.; Dugan, P. J.; Wolf, L.; Boving, T.; Louth, M.; Brusseau, M. L.; Hayes, D.

    2002-12-01

    Partitioning tracer tests (PTTs) are commonly used to estimate the location and volume of nonaqueous-phase liquids (NAPLs) at contaminated groundwater sites. PTTs are completed before and after remediation efforts as one means to assess remediation effectiveness. PTT design is complex. Numerical models are invaluable tools for designing a PTT, particularly for designing flow rates and selecting tracers to ensure proper tracer breakthrough times, spatial design of injection-extraction wells and rates to maximize tracer capture, well-specific sampling density and frequency, and appropriate tracer-chemical masses. Generally, the design requires consideration of the following factors: type of contaminant; distribution of contaminant at the site, including location of hot spots; site hydraulic characteristics; measurement of the partitioning coefficients for the various tracers; the time allotted to conduct the PTT; evaluation of the magnitude and arrival time of the tracer breakthrough curves; duration of the tracer input pulse; maximum tracer concentrations; analytical detection limits for the tracers; estimation of the capture zone of the well field to ensure tracer mass balance and to limit residual tracer concentrations left in the subsurface; effect of chemical remediation agents on the PTT results; and disposal of the extracted tracer solution. These design principles are applied to a chemical-enhanced remediation effort for a chlorinated-solvent dense NAPL (DNAPL) site at Little Creek Naval Amphibious Base in Virginia Beach, Virginia. For this project, the hydrology and pre-PTT contaminant distribution were characterized using traditional methods (slug tests, groundwater and soil concentrations from monitoring wells, and geoprobe analysis), as well as membrane interface probe analysis. Additional wells were installed after these studies. Partitioning tracers were selected based on the primary DNAPL contaminants at the site, expected NAPL saturations

  14. Accurate stress resultants equations for laminated composite deep thick shells

    SciTech Connect

    Qatu, M.S.

    1995-11-01

    This paper derives accurate equations for the normal and shear force as well as the bending and twisting moment resultants for laminated composite deep, thick shells. The stress resultant equations for laminated composite thick shells are shown to differ from those of plates. This is due to the fact that the stresses over the thickness of the shell have to be integrated over a trapezoidal-like shell element to obtain the stress resultants. Numerical results are obtained and show that accurate stress resultants are needed for laminated composite deep thick shells, especially if the curvature is not spherical.
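    The trapezoidal-element effect described above can be seen in a one-variable numerical sketch: integrating a stress profile through the thickness with and without the (1 + z/R) geometric factor (assumed stress profile and radius; this is an illustration, not the paper's full laminate formulation):

```python
import numpy as np

def trapezoid(f, x):
    """Composite trapezoidal rule (kept explicit for portability)."""
    return 0.5 * np.sum((f[1:] + f[:-1]) * (x[1:] - x[:-1]))

def force_resultant(sigma, z, R=None):
    """In-plane force resultant from the through-thickness stress.

    For a plate, N = integral of sigma dz.  For a deep shell of
    curvature radius R, the trapezoidal shell element introduces the
    (1 + z/R) factor, N = integral of sigma * (1 + z/R) dz.
    """
    w = np.ones_like(z) if R is None else 1.0 + z / R
    return trapezoid(sigma * w, z)

h = 0.02                               # shell thickness [m] (assumed)
z = np.linspace(-h / 2, h / 2, 201)
sigma = 1e6 * (1.0 + 50.0 * z)         # linearly varying stress [Pa]
N_plate = force_resultant(sigma, z)            # plate value: 2.0e4 N/m
N_shell = force_resultant(sigma, z, R=0.05)    # deep shell, R = 2.5 h
```

    For this deep shell the resultant differs from the plate value by a few percent; the discrepancy vanishes as R grows relative to h, which is why the distinction matters only for deep, thick shells.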

  15. The lambda-scheme. [for numerical integration of Euler equation of compressible gas flow]

    NASA Technical Reports Server (NTRS)

    Moretti, G.

    1979-01-01

    A method for integrating the Euler equations of gas dynamics for compressible flows in any hyperbolic case is presented. This method is applied to the Mach number distribution over a stretch of an infinite duct having a variable cross section, and to the distribution in a channel opening into a vacuum with the Mach number equalling 1.04. An example of the ability of this method to handle two-dimensional unsteady flows is shown using the steady shock-and-isobars pattern reached asymptotically about an ablated blunt body with a free stream Mach number equalling 12. A final example is presented where the technique is applied to a three-dimensional steady supersonic flow, with a Mach number of 2 and an angle of attack of 5 deg.

  16. Radial {sup 32}P ion implantation using a coaxial plasma reactor: Activity imaging and numerical integration

    SciTech Connect

    Fortin, M.A.; Dufresne, V.; Paynter, R.; Sarkissian, A.; Stansfield, B.

    2004-12-01

    Beta-emitting biomedical implants are currently employed in angioplasty, in the treatment of certain types of cancers, and in the embolization of aneurysms with platinum coils. Radioisotopes such as {sup 32}P can be implanted using plasma-based ion implantation (PBII). In this article, we describe a reactor that was developed to implant radioisotopes into cylindrical metallic objects. The plasma first ionizes radioisotopes sputtered from a target, and then acts as the source of particles to be implanted into the biased biomedical device. The plasma therefore plays a major role in the ionization/implantation process. Following a sequence of implantation tests, the liners protecting the interior walls of the reactor were changed and the radioactivity on them measured. This study demonstrates that the radioactive deposits on these protective liners, adequately imaged by radiography, can indicate the distribution of the radioisotopes that are not implanted. The resulting maps give unique information about the activity distribution, which is influenced by the sputtering of the {sup 32}P-containing fragments, their ionization in the plasma, and also by the subsequent ion transport mechanisms. Such information can be interpreted and used to significantly improve the efficiency of the implantation procedure. Using a surface barrier detector, a comparative study established a relationship between the gray scale of radiographs of the liners, and activity measurements. An integration process allows the quantification of the activities on the walls and components of the reactor. Finally, the resulting integral of the {sup 32}P activity is correlated to the sum of the radioactivity amounts that were sputtered from radioactive targets inside the implanter before the dismantling procedure. This balance addresses the issue of security regarding PBII technology and confirms the confinement of the radioactivity inside the chamber.

  17. Integrated analysis of millisecond laser irradiation of steel by comprehensive optical diagnostics and numerical simulation

    NASA Astrophysics Data System (ADS)

    Doubenskaia, M.; Smurov, I.; Nagulin, K. Yu.

    2016-04-01

    Complementary optical diagnostic tools are applied to provide a comprehensive analysis of thermal phenomena in millisecond Nd:YAG laser irradiation of steel substrates. The following optical devices are employed: (a) an infrared camera FLIR Phoenix RDASTM equipped with an InSb sensor (3-5 µm band pass, 320 × 256 pixel array), (b) an ultra-rapid camera Phantom V7.1 with an SR-CMOS monochrome sensor in the visible spectral range, reaching up to 10^5 frames per second for a 64 × 88 pixel array, (c) an original multi-wavelength pyrometer in the near-infrared range (1.370-1.531 µm). The following laser radiation parameters are applied: energy per pulse varied in the range 15-30 J at a constant pulse duration of 10 ms, with and without application of a protective gas (Ar). The evolution of the true temperature is restored using multi-colour pyrometry; in this way the melting/solidification dynamics are analysed. The variation of emissivity with temperature is studied, and a hysteresis-type functional dependence is found. The variation of the intensity of surface evaporation, visualised by the Phantom V7.1 camera, is registered and linked with the surface temperature evolution, with different surface roughnesses, and with the influence of the protective gas atmosphere. The vapour plume temperature is determined from the relative intensities of spectral lines. A numerical simulation is carried out applying a thermal model that takes phase transitions into account.
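    The multi-colour pyrometry used to restore the true temperature reduces, in its simplest two-wavelength form, to ratio pyrometry under the Wien approximation: the ratio of intensities at two wavelengths is independent of a grey-body emissivity. A sketch with assumed values (the wavelengths are taken from the pyrometer band quoted above; the temperature and emissivity are invented):

```python
import math

C2 = 1.4388e-2  # second radiation constant [m*K]

def wien_intensity(lam, T, eps=1.0):
    """Grey-body spectral intensity in the Wien approximation."""
    return eps * lam ** -5 * math.exp(-C2 / (lam * T))

def two_color_temperature(i1, i2, lam1, lam2):
    """Ratio pyrometry: temperature from two spectral intensities,
    assuming the emissivity is the same at both wavelengths."""
    return C2 * (1.0 / lam2 - 1.0 / lam1) / math.log((i1 / i2) * (lam1 / lam2) ** 5)

# Round trip at an assumed 1800 K with wavelengths in the NIR band
lam1, lam2 = 1.370e-6, 1.531e-6
T_true = 1800.0
i1 = wien_intensity(lam1, T_true, eps=0.4)   # emissivity cancels...
i2 = wien_intensity(lam2, T_true, eps=0.4)   # ...in the ratio
T_est = two_color_temperature(i1, i2, lam1, lam2)
```

    If the emissivity differs between the two wavelengths (a non-grey surface), the ratio temperature is biased, which is one motivation for the multi-wavelength instrument described in the abstract.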

  18. Integrated numerical design of an innovative Lower Hybrid launcher for Alcator C-Mod

    NASA Astrophysics Data System (ADS)

    Meneghini, O.; Shiraiwa, S.; Beck, W.; Irby, J.; Koert, P.; Parker, R. R.; Viera, R.; Wilson, J.; Wukitch, S.

    2009-11-01

    The new Alcator C-Mod LHCD system (LH2) is based on the concept of a four-way splitter [1] which evenly splits the RF power among the four waveguides that compose one of the 16 columns of the LH grill. In this work several simulation tools have been used to study the LH2 coupling performance and the launched spectra when facing a plasma, numerically verifying the effectiveness of the four-way splitter concept and further improving its design. The TOPLHA code has been used for modeling reflections at the antenna/plasma interface. TOPLHA results have then been coupled to the commercial code CST Microwave Studio to efficiently optimize the four-way splitter geometry for several plasma scenarios. Subsequently, the COMSOL Multiphysics code has been used to self-consistently take into account the electromagnetic-thermal-structural interactions. This comprehensive and predictive analysis has proven very valuable for understanding the behavior of the system when facing the plasma and has profoundly influenced several design choices of the LH2. According to the simulations, the final design ensures even poloidal power splitting for a wide range of plasma parameters, which ultimately results in improved wave coupling and an increased maximum operating power.

  19. Integrated numerical design of an innovative Lower Hybrid launcher for Alcator C-Mod

    SciTech Connect

    Meneghini, O.; Shiraiwa, S.; Beck, W.; Irby, J.; Koert, P.; Parker, R. R.; Viera, R.; Wukitch, S.; Wilson, J.

    2009-11-26

    The new Alcator C-Mod LHCD system (LH2) is based on the concept of a four-way splitter [1] which evenly splits the RF power among the four waveguides that compose one of the 16 columns of the LH grill. In this work several simulation tools have been used to study the LH2 coupling performance and the launched spectra when facing a plasma, numerically verifying the effectiveness of the four-way splitter concept and further improving its design. The TOPLHA code has been used for modeling reflections at the antenna/plasma interface. TOPLHA results have then been coupled to the commercial code CST Microwave Studio to efficiently optimize the four-way splitter geometry for several plasma scenarios. Subsequently, the COMSOL Multiphysics code has been used to self-consistently take into account the electromagnetic-thermal-structural interactions. This comprehensive and predictive analysis has proven very valuable for understanding the behavior of the system when facing the plasma and has profoundly influenced several design choices of the LH2. According to the simulations, the final design ensures even poloidal power splitting for a wide range of plasma parameters, which ultimately results in improved wave coupling and an increased maximum operating power.

  20. A robust and accurate formulation of molecular and colloidal electrostatics.

    PubMed

    Sun, Qiang; Klaseboer, Evert; Chan, Derek Y C

    2016-08-01

    This paper presents a re-formulation of the boundary integral method for the Debye-Hückel model of molecular and colloidal electrostatics that removes the mathematical singularities that have to date been accepted as an intrinsic part of the conventional boundary integral equation method. The essence of the present boundary regularized integral equation formulation consists of subtracting a known solution from the conventional boundary integral method in such a way as to cancel out the singularities associated with the Green's function. This approach better reflects the non-singular physical behavior of the systems on boundaries and has the following benefits: (i) the surface integrals can be evaluated accurately using quadrature without any need to devise special numerical integration procedures; (ii) quadratic or spline-function surface elements can be used to represent the surface more accurately, with the variation of the functions within each element represented to a consistent level of precision by appropriate interpolation functions; (iii) electric fields can be calculated accurately and directly from the potential, even at boundaries, without having to solve hypersingular integral equations, which imparts high precision to the calculated Maxwell stress tensor and, consequently, to intermolecular or colloidal forces; (iv) geometric configurations in which different parts of the boundary are very close together can be handled reliably, without numerical instabilities, so that potentials, fields, and forces between surfaces can be found accurately at surface separations down to near contact; and (v) the simplicity of a formulation that does not require complex algorithms to handle singularities yields significant savings in coding effort and fewer opportunities for coding errors. These advantages are illustrated using examples drawn from molecular and colloidal electrostatics. PMID:27497538

  1. A robust and accurate formulation of molecular and colloidal electrostatics

    NASA Astrophysics Data System (ADS)

    Sun, Qiang; Klaseboer, Evert; Chan, Derek Y. C.

    2016-08-01

    This paper presents a re-formulation of the boundary integral method for the Debye-Hückel model of molecular and colloidal electrostatics that removes the mathematical singularities that have to date been accepted as an intrinsic part of the conventional boundary integral equation method. The essence of the present boundary regularized integral equation formulation consists of subtracting a known solution from the conventional boundary integral method in such a way as to cancel out the singularities associated with the Green's function. This approach better reflects the non-singular physical behavior of the systems on boundaries and has the following benefits: (i) the surface integrals can be evaluated accurately using quadrature without any need to devise special numerical integration procedures; (ii) quadratic or spline-function surface elements can be used to represent the surface more accurately, with the variation of the functions within each element represented to a consistent level of precision by appropriate interpolation functions; (iii) electric fields can be calculated accurately and directly from the potential, even at boundaries, without having to solve hypersingular integral equations, which imparts high precision to the calculated Maxwell stress tensor and, consequently, to intermolecular or colloidal forces; (iv) geometric configurations in which different parts of the boundary are very close together can be handled reliably, without numerical instabilities, so that potentials, fields, and forces between surfaces can be found accurately at surface separations down to near contact; and (v) the simplicity of a formulation that does not require complex algorithms to handle singularities yields significant savings in coding effort and fewer opportunities for coding errors. These advantages are illustrated using examples drawn from molecular and colloidal electrostatics.
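    The singularity-subtraction idea can be demonstrated on a scalar toy problem: a log-singular integrand becomes amenable to ordinary quadrature once a known singular part is removed. This is only an analogue of, not the paper's, boundary-integral formulation:

```python
import numpy as np

# Evaluate I = integral over [0, 1] of cos(x) * ln(x) dx, whose
# integrand diverges logarithmically at x = 0.  Subtracting the known
# singular part cos(0) * ln(x), whose integral over [0, 1] is -1,
# leaves the bounded remainder (cos(x) - 1) * ln(x), which plain
# trapezoidal quadrature handles accurately.
x = np.linspace(0.0, 1.0, 100001)
lx = np.log(np.where(x > 0.0, x, 1.0))            # placeholder at x = 0
g = np.where(x > 0.0, (np.cos(x) - 1.0) * lx, 0.0)  # remainder -> 0 at 0
I_reg = 0.5 * np.sum((g[1:] + g[:-1]) * np.diff(x)) + (-1.0)
# Series reference: I = -sum_k (-1)^k / ((2k+1)^2 (2k)!) ~ -0.9460831
```

    Applying the same trapezoidal rule to the raw integrand would lose several digits near the endpoint; after subtraction no special quadrature is needed, which is point (i) of the abstract in miniature.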

  2. A fully coupled regional atmospheric numerical model for integrated air quality and weather forecasting.

    NASA Astrophysics Data System (ADS)

    Freitas, S. R.; Longo, K. M.; Marecal, V.; Pirre, M.; Gmai, T.

    2012-04-01

    A new numerical modelling tool devoted to local and regional studies of atmospheric chemistry from the surface to the lower stratosphere, designed for both operational and research purposes, will be presented. This model is based on the limited-area model CATT-BRAMS (Coupled Aerosol-Tracer Transport model to the Brazilian developments on the Regional Atmospheric Modeling System; Freitas et al. 2009, Longo et al. 2010), a meteorological model (BRAMS) including transport processes for gaseous tracers and aerosols (the CATT model). BRAMS is a version of the RAMS model (Walko et al. 2000) adapted to better represent tropical and subtropical processes, with several new features. CATT-BRAMS has been used operationally at CPTEC (Brazilian Center for Weather Prediction and Climate Studies) since 2003, providing coupled weather and air quality forecasts. In Chemistry-CATT-BRAMS (hereafter CCATT-BRAMS), a chemical module is fully coupled to the meteorological/tracer transport model CATT-BRAMS. This module includes gaseous chemistry, photochemistry, scavenging and dry deposition. The CCATT-BRAMS model takes advantage of the BRAMS developments specific to the tropics/subtropics and of the recent availability of preprocessing tools for chemical mechanisms and of fast codes for photolysis rates. Like BRAMS, this model is conceived to run at horizontal resolutions ranging from a few meters to more than a hundred kilometres, depending on the scientific objective. In the last decade CCATT-BRAMS has been used extensively, mainly over South America, with strong emphasis on Amazonia and the main South American megacities. An overview of the model development and main applications will be presented.

  3. Integrating Geochemical and Geodynamic Numerical Models of Mantle Evolution and Plate Tectonics

    NASA Astrophysics Data System (ADS)

    Tackley, P. J.; Xie, S.

    2001-12-01

    The thermal and chemical evolution of Earth's mantle and plates are inextricably coupled by the plate tectonic - mantle convective system. Convection causes chemical differentiation, recycling and mixing, while chemical variations affect the convection through physical properties such as density and viscosity which depend on composition. It is now possible to construct numerical mantle convection models that track the thermo-chemical evolution of major and minor elements, and which can be used to test prospective models and hypotheses regarding Earth's chemical and thermal evolution. Model thermal and chemical structures can be compared to results from seismic tomography, while geochemical signatures (e.g., trace element ratios) can be compared to geochemical observations. The presented, two-dimensional model combines a simplified 2-component major element model with tracking of the most important trace elements, using a tracer method. Melting is self-consistently treated using a solidus, with melt placed on the surface as crust. Partitioning of trace elements occurs between melt and residue. Decaying heat-producing elements and secular cooling of the mantle and core provide the driving heat sources. Pseudo-plastic yielding of the lithosphere gives a first-order approximation of plate tectonics, and also allows planets with a rigid lid or intermittent plate tectonics to be modeled simply by increasing the yield strength. Preliminary models with an initially homogeneous mantle show that regions with a HIMU-like signature can be generated by crustal recycling, and regions with high 3He/4He ratios can be generated by residuum recycling. Outgassing of Argon is within the observed range. Models with initially layered mantles will also be investigated. 
In future it will be important to include a more realistic bulk compositional model that allows continental crust as well as oceanic crust to form, and to extend the model to three dimensions since toroidal flow may alter

  4. A numerical model of continental topographic evolution integrating thin sheet tectonics, river transport, and climate

    NASA Astrophysics Data System (ADS)

    Garcia-Castellanos, D.; Jimenez-Munt, I.

    2013-12-01

    How much do erosion and sedimentation at the crust's surface influence the patterns and distribution of tectonic deformation? This question has mostly been addressed from a numerical modelling perspective, at scales ranging from local to orogenic. Here we present a model that aims at constraining this phenomenon at the continental scale. For this purpose, we couple a thin-sheet viscous model of continental deformation with a stream-power surface transport model. The model also incorporates flexural isostatic compensation, which permits the formation of large sedimentary foreland basins, and a precipitation model that reproduces basic climatic effects such as continentality, orographic rainfall and rain shadow. We quantify the feedbacks between these four processes in a synthetic scenario inspired by the India-Asia collision. The model reproduces first-order characteristics of the growth of the Tibetan Plateau as a result of the Indian indentation. A large intramountain basin (comparable to the Tarim Basin) develops when a hard inherited area is predefined in the undeformed foreland (Asia). The amount of sediment trapped in it is very sensitive to climatic parameters, particularly to evaporation, because evaporation crucially determines whether the basin's drainage is endorheic or exorheic. We identify some degree of feedback between the deep and surface processes, leading locally to a <20% increase in deformation rates when orographic precipitation is accounted for (relative to a reference model with evenly distributed precipitation). This enhanced thickening of the crust takes place particularly in areas of concentrated precipitation and steep slope, i.e., at the upwind flank of the growing plateau, and is particularly strong at the corners of the indenter (syntaxes). We hypothesize that this may provide clues for better understanding the mechanisms underlying the intriguing tectonic aneurysms documented in the syntaxes of the Himalayas.
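    The stream-power surface transport component mentioned above is commonly written as an erosion rate E = K A^m S^n in drainage area A and slope S. A minimal sketch with illustrative (not the paper's) parameter values:

```python
def stream_power_erosion(K, A, S, m=0.5, n=1.0, dt=1.0):
    """Eroded depth over a time step dt under the stream-power law
    E = K * A**m * S**n.  K is an erodibility constant; A is upstream
    drainage area, S the local channel slope; m and n are empirical
    exponents (m = 0.5, n = 1 is a common textbook choice)."""
    return K * A ** m * S ** n * dt

# With m = 0.5, doubling the drainage area at fixed slope increases
# the erosion rate by a factor sqrt(2).  All values are illustrative.
e1 = stream_power_erosion(K=1e-5, A=1e6, S=0.02)
e2 = stream_power_erosion(K=1e-5, A=2e6, S=0.02)
```

    In a coupled model like the one described, A and S are recomputed from the evolving topography each step, and the precipitation field modulates the effective discharge entering the law.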

  5. Runge-Kutta type methods with special properties for the numerical integration of ordinary differential equations

    NASA Astrophysics Data System (ADS)

    Kalogiratou, Z.; Monovasilis, Th.; Psihoyios, G.; Simos, T. E.

    2014-03-01

    In this work we review single step methods of the Runge-Kutta type with special properties. Among them are methods specially tuned to integrate problems that exhibit a pronounced oscillatory character and such problems arise often in celestial mechanics and quantum mechanics. Symplectic methods, exponentially and trigonometrically fitted methods, minimum phase-lag and phase-fitted methods are presented. These are Runge-Kutta, Runge-Kutta-Nyström and Partitioned Runge-Kutta methods. The theory of constructing such methods is given as well as several specific methods. In order to present the performance of the methods we have tested 58 methods from all categories. We consider the two dimensional harmonic oscillator, the two body problem, the pendulum problem and the orbital problem studied by Stiefel and Bettis. Also we have tested the methods on the computation of the eigenvalues of the one dimensional time independent Schrödinger equation with the harmonic oscillator, the doubly anharmonic oscillator and the exponential potentials.
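    The benefit of symplectic methods on oscillatory problems such as the harmonic oscillator can be seen already at first order: explicit Euler lets the energy drift without bound, while symplectic Euler keeps it bounded near the true value. A minimal sketch (not one of the 58 methods tested in the paper):

```python
def explicit_euler(q, p, h):
    """One explicit Euler step for q' = p, p' = -q."""
    return q + h * p, p - h * q

def symplectic_euler(q, p, h):
    """One symplectic Euler step: update p first, then q with the
    updated p.  This small change makes the map symplectic."""
    p_new = p - h * q
    return q + h * p_new, p_new

def energy(q, p):
    return 0.5 * (p * p + q * q)

h, n = 0.05, 2000
qe, pe = 1.0, 0.0     # explicit Euler state
qs, ps = 1.0, 0.0     # symplectic Euler state
for _ in range(n):
    qe, pe = explicit_euler(qe, pe, h)
    qs, ps = symplectic_euler(qs, ps, h)
# Explicit Euler's energy grows as (1 + h^2)^n; symplectic Euler's
# oscillates within O(h) of the initial value 0.5.
```

    The same qualitative contrast motivates the exponentially/trigonometrically fitted and phase-fitted variants reviewed in the paper: they additionally tune the method to the problem's dominant frequency.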

  6. Accurate orbit propagation with planetary close encounters

    NASA Astrophysics Data System (ADS)

    Baù, Giulio; Milani Comparetti, Andrea; Guerra, Francesca

    2015-08-01

    We tackle the problem of accurately propagating the motion of those small bodies that undergo close approaches with a planet. The literature is lacking on this topic and the reliability of the numerical results is not sufficiently discussed. The high-frequency components of the perturbation generated by a close encounter make the propagation particularly challenging, both for the dynamical stability of the formulation and for the numerical stability of the integrator. In our approach a fixed step-size, fixed-order multistep integrator is combined with a regularized formulation of the perturbed two-body problem. When the propagated object enters the region of influence of a celestial body, the latter becomes the new primary body of attraction. Moreover, the formulation and the step-size are also changed if necessary. We present: 1) the restarter procedure applied to the multistep integrator whenever the primary body is changed; 2) new analytical formulae for setting the step-size (given the order of the multistep method, the formulation and the initial osculating orbit) in order to control the accumulation of the local truncation error and guarantee numerical stability during the propagation; 3) a new definition of the region of influence in the phase space. We test the propagator with some real asteroids subject to the gravitational attraction of the planets, the Yarkovsky effect and relativistic perturbations. Our goal is to show that the proposed approach improves the performance of both the propagator implemented in the OrbFit software package (currently used by the NEODyS service) and a propagator based on a variable step-size and order multistep method combined with Cowell's formulation (i.e. direct integration of position and velocity in either the physical or a fictitious time).
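    The pairing of a fixed-step multistep method with a restarter can be sketched in miniature: a two-step Adams-Bashforth scheme needs one extra back value, which a single Runge-Kutta step supplies whenever integration (re)starts. This is illustrative only; the paper's integrator, formulation switches and step-size formulae are far more elaborate:

```python
def ab2_integrate(f, t0, y0, h, n):
    """Fixed-step 2-step Adams-Bashforth for y' = f(t, y), started
    (and restartable after any discontinuous change) with a single
    RK4 step that supplies the missing back value."""
    def rk4_step(t, y):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    t, y = t0, y0
    f_prev = f(t, y)            # back value kept for the AB2 formula
    y = rk4_step(t, y)          # starter step
    t += h
    for _ in range(n - 1):
        f_now = f(t, y)
        y = y + h * (1.5 * f_now - 0.5 * f_prev)   # AB2 update
        f_prev = f_now
        t += h
    return y

# y' = -y, y(0) = 1, integrated to t = 1; compare with exp(-1)
y_end = ab2_integrate(lambda t, y: -y, 0.0, 1.0, 1e-3, 1000)
```

    In the paper's setting the restart happens every time the primary body (and with it the formulation) changes; the toy above shows only the mechanical part of that procedure.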

  7. Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.; Braithwaite, David W.

    2016-01-01

    In this review, we attempt to integrate two crucial aspects of numerical development: learning the magnitudes of individual numbers and learning arithmetic. Numerical magnitude development involves gaining increasingly precise knowledge of increasing ranges and types of numbers: from non-symbolic to small symbolic numbers, from smaller to larger…

  8. Evaluation of Mixed-Mode Integral Invariant for Polymer Material Through a Coupled Experimental-Numerical Process

    NASA Astrophysics Data System (ADS)

    Meite, M.; Pop, O.; Dubois, F.; Absi, J.

    2010-06-01

    The elements of real structures are usually subject to mixed-mode loadings, owing to their geometry and the loading orientations. In this case the propagation of any cracks is characterised by mixed-mode kinematics. In order to characterise the fracture process in mixed mode, it is necessary to separate the contributions of each mode. Our study is limited to plane configurations, in which the mixed mode is an association of the opening and shear modes. Mixed-mode fracture is evaluated through experimental tests on SEN specimens at different mixed-mode ratios. The fracture modes are separated using the invariant integral Mθ. Our study thus combines experimental and numerical approaches.

  9. Magnitude Knowledge: The Common Core of Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.

    2016-01-01

    The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: 1) representing increasingly precisely the magnitudes of non-symbolic…

  11. An integrated numerical framework for water quality modelling in cold-region rivers: A case of the lower Athabasca River.

    PubMed

    Shakibaeinia, Ahmad; Kashyap, Shalini; Dibike, Yonas B; Prowse, Terry D

    2016-11-01

    There is a great deal of interest in determining the state and variation of water quality parameters in the lower Athabasca River (LAR) ecosystem, northern Alberta, Canada, due to industrial developments in the region. As it is a cold-region river, the annual cycle of ice cover formation and breakup plays a key role in water quality transformation and transport processes. An integrated deterministic numerical modelling framework is developed and applied for long-term and detailed simulation of the state and variation (spatial and temporal) of major water quality constituents in both open-water and ice-covered conditions in the LAR. The framework is based on 1D and 2D hydrodynamic and water quality models externally coupled with a 1D river ice process model to account for cold-season effects. The models are calibrated/validated using available measured data and applied to the simulation of dissolved oxygen (DO) and nutrients (i.e., nitrogen and phosphorus). The results show the effect of winter ice cover in reducing the DO concentration, and a fluctuating temporal trend for DO and nutrients during summer periods, with substantial differences in concentration between the main channel and flood plains. This numerical framework can be the basis for future scenario-based water quality studies in the LAR. PMID:27376919
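    The effect of reduced reaeration under ice on dissolved oxygen can be illustrated with the classic Streeter-Phelps deficit formula, a textbook building block far simpler than the paper's coupled framework; all parameter values below are invented:

```python
import math

def do_deficit(t, L0, D0, kd, ka):
    """Streeter-Phelps dissolved-oxygen deficit at travel time t [d]
    downstream of a BOD load.  L0: initial BOD [mg/L], D0: initial
    deficit [mg/L], kd: deoxygenation rate [1/d], ka: reaeration
    rate [1/d]."""
    return (kd * L0 / (ka - kd)) * (math.exp(-kd * t) - math.exp(-ka * t)) \
           + D0 * math.exp(-ka * t)

# Ice cover suppresses atmosphere-water gas exchange, so ka drops
# sharply and the oxygen deficit deepens (illustrative values).
d_open = do_deficit(t=2.0, L0=10.0, D0=1.0, kd=0.3, ka=0.8)
d_ice  = do_deficit(t=2.0, L0=10.0, D0=1.0, kd=0.3, ka=0.1)
```

    The qualitative outcome, a deeper DO sag under ice, matches the winter behaviour the study reports, though the real framework resolves it spatially and couples it to the ice-process models.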

  12. Numerical experiment of transient and steady characteristics of ultrasonic-measurement-integrated simulation in three-dimensional blood flow analysis.

    PubMed

    Funamoto, Kenichi; Hayase, Toshiyuki; Saijo, Yoshifumi; Yambe, Tomoyuki

    2009-01-01

    In ultrasonic-measurement-integrated (UMI) simulation of blood flows, feedback signals proportional to the difference between the computed velocity vector and that optimally estimated from Doppler velocities are applied in the feedback domain to reproduce the flow field. In this paper, we investigated the transient and steady characteristics of UMI simulation by numerical experiment. A steady standard numerical solution of a three-dimensional blood flow in an aneurysmal aorta was first defined with realistic boundary conditions. The UMI simulation was performed assuming that the realistic velocity profiles at the upstream and downstream boundaries were unknown, but that the Doppler velocities of the standard solution were available in the aneurysmal domain or the feedback domain through virtual color Doppler imaging. The application of feedback in UMI simulation made the computational result approach the standard solution. As the feedback gain increased, the error decreased faster and the steady error became smaller, implying that traceability to the standard solution improves. The positioning of the ultrasound probes influenced the result: a probe height at or below that of the aneurysm appeared to be the better choice for UMI simulation using one probe. Increasing the velocity information by using multiple probes further enhanced the UMI simulation, achieving ten times faster convergence and a greater reduction of error. PMID:19011966
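    The feedback mechanism of measurement-integrated simulation can be reduced to one dimension: nudge a model ODE toward "measured" values with a gain K, and a larger K gives faster convergence and a smaller steady error, the trend the paper reports. This is a toy analogue, not the authors' hemodynamic solver:

```python
import math

def umi_toy(K, v0, h=1e-3, n=5000):
    """Toy measurement-integrated simulation.  The model
    v' = cos(t) + K * (u_obs - v) is nudged toward the 'measured'
    signal u_obs(t) = sin(t) while starting from the wrong state v0.
    Returns the remaining error against the truth at the final time."""
    t, v = 0.0, v0
    for _ in range(n):
        u_obs = math.sin(t)                     # the 'measurement'
        v += h * (math.cos(t) + K * (u_obs - v))  # forward Euler step
        t += h
    return abs(v - math.sin(t))

err_lo = umi_toy(K=1.0, v0=2.0)     # low gain: slow error decay
err_hi = umi_toy(K=20.0, v0=2.0)    # high gain: faster, smaller error
```

    The error obeys roughly e' = -K e, so the decay rate scales with the gain; in the 3D case the same idea competes with discretization error and the limited coverage of the feedback domain.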

  13. Fast and accurate propagation of coherent light

    PubMed Central

    Lewis, R. D.; Beylkin, G.; Monzón, L.

    2013-01-01

    We describe a fast algorithm to propagate, for any user-specified accuracy, a time-harmonic electromagnetic field between two parallel planes separated by a linear, isotropic and homogeneous medium. The analytical formulation of this problem (ca 1897) requires the evaluation of the so-called Rayleigh–Sommerfeld integral. If the distance between the planes is small, this integral can be accurately evaluated in the Fourier domain; if the distance is very large, it can be accurately approximated by asymptotic methods. In the large intermediate region of practical interest, where the oscillatory Rayleigh–Sommerfeld kernel must be applied directly, current numerical methods can be highly inaccurate without indicating this fact to the user. In our approach, for any user-specified accuracy ϵ>0, we approximate the kernel by a short sum of Gaussians with complex-valued exponents, and then efficiently apply the result to the input data using the unequally spaced fast Fourier transform. The resulting algorithm has computational complexity , where we evaluate the solution on an N×N grid of output points given an M×M grid of input samples. Our algorithm maintains its accuracy throughout the computational domain. PMID:24204184
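    The Fourier-domain evaluation that the authors note is accurate for small plane separations is, in its simplest scalar form, the angular-spectrum method. A sketch assuming ideal sampling; evanescent components are simply dropped here rather than attenuated:

```python
import numpy as np

def angular_spectrum(u0, wavelength, dx, z):
    """Propagate a sampled scalar field u0 between parallel planes a
    distance z apart by the angular-spectrum (Fourier-domain) method.
    Each plane-wave component picks up the phase exp(i*k_z*z);
    evanescent components (k_z imaginary) are dropped in this sketch."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    kz_sq = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2   # (k_z/2pi)^2
    kz = 2.0j * np.pi * np.sqrt(np.maximum(kz_sq, 0.0))
    H = np.exp(kz * z) * (kz_sq > 0)                      # transfer fn
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# A normally incident plane wave only acquires the phase e^{2*pi*i*z/lambda}
u0 = np.ones((32, 32), dtype=complex)
u1 = angular_spectrum(u0, wavelength=0.5e-6, dx=1e-6, z=10e-6)
```

    As the abstract explains, this route degrades when z grows large relative to the aperture; the paper's Gaussian-sum kernel approximation is designed precisely for that intermediate regime.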

  14. FeynDyn: A MATLAB program for fast numerical Feynman integral calculations for open quantum system dynamics on GPUs

    NASA Astrophysics Data System (ADS)

    Dattani, Nikesh S.

    2013-12-01

    Programming language: MATLAB R2012a. Computer: See “Operating system”. Operating system: Any operating system that can run MATLAB R2007a or above. Classification: 4.4. Nature of problem: Calculating the dynamics of the reduced density operator of an open quantum system. Solution method: Numerical Feynman integral. Running time: Depends on the input parameters. See the main text for examples.

  15. Hydro-geophysical observations integration in numerical model: case study in Mediterranean karstic unsaturated zone (Larzac, France)

    NASA Astrophysics Data System (ADS)

    Champollion, Cédric; Fores, Benjamin; Le Moigne, Nicolas; Chéry, Jean

    2016-04-01

    Karstic hydro-systems are highly non-linear and heterogeneous, yet they are one of the main water resources in the Mediterranean area. Neither local measurements in boreholes nor analysis at the spring can capture the variability of water storage. In recent years, ground-based geophysical measurements (such as gravity, electrical resistivity or seismological data) have allowed water storage to be followed in heterogeneous hydrosystems at a scale intermediate between boreholes and the basin. Beyond classical rigorous monitoring, the integration of geophysical data into hydrological numerical models is needed for both process interpretation and quantification. A karstic geophysical observatory (GEK: Géodésie de l'Environnement Karstique, OSU OREME, SNO H+) has been set up in the Mediterranean area in the south of France. The observatory is installed on more than 250 m of karstified dolomite, with an unsaturated zone of ~150 m thickness. At the observatory, water levels in boreholes, evapotranspiration and rainfall are classical hydro-meteorological observations, complemented by continuous gravity, resistivity and seismological measurements. The main objective of the study is the modelling of the whole observation dataset with an explicit one-dimensional unsaturated numerical model. The Hydrus software is used for explicit modelling of water storage and transfer, and links the different observations (geophysics, water level, evapotranspiration) with the water saturation. Unknown hydrological parameters (permeability, porosity) are retrieved by stochastic inversion. The scales of investigation of the different observations are discussed in light of the modelling results. A sensitivity study of the measurements against the model is performed and the key hydro-geological processes of the site are presented.

  16. Integrating a Numerical Taxonomic Method and Molecular Phylogeny for Species Delimitation of Melampsora Species (Melampsoraceae, Pucciniales) on Willows in China

    PubMed Central

    Zhao, Peng; Wang, Qing-Hong; Tian, Cheng-Ming; Kakishima, Makoto

    2015-01-01

    The species in genus Melampsora are the causal agents of leaf rust diseases on willows in natural habitats and plantations. However, the classification and recognition of species diversity are challenging because morphological characteristics are scant and morphological variation in Melampsora on willows has not been thoroughly evaluated. Thus, the taxonomy of Melampsora species on willows remains confused, especially in China where 31 species were reported based on either European or Japanese taxonomic systems. To clarify the species boundaries of Melampsora species on willows in China, we tested two approaches for species delimitation inferred from morphological and molecular variations. Morphological species boundaries were determined based on numerical taxonomic analyses of morphological characteristics in the uredinial and telial stages by cluster analysis and one-way analysis of variance. Phylogenetic species boundaries were delineated based on the generalized mixed Yule-coalescent (GMYC) model analysis of the sequences of the internal transcribed spacer (ITS1 and ITS2) regions including the 5.8S and D1/D2 regions of the large nuclear subunit of the ribosomal RNA gene. Numerical taxonomic analyses of 14 morphological characteristics recognized in the uredinial-telial stages revealed 22 morphological species, whereas the GMYC results recovered 29 phylogenetic species. In total, 17 morphological species were in concordance with the phylogenetic species and 5 morphological species were in concordance with 12 phylogenetic species. Both the morphological and molecular data supported 14 morphological characteristics, including 5 newly recognized characteristics and 9 traditionally emphasized characteristics, as effective for the differentiation of Melampsora species on willows in China. 
Based on the concordance and discordance of the two species delimitation approaches, we concluded that integrative taxonomy by using both morphological and molecular variations was

  17. Moving Toward Integrating Gene Expression Profiling Into High-Throughput Testing: A Gene Expression Biomarker Accurately Predicts Estrogen Receptor α Modulation in a Microarray Compendium.

    PubMed

    Ryan, Natalia; Chorley, Brian; Tice, Raymond R; Judson, Richard; Corton, J Christopher

    2016-05-01

    Microarray profiling of chemical-induced effects is being increasingly used in medium- and high-throughput formats. Computational methods are described here to identify molecular targets from whole-genome microarray data using as an example the estrogen receptor α (ERα), often modulated by potential endocrine disrupting chemicals. ERα biomarker genes were identified by their consistent expression after exposure to 7 structurally diverse ERα agonists and 3 ERα antagonists in ERα-positive MCF-7 cells. Most of the biomarker genes were shown to be directly regulated by ERα as determined by ESR1 gene knockdown using siRNA as well as through chromatin immunoprecipitation coupled with DNA sequencing analysis of ERα-DNA interactions. The biomarker was evaluated as a predictive tool using the fold-change rank-based Running Fisher algorithm by comparison to annotated gene expression datasets from experiments using MCF-7 cells, including those evaluating the transcriptional effects of hormones and chemicals. Using 141 comparisons from chemical- and hormone-treated cells, the biomarker gave a balanced accuracy for prediction of ERα activation or suppression of 94% and 93%, respectively. The biomarker was able to correctly classify 18 out of 21 (86%) ER reference chemicals including "very weak" agonists. Importantly, the biomarker predictions accurately replicated predictions based on 18 in vitro high-throughput screening assays that queried different steps in ERα signaling. For 114 chemicals, the balanced accuracies were 95% and 98% for activation or suppression, respectively. These results demonstrate that the ERα gene expression biomarker can accurately identify ERα modulators in large collections of microarray data derived from MCF-7 cells. PMID:26865669

  18. Inter-annual variability of air pollutants over East Asia: an integrated analysis using satellite, lidar and numerical model.

    NASA Astrophysics Data System (ADS)

    Yumimoto, K.; Uno, I.; Kuribayashi, M.; Miyazaki, K.; Nishizawa, T.

    2014-12-01

    Air quality in East Asia shows drastic temporal and spatial variability. The rapid economic growth of the last three decades has increased anthropogenic emissions of air pollutants and caused deterioration of air quality in both source and downwind regions. The unprecedentedly heavy PM2.5 pollution over central China in January 2013 reached a maximum PM2.5 concentration of 996 μg/m3 and raised critical environmental issues (e.g., mortality, human health, social activity and trans-boundary transport). Recently, efforts to reduce anthropogenic emissions (e.g., emission regulations and improvements in emission factors and removal efficiencies) have decelerated their growth rates. In fact, Asian SO2 emissions are estimated to have been decreasing since 2007 [Kurokawa et al., 2013]. However, the growth rates of other pollutant emissions (e.g., NOx and PM10) remain high. To understand the life cycle of pollutants (emission, transport, reaction and deposition) and their temporal and spatial variation, an integrated analysis using observations and a numerical model (chemical transport model; CTM) is useful. In this study, we installed a comprehensive observation operator system, which converts model results into observed variables, into the GEOS-Chem CTM. A long-term (2005-2013) full-chemistry simulation over East Asia was performed, and the simulation results were translated into tropospheric NO2 and SO2 columns and vertical profiles of the aerosol extinction coefficient, equivalent to satellite measurements and in-situ lidar network observations. Combining the CTM and observations, and integrating analyses of aerosols over the downwind region and their precursors over the source region, will provide important insights into the temporal and spatial variation of air pollutants over East Asia.

  19. Development of an Integrated Thermocouple for the Accurate Sample Temperature Measurement During High Temperature Environmental Scanning Electron Microscopy (HT-ESEM) Experiments.

    PubMed

    Podor, Renaud; Pailhon, Damien; Ravaux, Johann; Brau, Henri-Pierre

    2015-04-01

    We have developed two integrated thermocouple (TC) crucible systems that allow precise measurement of sample temperature when using a furnace associated with an environmental scanning electron microscope (ESEM). Sample temperatures measured with these systems are precise (±5°C) and reliable. The TC crucible systems allow working with solids and liquids (silicate melts or ionic liquids), independent of the gas composition and pressure. These sample holder designs will allow end users to perform experiments at high temperature in the ESEM chamber with high precision control of the sample temperature. PMID:25898837

  20. Integrated analysis of numerous heterogeneous gene expression profiles for detecting robust disease-specific biomarkers and proposing drug targets

    PubMed Central

    Amar, David; Hait, Tom; Izraeli, Shai; Shamir, Ron

    2015-01-01

    Genome-wide expression profiling has revolutionized biomedical research; vast amounts of expression data from numerous studies of many diseases are now available. Making the best use of this resource in order to better understand disease processes and treatment remains an open challenge. In particular, disease biomarkers detected in case–control studies suffer from low reliability and are only weakly reproducible. Here, we present a systematic integrative analysis methodology to overcome these shortcomings. We assembled and manually curated more than 14 000 expression profiles spanning 48 diseases and 18 expression platforms. We show that when studying a particular disease, judicious utilization of profiles from other diseases and information on disease hierarchy improves classification quality, avoids overoptimistic evaluation of that quality, and enhances disease-specific biomarker discovery. This approach yielded specific biomarkers for 24 of the analyzed diseases. We demonstrate how to combine these biomarkers with large-scale interaction, mutation and drug target data, forming a highly valuable disease summary that suggests novel directions in disease understanding and drug repurposing. Our analysis also estimates the number of samples required to reach a desired level of biomarker stability. This methodology can greatly improve the exploitation of the mountain of expression profiles for better disease analysis. PMID:26261215

  1. Numerical Prediction of the Performance of Integrated Planar Solid-Oxide Fuel Cells, with Comparisons of Results from Several Codes

    SciTech Connect

    G. L. Hawkes; J. E. O'Brien; B. A. Haberman; A. J. Marquis; C. M. Baca; D. Tripepi; P. Costamagna

    2008-06-01

    A numerical study of the thermal and electrochemical performance of a single-tube Integrated Planar Solid Oxide Fuel Cell (IP-SOFC) has been performed. Results obtained from two finite-volume computational fluid dynamics (CFD) codes, FLUENT and SOHAB, and from a two-dimensional in-house developed finite-volume GENOA model are presented and compared. Each tool uses physical and geometric models of differing complexity, and comparisons are made to assess their relative merits. Several single-tube simulations were run using each code over a range of operating conditions. The results include polarization curves and distributions of local current density, composition and temperature. Comparisons of these results are discussed, along with their relationship to the respective embedded phenomenological models for activation losses, fluid flow and mass transport in porous media. In general, agreement between the codes was within 15% for overall parameters such as operating voltage and maximum temperature. The CFD results clearly show the effects of internal structure on the distributions of gas flows and related quantities within the electrochemical cells.

  2. A walk through energy, discrepancy, numerical integration and group invariant measures on measurable subsets of euclidean space

    NASA Astrophysics Data System (ADS)

    Damelin, S.

    2008-07-01

    (A) The celebrated Gaussian quadrature formula on finite intervals tells us that the Gauss nodes are the zeros of the unique solution of an extremal problem. We announce recent results of Damelin, Grabner, Levesley, Ragozin and Sun which derive quadrature estimates on compact, homogeneous manifolds embedded in Euclidean spaces, via energy functionals associated with a class of group-invariant kernels which are generalizations of zonal kernels on the spheres or radial kernels in Euclidean spaces. Our results apply, in particular, to weighted Riesz kernels defined on spheres and certain projective spaces. Our energy functionals describe both uniform and perturbed uniform distribution of quadrature point sets. (B) Given some measurable subset of Euclidean space, one sometimes wants to construct a design, a finite set of points, with a small energy or discrepancy. We announce recent results of Damelin, Hickernell, Ragozin and Zeng which show that these two measures of quality are equivalent when they are defined via positive definite kernels K. The error of approximating the integral of f by the sample average of f over the design has a tight upper bound in terms of the energy or discrepancy of the design. The tightness of this error bound follows by requiring f to lie in the Hilbert space with reproducing kernel K. The theory presented here provides an interpretation of the best design for numerical integration as one with minimum energy, provided that the measure μ defining the integration problem is the equilibrium measure or charge distribution corresponding to the energy kernel K. (C) Let the domain be the orbit of a compact, possibly non-Abelian group acting as measurable transformations, with the kernel K invariant under the group action. We announce recent results of Damelin, Hickernell, Ragozin and Zeng which show that the equilibrium measure is then the normalized measure induced by the Haar measure on the group. This allows us to calculate explicit representations of equilibrium measures. There is an
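    The link in (B) between uniform distribution and minimum energy can be illustrated numerically. The sketch below (my illustration, not the authors' code: the Riesz s = 1 kernel, the point count, and the Fibonacci-spiral construction are all my choices) compares the energy of random points on the unit sphere with a near-uniform configuration, which should have lower energy:

```python
import numpy as np

def riesz_energy(points, s=1.0):
    """Riesz s-energy sum_{i<j} |x_i - x_j|**(-s) of a point configuration."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    i, j = np.triu_indices(len(points), k=1)
    return np.sum(d[i, j] ** (-s))

rng = np.random.default_rng(1)

def random_sphere(n):
    """n independent uniform random points on the unit sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def fibonacci_sphere(n):
    """A standard near-uniform spiral construction on the unit sphere."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z ** 2)
    return np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

n = 200
e_random = riesz_energy(random_sphere(n))
e_uniform = riesz_energy(fibonacci_sphere(n))
# the near-uniform configuration attains the lower energy
```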

  3. 3D Numerical Optimization Modelling of Ivancich landslides (Assisi, Italy) via integration of remote sensing and in situ observations.

    NASA Astrophysics Data System (ADS)

    Castaldo, Raffaele; De Novellis, Vincenzo; Lollino, Piernicola; Manunta, Michele; Tizzani, Pietro

    2015-04-01

    The new challenge that research into slope instability phenomena is set to tackle is the effective integration and joint exploitation of remote sensing measurements with in situ data and observations, in order to study and understand the sub-surface interactions, the triggering causes, and, in general, the long-term behaviour of the investigated landslide phenomenon. In this context, a very promising approach is represented by Finite Element (FE) techniques, which allow us to account for the intrinsic complexity of mass movement phenomena and to benefit effectively from multi-source observations and data. Accordingly, we develop a three-dimensional (3D) numerical model of the Ivancich (Assisi, Central Italy) instability phenomenon. In particular, we apply an inverse FE method based on a Genetic Algorithm optimization procedure, benefiting from advanced DInSAR measurements, retrieved through the full-resolution Small Baseline Subset (SBAS) technique, and an inclinometer array. To this purpose we consider the SAR images acquired on descending orbits by the COSMO-SkyMed (CSK) X-band radar constellation from December 2009 to February 2012. Moreover, the optimization input dataset is completed by an array of eleven inclinometer measurements, from 1999 to 2006, distributed along the unstable mass. The landslide body is formed of debris material sliding on an arenaceous marl substratum, with a thin shear band, detected using borehole and inclinometer data, at depths ranging from 20 to 60 m. Specifically, we consider the active role of this shear band in controlling the landslide evolution process. A large field monitoring dataset of the landslide process, including at-depth piezometric and geological borehole observations, was available. The integration of these datasets allows us to develop a 3D structural geological model of the considered slope. To investigate the dynamic evolution of a landslide, various physical approaches can be considered

  4. Accurate derivative evaluation for any Grad-Shafranov solver

    NASA Astrophysics Data System (ADS)

    Ricketson, L. F.; Cerfon, A. J.; Rachh, M.; Freidberg, J. P.

    2016-01-01

    We present a numerical scheme that can be combined with any fixed boundary finite element based Poisson or Grad-Shafranov solver to compute the first and second partial derivatives of the solution to these equations with the same order of convergence as the solution itself. At the heart of our scheme is an efficient and accurate computation of the Dirichlet to Neumann map through the evaluation of a singular volume integral and the solution to a Fredholm integral equation of the second kind. Our numerical method is particularly useful for magnetic confinement fusion simulations, since it allows the evaluation of quantities such as the magnetic field, the parallel current density and the magnetic curvature with much higher accuracy than has been previously feasible on the affordable coarse grids that are usually implemented.
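    The Fredholm integral equation of the second kind at the heart of this scheme can be solved to high order with the classical Nystrom method: collocate at quadrature nodes and replace the integral by the quadrature sum. A minimal sketch (my illustration on a toy separable kernel, not the authors' solver):

```python
import numpy as np

def nystrom_solve(kernel, f, a, b, n=40):
    """Solve the Fredholm equation of the second kind
        u(x) - integral_a^b K(x, t) u(t) dt = f(x)
    by the Nystrom method on Gauss-Legendre nodes: (I - K W) u = f."""
    t, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * t + 0.5 * (b + a)   # nodes mapped to [a, b]
    w = 0.5 * (b - a) * w                   # weights rescaled accordingly
    A = np.eye(n) - kernel(x[:, None], x[None, :]) * w[None, :]
    return x, np.linalg.solve(A, f(x))

# Toy problem with known solution u(x) = x on [0, 1]:
# u(x) - integral_0^1 (x t) u(t) dt = x - x/3, so f(x) = (2/3) x.
x, u = nystrom_solve(lambda xi, ti: xi * ti,
                     lambda xi: (2.0 / 3.0) * xi, 0.0, 1.0)
# u agrees with the exact solution x at the nodes
```

Since Gauss-Legendre quadrature is exact for the polynomial integrand here, the nodal solution matches the exact one to near machine precision.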

  5. Numerical modeling of the 3D dynamics of ultrasound contrast agent microbubbles using the boundary integral method

    NASA Astrophysics Data System (ADS)

    Wang, Qianxi; Manmi, Kawa; Calvisi, Michael L.

    2015-02-01

    Ultrasound contrast agents (UCAs) are microbubbles stabilized with a shell typically of lipid, polymer, or protein and are emerging as a unique tool for noninvasive therapies ranging from gene delivery to tumor ablation. While various models have been developed to describe the spherical oscillations of contrast agents, the treatment of nonspherical behavior has received less attention. However, the nonspherical dynamics of contrast agents are thought to play an important role in therapeutic applications, for example, enhancing the uptake of therapeutic agents across cell membranes and tissue interfaces, and causing tissue ablation. In this paper, a model for nonspherical contrast agent dynamics based on the boundary integral method is described. The effects of the encapsulating shell are approximated by adapting Hoff's model for thin-shell, spherical contrast agents. A high-quality mesh of the bubble surface is maintained by implementing a hybrid approach of the Lagrangian method and elastic mesh technique. The numerical model agrees well with a modified Rayleigh-Plesset equation for encapsulated spherical bubbles. Numerical analyses of the dynamics of UCAs in an infinite liquid and near a rigid wall are performed in parameter regimes of clinical relevance. The oscillation amplitude and period decrease significantly due to the coating. A bubble jet forms when the amplitude of ultrasound is sufficiently large, as occurs for bubbles without a coating; however, the threshold amplitude required to incite jetting increases due to the coating. When a UCA is near a rigid boundary subject to acoustic forcing, the jet is directed towards the wall if the acoustic wave propagates perpendicular to the boundary. When the acoustic wave propagates parallel to the rigid boundary, the jet direction has components both along the wave direction and towards the boundary that depend mainly on the dimensionless standoff distance of the bubble from the boundary. In all cases, the jet

  6. The impact of watershed management on coastal morphology: A case study using an integrated approach and numerical modeling

    NASA Astrophysics Data System (ADS)

    Samaras, Achilleas G.; Koutitas, Christopher G.

    2014-04-01

    Coastal morphology evolves as the combined result of both natural and human-induced factors that cover a wide range of spatial and temporal scales of effect. Areas in the vicinity of natural stream mouths are of special interest, as the direct connection with the upstream watershed extends the search for drivers of morphological evolution from the coastal area to the inland as well. Although the impact of changes in watersheds on the coastal sediment budget is well established, studies that treat the two fields concurrently and quantify their connection are scarce. In the present work, the impact of land-use changes in a watershed on coastal erosion is studied for a selected site in northern Greece. Applications are based on an integrated approach that quantifies the impact of watershed management on coastal morphology through numerical modeling. The watershed model SWAT and a shoreline evolution model developed by the authors (PELNCON-M) are used, with the latter evaluating the performance of the three longshore sediment transport rate formulae included in the model formulation. Results document the impact of crop abandonment on coastal erosion (a decrease in agricultural land from 23.3% to 5.1% is accompanied by a retreat of ~35 m in the vicinity of the stream mouth) and show the effect of sediment transport formula selection on the evolution of coastal morphology. The analysis highlights the relative importance of the parameters involved in the dynamics of watershed-coast systems and, through the detailed description of a case study, is intended to provide useful insights for researchers and policy-makers involved in their study.
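    Longshore sediment transport rate formulae of the kind compared in such shoreline models can be sketched with the classic CERC formula (the abstract does not name the three formulae used, so this is a generic illustration; all coefficient values are textbook defaults, not the study's calibration):

```python
import numpy as np

def cerc_transport(hb, alpha_b, K=0.39, rho=1025.0, rho_s=2650.0,
                   gamma=0.78, p=0.4, g=9.81):
    """Longshore sediment transport rate Q (m^3/s) from the CERC formula:

        Q = K * rho * sqrt(g / gamma) / (16 (rho_s - rho) (1 - p))
            * Hb**(5/2) * sin(2 * alpha_b)

    hb: breaking wave height (m); alpha_b: breaking wave angle (rad);
    gamma: breaker index; p: sediment porosity."""
    coeff = K * rho * np.sqrt(g / gamma) / (16.0 * (rho_s - rho) * (1.0 - p))
    return coeff * hb ** 2.5 * np.sin(2.0 * alpha_b)
```

As expected from the sin(2 alpha) dependence, transport vanishes for waves breaking parallel to the shore and grows steeply (as Hb^2.5) with breaking wave height.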

  7. Experimental validation of numerical study on thermoelectric-based heating in an integrated centrifugal microfluidic platform for polymerase chain reaction amplification.

    PubMed

    Amasia, Mary; Kang, Seok-Won; Banerjee, Debjyoti; Madou, Marc

    2013-01-01

    A comprehensive study involving numerical analysis and experimental validation of temperature transients within a microchamber was performed for thermocycling operation in an integrated centrifugal microfluidic platform for polymerase chain reaction (PCR) amplification. Controlled heating and cooling of biological samples are essential processes in many sample preparation and detection steps for micro-total analysis systems. Specifically, the PCR process relies on highly controllable and uniform heating of nucleic acid samples for successful and efficient amplification. In these miniaturized systems, the heating process is often performed more rapidly, making the temperature control more difficult, and adding complexity to the integrated hardware system. To gain further insight into the complex temperature profiles within the PCR microchamber, numerical simulations using computational fluid dynamics and computational heat transfer were performed. The designed integrated centrifugal microfluidics platform utilizes thermoelectrics for ice-valving and thermocycling for PCR amplification. Embedded micro-thermocouples were used to record the static and dynamic thermal responses in the experiments. The data collected was subsequently used for computational validation of the numerical predictions for the system response during thermocycling, and these simulations were found to be in agreement with the experimental data to within ∼97%. When thermal contact resistance values were incorporated in the simulations, the numerical predictions were found to be in agreement with the experimental data to within ∼99.9%. This in-depth numerical modeling and experimental validation of a complex single-sided heating platform provide insights into hardware and system design for multi-layered polymer microfluidic systems. In addition, the biological capability along with the practical feasibility of the integrated system is demonstrated by successfully performing PCR amplification of

  8. Experimental validation of numerical study on thermoelectric-based heating in an integrated centrifugal microfluidic platform for polymerase chain reaction amplification

    PubMed Central

    Amasia, Mary; Kang, Seok-Won; Banerjee, Debjyoti; Madou, Marc

    2013-01-01

    A comprehensive study involving numerical analysis and experimental validation of temperature transients within a microchamber was performed for thermocycling operation in an integrated centrifugal microfluidic platform for polymerase chain reaction (PCR) amplification. Controlled heating and cooling of biological samples are essential processes in many sample preparation and detection steps for micro-total analysis systems. Specifically, the PCR process relies on highly controllable and uniform heating of nucleic acid samples for successful and efficient amplification. In these miniaturized systems, the heating process is often performed more rapidly, making the temperature control more difficult, and adding complexity to the integrated hardware system. To gain further insight into the complex temperature profiles within the PCR microchamber, numerical simulations using computational fluid dynamics and computational heat transfer were performed. The designed integrated centrifugal microfluidics platform utilizes thermoelectrics for ice-valving and thermocycling for PCR amplification. Embedded micro-thermocouples were used to record the static and dynamic thermal responses in the experiments. The data collected was subsequently used for computational validation of the numerical predictions for the system response during thermocycling, and these simulations were found to be in agreement with the experimental data to within ∼97%. When thermal contact resistance values were incorporated in the simulations, the numerical predictions were found to be in agreement with the experimental data to within ∼99.9%. This in-depth numerical modeling and experimental validation of a complex single-sided heating platform provide insights into hardware and system design for multi-layered polymer microfluidic systems. In addition, the biological capability along with the practical feasibility of the integrated system is demonstrated by successfully performing PCR amplification of

  9. Study on the properties of the Integrated Precipitable Water (IPW) maps derived by GPS, SAR interferometry and numerical forecasting models

    NASA Astrophysics Data System (ADS)

    Mateus, Pedro; Nico, Giovanni; Tomé, Ricardo; Catalão, João; Miranda, Pedro

    2010-05-01

    The knowledge of the spatial distribution of relative changes in atmospheric Integrated Precipitable Water (IPW) density is important for climate studies and numerical weather forecasting. An increase (or decrease) of the IPW density affects the phase of electromagnetic waves. For this reason, this quantity can be measured by techniques such as GPS and space-borne SAR interferometry (InSAR). The aim of this work is to study the isotropic properties of the IPW maps obtained by GPS and InSAR measurements and derived from a numerical weather forecasting model. The existence of a power law in their phase spectrum is verified. The relationship between the interferometric phase delay and the topographic height of the observed area is also investigated. The Lisbon region, Portugal, was chosen as the study area. This region is monitored by a network of permanent GPS stations covering an area of about squared kilometers. The network consists of 12 GPS stations, of which 4 belong to the Instituto Geográfico Português (IGP) and 8 to the Instituto Geográfico do Exercito (IGEOE). All stations were installed between 1997 and the beginning of 2009. The GAMIT package was used to process the GPS data and to estimate the total zenith delay with a temporal sampling of 15 minutes. A set of 25 SAR interferograms with a 35-day temporal baseline was processed using ASAR-ENVISAT data acquired over the Lisbon region during the periods 2003-2005 and 2008-2009. These interferograms give an estimate of the variation of the global atmospheric delay. Terrain deformations related to known geological phenomena in the Lisbon area are negligible at this time scale of 35 days. Furthermore, two interferometric SAR images acquired by ERS-1/2 over the Lisbon region on 20/07/1995 and 21/07/1995, respectively, and thus with a temporal baseline of just 1 day, were also processed. The Weather Research & Forecasting Model (WRF) was used to generate the three-dimensional fields of temperature

  10. Integrated numerical modeling of a landslide early warning system in a context of adaptation to future climatic pressures

    NASA Astrophysics Data System (ADS)

    Khabarov, Nikolay; Huggel, Christian; Obersteiner, Michael; Ramírez, Juan Manuel

    2010-05-01

    Mountain regions are typically characterized by rugged terrain, which is susceptible to different types of landslides during high-intensity precipitation. Landslides account for billions of dollars of damage and many casualties, and are expected to increase in frequency in the future due to a projected increase in precipitation intensity. Early warning systems (EWS) are considered a primary tool for disaster risk reduction and climate change adaptation to extreme climatic events and hydro-meteorological hazards, including landslides. An EWS for hazards such as landslides consists of different components, including environmental monitoring instruments (e.g. rainfall or flow sensors), physical or empirical process models to support decision-making (warnings, evacuation), data and voice communication, organization- and logistics-related procedures, and population response. Considering this broad range, EWS are highly complex systems, and it is therefore difficult to understand the effect of the different components and of changing conditions on the overall performance, ultimately expressed as human lives saved or structural damage reduced. In this contribution we present a further development of our approach to assess a landslide EWS in an integral way, both at the system and component level. We utilize a numerical model using 6-hour rainfall data as basic input. A threshold function based on a rainfall-intensity/duration relation was applied as a decision criterion for evacuation. Damage to infrastructure and human lives was defined as a linear function of landslide magnitude, with the magnitude modelled as a power function of landslide frequency. Correct evacuation was assessed with a 'true' reference rainfall dataset versus a dataset of artificially reduced quality imitating the observation system component. Performance of the EWS using these rainfall datasets was expressed in monetary terms (i.e. damage related to false and correct evacuation). We
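    A rainfall-intensity/duration threshold of the kind used here as an evacuation criterion can be sketched as follows (the coefficients are generic Caine-type values chosen for illustration, not the calibrated threshold of the study; the 6-hour accumulation window follows the abstract):

```python
def evacuation_warning(rainfall_6h, a=14.82, b=0.39):
    """Decide on evacuation from a rainfall intensity-duration threshold.

    rainfall_6h: consecutive 6-hour rainfall totals (mm), most recent last.
    A warning is issued if the mean intensity I (mm/h) over any run of
    the most recent intervals exceeds the power law I = a * D**(-b),
    where D is the accumulation duration in hours."""
    for k in range(1, len(rainfall_6h) + 1):
        duration_h = 6.0 * k
        intensity = sum(rainfall_6h[-k:]) / duration_h
        if intensity > a * duration_h ** (-b):
            return True
    return False
```

With these illustrative coefficients, three light 6-hour totals (a few mm each) stay below the threshold, while storm totals of tens of mm per interval trigger a warning; degrading the rainfall input, as in the paper's reduced-quality dataset, would flip some of these decisions into false or missed alarms.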

  11. Theoretical, experimental and numerical methods for investigating the characteristics of laser radiation scattered in the integrated-optical waveguide with three-dimensional irregularities

    SciTech Connect

    Egorov, Alexander A

    2011-07-31

    We consider theoretical, experimental and numerical methods which make it possible to analyse the key characteristics of laser radiation scattered in the integrated-optical waveguide with three-dimensional irregularities. The main aspects of the three-dimensional vector electrodynamic problem of waveguide scattering are studied. The waveguide light scattering method is presented and its main advantages over the methods of single scattering of laser radiation are discussed. The experimental setup and results of measurements are described. Theoretical and experimental results confirming the validity of the vector theory of three-dimensional waveguide scattering of laser radiation developed by the author are compared for the first time. (fiber and integrated optics)

  12. Numerical Evaluation of the "Dual-Kernel Counter-flow" Matric Convolution Integral that Arises in Discrete/Continuous (D/C) Control Theory

    NASA Technical Reports Server (NTRS)

    Nixon, Douglas D.

    2009-01-01

    Discrete/Continuous (D/C) control theory is a new generalized theory of discrete-time control that expands the concept of conventional (exact) discrete-time control to create a framework for the design and implementation of discrete-time control systems that include a continuous-time command function generator, so that actuator commands need not be constant between control decisions but can be more generally defined and implemented as functions that vary with time across the sample period. Because the plant/control system construct contains two linear subsystems arranged in tandem, a novel dual-kernel counter-flow convolution integral appears in the formulation. As part of the D/C system design and implementation process, numerical evaluation of that integral over the sample period is required. Three fundamentally different evaluation methods and associated algorithms are derived for the constant-coefficient case. Numerical results are matched against three available examples that have closed-form solutions.
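    The basic numerical task, evaluating a convolution of two kernels over one sample period, can be sketched with simple trapezoidal quadrature. This is my illustration, not one of the paper's three methods, and the kernels are reduced to scalar exponentials standing in for the 1-D case of the matrix exponentials of the two tandem subsystems:

```python
import numpy as np

def dual_kernel_integral(k1, k2, T, n=1000):
    """Approximate I(T) = integral_0^T k1(T - tau) * k2(tau) d tau
    over one sample period [0, T] with the trapezoidal rule."""
    tau = np.linspace(0.0, T, n + 1)
    return np.trapz(k1(T - tau) * k2(tau), tau)

# Scalar exponential kernels, the 1-D analogue of matrix exponentials
a, b, T = -2.0, -3.0, 0.1
numeric = dual_kernel_integral(lambda t: np.exp(a * t),
                               lambda t: np.exp(b * t), T)
exact = (np.exp(a * T) - np.exp(b * T)) / (a - b)  # closed form for this case
```

The closed-form value plays the same role as the paper's closed-form examples: a check that the quadrature converges to the true integral.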

  13. Semi-Lagrangian integration of a gridpoint shallow water model on the sphere. [for numerical weather prediction

    NASA Technical Reports Server (NTRS)

    Mcdonald, A.; Bates, J. R.

    1989-01-01

    A stable, semi-Lagrangian, semi-implicit, two-time-level, gridpoint integration scheme for the shallow water equations on the sphere is presented. A rotated spherical coordinate system is used to integrate the equations of motion at each gridpoint poleward of a certain latitude, thus overcoming problems associated with the polar singularity. The results of medium-term integrations of large-scale test patterns using a long time step are presented.
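    The "long time step" advantage of semi-Lagrangian advection can be shown on a much simpler problem than the shallow water equations: 1-D linear advection on a periodic grid (my toy illustration, with linear interpolation at the departure points, not the paper's scheme):

```python
import numpy as np

def semi_lagrangian_step(u, c, dx, dt):
    """One two-time-level semi-Lagrangian step for u_t + c u_x = 0 on a
    periodic grid: trace each arrival point back a distance c*dt and
    interpolate the old field linearly at the departure point.
    Stable for any dt, unlike explicit Eulerian schemes."""
    n = u.size
    x = np.arange(n) * dx
    xd = (x - c * dt) % (n * dx)          # departure points
    j = np.floor(xd / dx).astype(int)     # left grid neighbour
    w = xd / dx - j                       # linear interpolation weight
    return (1 - w) * u[j % n] + w * u[(j + 1) % n]

# Advect a Gaussian once around the domain at Courant number 5, a time
# step five times larger than the explicit Eulerian stability limit.
n, c = 200, 1.0
dx = 1.0 / n
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.5) ** 2)
u = u0.copy()
dt = 5 * dx / c
for _ in range(40):                       # 40 * c * dt = one full period
    u = semi_lagrangian_step(u, c, dx, dt)
# after one full revolution u returns to the initial profile
```

Here the departure points land exactly on grid points, so the profile is recovered without interpolation damping; for non-integer Courant numbers the linear interpolation introduces mild numerical diffusion.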

  14. Kinlsq: a program for fitting kinetics data with numerically integrated rate equations and its application to the analysis of slow, tight-binding inhibition data.

    PubMed

    Gutheil, W G; Kettner, C A; Bachovchin, W W

    1994-11-15

    Kinlsq, a Matlab-based computer program for the least-squares fitting of parameters to kinetics data described by numerically integrated rate equations, is described, and three applications to the analysis of enzyme kinetics data are given. The first application was to the analysis of a simple bimolecular enzyme plus inhibitor binding curve. The kinlsq fit to these data was essentially identical to that obtained with the corresponding analytically integrated rate equation, validating kinlsq. The second application was to the fit of a numerically integrated Michaelis-Menten model to the progress curve for dipeptidyl peptidase IV-catalyzed hydrolysis of Ala-Pro-p-nitroanilide as a demonstration of the analysis of steady-state enzyme kinetics data. The results obtained with kinlsq were compared with the results obtained by fitting this time course with the integrated Michaelis-Menten equation, and with the results obtained by fitting the (S,dP/dt) transform of the data with the Michaelis-Menten equation. The third application was to the analysis of the inhibition of chymotrypsin by the slow, tight-binding inhibitor MeOSuc-Ala-Ala-Pro-boroPhe, data not readily amenable to other methods of analysis. These applications demonstrate how kinlsq can be used to fit rate constants, equilibrium constants, steady-state constants, and the stoichiometric relationships between components. PMID:7695087
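    Kinlsq itself is a Matlab program; as a rough modern analogue of the same idea (fitting a rate constant to data by least squares over a numerically integrated rate equation), the following Python sketch uses SciPy. All names, rate constants, and concentrations are hypothetical, and synthetic data stand in for measurements:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Simple bimolecular binding E + I -> EI with rate constant k_on
# (hypothetical values, analogous to the paper's first application).
def rhs(t, y, k_on):
    e, i = y
    rate = k_on * e * i
    return [-rate, -rate]

def simulate(k_on, t_eval, y0):
    """Numerically integrate the rate equations and return free [E](t)."""
    sol = solve_ivp(rhs, (0.0, t_eval[-1]), y0, args=(k_on,),
                    t_eval=t_eval, rtol=1e-8, atol=1e-10)
    return sol.y[0]

# Synthetic "data" generated from a known rate constant,
# then recovered by nonlinear least squares.
t = np.linspace(0.0, 10.0, 50)
y0 = [1.0, 1.5]
data = simulate(0.7, t, y0)

fit = least_squares(lambda p: simulate(p[0], t, y0) - data,
                    x0=[0.2], bounds=(0.0, np.inf))
print(round(float(fit.x[0]), 3))  # recovered k_on
```

    With noise-free synthetic data the fit recovers the generating constant; real progress-curve data would of course return a value with finite uncertainty.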

  15. GO2OGS: a versatile workflow to integrate complex geological information with fault data into numerical simulation models

    NASA Astrophysics Data System (ADS)

    Fischer, T.; Walther, M.; Sattler, S.; Naumov, D.; Kolditz, O.

    2015-08-01

    We offer a versatile workflow to convert geological models built with the software Paradigm™ GOCAD© into the open-source VTU format for use in numerical simulation models. Tackling relevant scientific questions or engineering tasks often involves multidisciplinary approaches. Conversion workflows are needed as a way of communication between the diverse tools of the various disciplines. Our approach offers an open-source, platform-independent, robust, and comprehensible method that is potentially useful for a multitude of similar environmental studies. With two application examples in the Thuringian Syncline, we show how a heterogeneous geological GOCAD model including multiple layers and faults can be used for numerical groundwater flow modelling. The presented workflow offers the chance to incorporate increasingly detailed data, utilizing growing availability of computational power to simulate numerical models.

  16. Numerical Simulation Of Buckling In Waffle Plates

    NASA Technical Reports Server (NTRS)

    Yin, Dah N.; Tran, Vu M.

    1990-01-01

    Accurate results obtained when fillet radii considered. Two reports describe numerical and experimental study of application of PASCO and WAFFLE computer programs to analysis of buckling in integrally machined, biaxially stiffened panel. PASCO (Panel Analysis and Sizing Code) is finite-element stress-and-strain code written for analysis and sizing of uniaxially stiffened panels. WAFFLE program provides comprehensive stress analysis of waffle panel, used to determine bending moments at interfaces.

  17. Numerical Simulations of Ion Cloud Dynamics

    NASA Astrophysics Data System (ADS)

    Sillitoe, Nicolas; Hilico, Laurent

    We explain how to perform accurate numerical simulations of ion cloud dynamics by discussing the relevant orders of magnitude of the characteristic times and frequencies involved in the problem and the computer requirement with respect to the ion cloud size. We then discuss integration algorithms and Coulomb force parallelization. We finally explain how to take into account collisions, cooling laser interaction and chemical reactions in a Monte Carlo approach and discuss how to use random number generators to that end.

  18. Numerically induced bursting in a set of coupled neuronal oscillators

    NASA Astrophysics Data System (ADS)

    Medetov, Bekbolat; Weiß, R. Gregor; Zhanabaev, Zeinulla Zh.; Zaks, Michael A.

    2015-03-01

    We present our numerical observations on dynamics in the system of two linearly coupled FitzHugh-Nagumo oscillators close to the destabilization of the state of rest. For the considered parameter values the system, if integrated sufficiently accurately, converges to small-scale periodic oscillations. However, minor numerical inaccuracies, which occur already at the default precision of the standard Runge-Kutta solver, lead to a breakup of periodicity and an onset of large-scale aperiodic bursting.
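    The paper's central point, that integrator inaccuracy can qualitatively change the observed dynamics, can be illustrated on a much simpler system than coupled FitzHugh-Nagumo units. The sketch below is an analogy, not a reproduction of the paper's setup: explicit Euler injects energy into a harmonic oscillator at every step, turning a periodic orbit into an artificial outward spiral, while RK4 at the same step size stays close to the true dynamics.

```python
# Harmonic oscillator x'' = -x integrated with explicit Euler vs. RK4.
# Euler multiplies the energy by (1 + h^2) per step, a purely numerical
# artifact; RK4 keeps it nearly constant at the same step size.
def euler_step(x, v, h):
    return x + h * v, v - h * x

def rk4_step(x, v, h):
    f = lambda s: (s[1], -s[0])  # s = (x, v)
    k1 = f((x, v))
    k2 = f((x + h / 2 * k1[0], v + h / 2 * k1[1]))
    k3 = f((x + h / 2 * k2[0], v + h / 2 * k2[1]))
    k4 = f((x + h * k3[0], v + h * k3[1]))
    return (x + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            v + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def energy(x, v):
    return 0.5 * (x * x + v * v)

h, n = 0.1, 1000
xe, ve = 1.0, 0.0   # Euler trajectory
xr, vr = 1.0, 0.0   # RK4 trajectory
for _ in range(n):
    xe, ve = euler_step(xe, ve, h)
    xr, vr = rk4_step(xr, vr, h)

print(energy(xe, ve) > 10 * 0.5)            # Euler energy has blown up
print(abs(energy(xr, vr) - 0.5) < 1e-4)     # RK4 energy nearly conserved
```

    The initial energy is 0.5; after 1000 Euler steps it has grown by roughly (1 + h^2)^1000, a qualitative change produced entirely by the integrator.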

  19. Integral quantification of contaminant mass flow rates in a contaminated aquifer: conditioning of the numerical inversion of concentration-time series.

    PubMed

    Herold, Maria; Ptak, Thomas; Bayer-Raich, Marti; Wendel, Thomas; Grathwohl, Peter

    2009-04-15

    A series of integral pumping tests (IPTs) has been conducted at a former gasworks site to quantify the contaminant mass flow rates and average concentration in groundwater along three control planes across the groundwater flow direction. The measured concentration-time series were analysed numerically with the help of the inversion code CSTREAM and a flow and transport model representing the highly heterogeneous aquifer. Since the control planes cover the entire downstream width of the potentially contaminated area, they allow conclusions to be drawn about the current location and spread of the contaminant plume. Previous evaluations of integral pumping tests could calculate three scenarios concerning the spread of the plume around the IPT well: (i) the plume is located to the right of the pumping well, (ii) to the left, or (iii) is distributed symmetrically around it. To create a more realistic picture of the plume position, a series of direct-push monitoring wells were installed along one control plane. The concentrations found in these wells were included in the numerical analysis to condition the numerical inversion results, and allowed the identification of a more pronounced plume centre and fringe, which supports the development of optimised remediation strategies. PMID:19167131

  20. Integral quantification of contaminant mass flow rates in a contaminated aquifer: Conditioning of the numerical inversion of concentration-time series

    NASA Astrophysics Data System (ADS)

    Herold, Maria; Ptak, Thomas; Bayer-Raich, Marti; Wendel, Thomas; Grathwohl, Peter

    2009-04-01

    A series of integral pumping tests (IPTs) has been conducted at a former gasworks site to quantify the contaminant mass flow rates and average concentration in groundwater along three control planes across the groundwater flow direction. The measured concentration-time series were analysed numerically with the help of the inversion code CSTREAM and a flow and transport model representing the highly heterogeneous aquifer. Since the control planes cover the entire downstream width of the potentially contaminated area, they allow conclusions to be drawn about the current location and spread of the contaminant plume. Previous evaluations of integral pumping tests could calculate three scenarios concerning the spread of the plume around the IPT well: (i) the plume is located to the right of the pumping well, (ii) to the left, or (iii) is distributed symmetrically around it. To create a more realistic picture of the plume position, a series of direct-push monitoring wells were installed along one control plane. The concentrations found in these wells were included in the numerical analysis to condition the numerical inversion results, and allowed the identification of a more pronounced plume centre and fringe, which supports the development of optimised remediation strategies.

  1. An integrated approach to flood hazard assessment on alluvial fans using numerical modeling, field mapping, and remote sensing

    USGS Publications Warehouse

    Pelletier, J.D.; Mayer, L.; Pearthree, P.A.; House, P.K.; Demsey, K.A.; Klawon, J.K.; Vincent, K.R.

    2005-01-01

    Millions of people in the western United States live near the dynamic, distributary channel networks of alluvial fans where flood behavior is complex and poorly constrained. Here we test a new comprehensive approach to alluvial-fan flood hazard assessment that uses four complementary methods: two-dimensional raster-based hydraulic modeling, satellite-image change detection, field-based mapping of recent flood inundation, and surficial geologic mapping. Each of these methods provides spatial detail lacking in the standard method and each provides critical information for a comprehensive assessment. Our numerical model simultaneously solves the continuity equation and Manning's equation (Chow, 1959) using an implicit numerical method. It provides a robust numerical tool for predicting flood flows using the large, high-resolution Digital Elevation Models (DEMs) necessary to resolve the numerous small channels on the typical alluvial fan. Inundation extents and flow depths of historic floods can be reconstructed with the numerical model and validated against field- and satellite-based flood maps. A probabilistic flood hazard map can also be constructed by modeling multiple flood events with a range of specified discharges. This map can be used in conjunction with a surficial geologic map to further refine floodplain delineation on fans. To test the accuracy of the numerical model, we compared model predictions of flood inundation and flow depths against field- and satellite-based flood maps for two recent extreme events on the southern Tortolita and Harquahala piedmonts in Arizona. Model predictions match the field- and satellite-based maps closely. Probabilistic flood hazard maps based on the 10 yr, 100 yr, and maximum floods were also constructed for the study areas using stream gage records and paleoflood deposits. The resulting maps predict spatially complex flood hazards that strongly reflect small-scale topography and are consistent with surficial geology. In

  2. Data, models, and views: towards integration of diverse numerical model components and data sets for scientific and public dissemination

    NASA Astrophysics Data System (ADS)

    Hofmeister, Richard; Lemmen, Carsten; Nasermoaddeli, Hassan; Klingbeil, Knut; Wirtz, Kai

    2015-04-01

    Data and models for describing coastal systems span a diversity of disciplines, communities, ecosystems, regions and techniques. Previous attempts at unifying data exchange, coupling interfaces, or metadata information have not been successful. We introduce the new Modular System for Shelves and Coasts (MOSSCO, http://www.mossco.de), a novel coupling framework that enables the integration of a diverse array of models and data from different disciplines relating to coastal research. In the MOSSCO concept, the integrating framework imposes very few restrictions on contributed data or models; in fact, there is no distinction made between data and models. The few requirements are: (1) coupleability in principle, i.e. access to I/O and timing information in submodels, which has recently been referred to as the Basic Model Interface (BMI); (2) open-source/open-data access and licensing; and (3) communication of metadata, such as spatiotemporal information, naming conventions, and physical units. These requirements suffice to integrate different models and data sets into the MOSSCO infrastructure and subsequently build a modular integrated modeling tool that can span a diversity of processes and domains. We demonstrate how diverse coastal system constituents were integrated into this modular framework and how we deal with the diverging development of constituent data sets and models at external institutions. Finally, we show results from simulations with the fully coupled system using OGC WebServices in the WiMo geoportal (http://kofserver3.hzg.de/wimo), from where stakeholders can view the simulation results for further dissemination.

  3. Grading More Accurately

    ERIC Educational Resources Information Center

    Rom, Mark Carl

    2011-01-01

    Grades matter. College grading systems, however, are often ad hoc and prone to mistakes. This essay focuses on one factor that contributes to high-quality grading systems: grading accuracy (or "efficiency"). I proceed in several steps. First, I discuss the elements of "efficient" (i.e., accurate) grading. Next, I present analytical results…

  4. A control volume method on an icosahedral grid for numerical integration of the shallow-water equations on the sphere

    SciTech Connect

    Chern, I-Liang

    1994-08-01

    Two versions of a control volume method on a symmetrized icosahedral grid are proposed for solving the shallow-water equations on a sphere. One version expresses the equations in the 3-D Cartesian coordinate system, while the other expresses them in the northern/southern polar stereographic coordinate systems. In both versions the pole problem is avoided, owing to these coordinate expressions and the quasi-homogeneity of the icosahedral grid. Truncation errors and convergence of the numerical gradient and divergence operators associated with this method are studied. A convergence test for a steady zonal flow is demonstrated. Several simulations of Rossby-Haurwitz waves with various wavenumbers are also performed.

  5. Identifying beach sand sources and pathways in the San Francisco Bay Coastal System through the integration of bed characteristics, geochemical tracers, current measurements, and numerical modeling

    NASA Astrophysics Data System (ADS)

    Barnard, P.; Foxgrover, A. C.; Elias, E.; Erikson, L. H.; Hein, J. R.; McGann, M. L.; Mizell, K.; Rosenbauer, R. J.; Swarzenski, P. W.; Takesue, R. K.; Wong, F. L.; Woodrow, D. L.

    2012-12-01

    A unique, multi-faceted provenance study was performed to definitively establish the primary sources, sinks, and transport pathways of beach-sized sand in the San Francisco Bay Coastal System. This integrative program is based on comprehensive surficial sediment sampling of the San Francisco Bay Coastal System, including the seabed, bay floor, area beaches, adjacent rock units, and major drainages. Analyses of sample morphometrics and biological composition (e.g., foraminifera) were then integrated with a suite of tracers including 87Sr/86Sr and 143Nd/144Nd isotopes, rare earth elements, semi-quantitative X-ray diffraction mineralogy, and heavy minerals, and with process-based numerical modeling, in situ current measurements, and bedform asymmetry, to robustly determine the provenance of beach-sized sand in the region. Cross-validating geochemical analyses, numerical modeling, physical process measurements, and proxy-based techniques (e.g., bedform asymmetry, grain-size morphometrics) proved to be an effective approach for confidently defining sources, pathways, and sinks of sand in complex coastal-estuarine systems. The consensus results highlight the regional impact of a sharp reduction in the primary sediment source, the Sierras, to the San Francisco Bay Coastal System over the last century in driving erosion of the bay floor, ebb-tidal delta, and the outer coast south of the Golden Gate. [Figure: A) Calculated transport directions based on the integration of the provenance techniques. B) Number of techniques applied for each grid cell to determine the final transport directions.]

  6. Path-Integral Renormalization Group Method for Numerical Study on Ground States of Strongly Correlated Electronic Systems

    NASA Astrophysics Data System (ADS)

    Kashima, Tsuyoshi; Imada, Masatoshi

    2001-08-01

    A new efficient numerical algorithm for interacting fermion systems is proposed and examined in detail. The ground state is expressed approximately by a linear combination of numerically chosen basis states in a truncated Hilbert space. Two procedures lead to a better approximation. The first is a numerical renormalization, which optimizes the chosen basis and projects onto the ground state within the fixed dimension, L, of the Hilbert space. The second is an increase of the dimension of the truncated Hilbert space, which enables the linear combination to converge to a better approximation. The extrapolation L→∞ after the convergence removes the approximation error systematically. This algorithm does not suffer from the negative sign problem and can be applied to systems in any spatial dimension and arbitrary lattice structure. The efficiency is tested and the implementation explained for two-dimensional Hubbard models, where Slater determinants are employed as the chosen basis. Our results with fewer than 400 chosen basis states indicate good accuracy, within the error bars of the best available results, such as those of quantum Monte Carlo, for the energy and other physical quantities.

  7. Numerical integration of gravitational field for general three-dimensional objects and its application to gravitational study of grand design spiral arm structure

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2016-08-01

    We present a method to integrate the gravitational field for general three-dimensional objects. By adopting the spherical polar coordinates centered at the evaluation point as the integration variables, we numerically compute the volume integral representation of the gravitational potential and of the acceleration vector. The variable transformation completely removes the algebraic singularities of the original integrals. The comparison with exact solutions reveals around 15 digits of accuracy of the new method. Meanwhile, 6-digit accuracy of the integrated gravitational field is realized by around 10^6 evaluations of the integrand per evaluation point, which costs at most a few seconds on a PC with an Intel Core i7-4600U CPU running at a 2.10 GHz clock. By using the new method, we show the gravitational field of a grand design spiral arm structure as an example. The computed gravitational field shows not only spiral shaped details but also a global feature composed of a thick oblate spheroid and a thin disc. The developed method is directly applicable to the electromagnetic field computation by means of Coulomb's law, the Biot-Savart law, and their retarded extensions. Sample FORTRAN 90 programs and test results are electronically available.
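    A minimal one-dimensional illustration of the coordinate trick described above (a toy check, not Fukushima's adaptive scheme): in spherical coordinates centred on the evaluation point, the r² Jacobian cancels the 1/r singularity of the Newtonian kernel. For the potential at the centre of a homogeneous sphere the angular integral is trivial and the radial integrand reduces to the smooth function ρr, which even a midpoint rule handles without difficulty. Units with G = 1 and all parameter values below are chosen for illustration only.

```python
import math

# Potential at the centre of a homogeneous sphere of radius R and density rho:
#   Phi(0) = -G * int rho / r dV = -G * rho * 4*pi * int_0^R r dr
# The 1/r singularity has been cancelled by the r^2 Jacobian, leaving
# a smooth radial integrand, so simple quadrature suffices.
G = 1.0          # gravitational constant (illustrative units)
rho = 1.0        # uniform density
R = 2.0          # sphere radius

n = 100000
dr = R / n
radial = sum((i + 0.5) * dr * dr for i in range(n))  # midpoint rule for int_0^R r dr
phi = -G * rho * 4.0 * math.pi * radial

M = 4.0 / 3.0 * math.pi * R**3 * rho
exact = -1.5 * G * M / R                             # known result -3GM/(2R)
print(abs(phi - exact) < 1e-6)
```

    The midpoint rule is exact for the linear integrand r, so the numerical value matches -3GM/(2R) to rounding error.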

  8. Numerical computation of complex multi-body Navier-Stokes flows with applications for the integrated Space Shuttle launch vehicle

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    1993-01-01

    An enhanced grid system for the Space Shuttle Orbiter was built by integrating CAD definitions from several sources and then generating the surface and volume grids. The new grid system contains geometric components not modeled previously, plus significant enhancements of geometry that had been modeled in the old grid system. The new orbiter grids were then integrated with new grids for the rest of the launch vehicle. Enhancements were made to the hyperbolic grid generator HYPGEN, and new tools were developed for grid projection, manipulation, and modification; Cartesian box-grid and far-field grid generation; and post-processing of flow-solver data.

  9. Numerical Asymptotic Solutions Of Differential Equations

    NASA Technical Reports Server (NTRS)

    Thurston, Gaylen A.

    1992-01-01

    Numerical algorithms derived and compared with classical analytical methods. In method, expansions replaced with integrals evaluated numerically. Resulting numerical solutions retain linear independence, main advantage of asymptotic solutions.

  10. A numerical method for integrating the kinetic equations of droplet spectra evolution by condensation/evaporation and by coalescence/breakup processes

    NASA Technical Reports Server (NTRS)

    Emukashvily, I. M.

    1982-01-01

    An extension of the method of moments is developed for the numerical integration of the kinetic equations of droplet spectra evolution by condensation/evaporation and by coalescence/breakup processes. The number density function n sub k (x,t) in each separate droplet packet between droplet mass grid points (x sub k, x sub k+1) is represented by an expansion in orthogonal polynomials with a given weighting function. In this way droplet number concentrations, liquid water contents and other moments in each droplet packet are conserved and the problem of solving the kinetic equations is replaced by one of solving a set of coupled differential equations for the number density function moments. The method is tested against analytic solutions of the corresponding kinetic equations. Numerical results are obtained for different coalescence/breakup and condensation/evaporation kernels and for different initial droplet spectra. Also droplet mass grid intervals, weighting functions, and time steps are varied.

  11. Solar Radiation and the UV Index: An Application of Numerical Integration, Trigonometric Functions, Online Education and the Modelling Process

    ERIC Educational Resources Information Center

    Downs, Nathan; Parisi, Alfio V.; Galligan, Linda; Turner, Joanna; Amar, Abdurazaq; King, Rachel; Ultra, Filipina; Butler, Harry

    2016-01-01

    A short series of practical classroom mathematics activities employing the use of a large and publicly accessible scientific data set are presented for use by students in years 9 and 10. The activities introduce and build understanding of integral calculus and trigonometric functions through the presentation of practical problem solving that…

  12. Numerical study identifying the factors causing the significant underestimation of the specific discharge estimated using the modified integral pumping test method in a laboratory experiment

    NASA Astrophysics Data System (ADS)

    Sun, Kerang

    2015-09-01

    A three-dimensional finite element model is constructed to simulate the experimental conditions presented in a paper published in this journal [Goltz et al., 2009. Validation of two innovative methods to measure contaminant mass flux in groundwater. Journal of Contaminant Hydrology 106 (2009) 51-61] where the modified integral pumping test (MIPT) method was found to significantly underestimate the specific discharge in an artificial aquifer. The numerical model closely replicates the experimental configuration with explicit representation of the pumping well column and skin, allowing for the model to simulate the wellbore flow in the pumping well as an integral part of the porous media flow in the aquifer using the equivalent hydraulic conductivity approach. The equivalent hydraulic conductivity is used to account for head losses due to friction within the wellbore of the pumping well. Applying the MIPT method on the model simulated piezometric heads resulted in a specific discharge that underestimates the true specific discharge in the experimental aquifer by 18.8%, compared with the 57% underestimation of mass flux by the experiment reported by Goltz et al. (2009). Alternative simulation shows that the numerical model is capable of approximately replicating the experiment results when the equivalent hydraulic conductivity is reduced by an order of magnitude, suggesting that the accuracy of the MIPT estimation could be improved by expanding the physical meaning of the equivalent hydraulic conductivity to account for other factors such as orifice losses in addition to frictional losses within the wellbore. Numerical experiments also show that when applying the MIPT method to estimate hydraulic parameters, use of depth-integrated piezometric head instead of the head near the pump intake can reduce the estimation error resulting from well losses, but not the error associated with the well not being fully screened.

  13. Numerical study identifying the factors causing the significant underestimation of the specific discharge estimated using the modified integral pumping test method in a laboratory experiment.

    PubMed

    Sun, Kerang

    2015-09-01

    A three-dimensional finite element model is constructed to simulate the experimental conditions presented in a paper published in this journal [Goltz et al., 2009. Validation of two innovative methods to measure contaminant mass flux in groundwater. Journal of Contaminant Hydrology 106 (2009) 51-61] where the modified integral pumping test (MIPT) method was found to significantly underestimate the specific discharge in an artificial aquifer. The numerical model closely replicates the experimental configuration with explicit representation of the pumping well column and skin, allowing for the model to simulate the wellbore flow in the pumping well as an integral part of the porous media flow in the aquifer using the equivalent hydraulic conductivity approach. The equivalent hydraulic conductivity is used to account for head losses due to friction within the wellbore of the pumping well. Applying the MIPT method on the model simulated piezometric heads resulted in a specific discharge that underestimates the true specific discharge in the experimental aquifer by 18.8%, compared with the 57% underestimation of mass flux by the experiment reported by Goltz et al. (2009). Alternative simulation shows that the numerical model is capable of approximately replicating the experiment results when the equivalent hydraulic conductivity is reduced by an order of magnitude, suggesting that the accuracy of the MIPT estimation could be improved by expanding the physical meaning of the equivalent hydraulic conductivity to account for other factors such as orifice losses in addition to frictional losses within the wellbore. Numerical experiments also show that when applying the MIPT method to estimate hydraulic parameters, use of depth-integrated piezometric head instead of the head near the pump intake can reduce the estimation error resulting from well losses, but not the error associated with the well not being fully screened. PMID:26210034

  14. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants, which preserve monotonicity as well as uniform third and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
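    For context, a baseline monotone cubic interpolant (the Fritsch-Carlson scheme, available as SciPy's PchipInterpolator) demonstrates the monotonicity preservation the abstract starts from; the paper's accuracy-restoring modifications near extrema are not reproduced here. The data values are illustrative.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Monotone data with a sharp step; an unconstrained cubic spline would
# typically overshoot here, while PCHIP preserves monotonicity.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.0, 1.0, 1.0, 1.0])

xs = np.linspace(0.0, 4.0, 401)
p = PchipInterpolator(x, y)(xs)

# The interpolant stays within the data range and never decreases.
print(bool(p.min() >= -1e-12 and p.max() <= 1.0 + 1e-12))
print(bool(np.all(np.diff(p) >= -1e-12)))
```

    The price of this guarantee is the accuracy loss near extrema that the abstract describes: the monotonicity-limited derivatives flatten the interpolant around the turning points.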

  15. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
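    The order-of-accuracy claims behind such schemes can be checked numerically. This sketch compares standard 2nd- and 4th-order central-difference first-derivative stencils (textbook stencils, not the paper's algorithms): halving the step should reduce the error by factors of about 4 and 16 respectively.

```python
import math

# Central-difference first-derivative stencils of 2nd and 4th order,
# applied to f = sin at x = 1, where the exact derivative is cos(1).
def d1_o2(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

def d1_o4(f, x, h):
    return (-f(x + 2 * h) + 8 * f(x + h)
            - 8 * f(x - h) + f(x - 2 * h)) / (12 * h)

x, exact = 1.0, math.cos(1.0)
e2 = [abs(d1_o2(math.sin, x, h) - exact) for h in (0.1, 0.05)]
e4 = [abs(d1_o4(math.sin, x, h) - exact) for h in (0.1, 0.05)]

print(round(e2[0] / e2[1]))  # error ratio ~ 2^2 = 4
print(round(e4[0] / e4[1]))  # error ratio ~ 2^4 = 16
```

    The observed error ratios of roughly 4 and 16 confirm the nominal orders; at very small h the measurement would eventually be polluted by round-off, which is one reason high-order schemes are attractive at moderate resolution.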

  16. Definition of a geometric model for landslide numerical modeling from the integration of multi-source geophysical data.

    NASA Astrophysics Data System (ADS)

    Gance, Julien; Bernardie, Séverine; Grandjean, Gilles; Malet, Jean-Philippe

    2014-05-01

    Landslide hazard can be assessed through numerical hydro-mechanical models. These methods require different input data such as a geometric model, rheological constitutive laws and associated hydro-mechanical parameters, and boundary conditions. The objective of this study is to fill the gap existing between the geophysical and engineering communities. This gap prevents the engineering community from using the full information available in geophysical imagery. A landslide geometrical model contains information on the geometry and extent of the different geotechnical units of the landslide, and describes the layering and the discontinuities. It is generally drawn from punctual geotechnical tests, using interpolation, or better, from the combined use of a geotechnical test and the iso-values of geophysical tomographies. In this context, we propose to use a multi-source geophysical data fusion strategy as an aid for the construction of landslide geometric models. Based on a fuzzy logic data fusion method, we propose to use different geophysical tomographies and their associated uncertainty and sensitivity tomograms to design a "probable" geometric model. This strategy is tested on a profile of the Super-Sauze landslide using P-wave velocity, P-wave attenuation and electrical resistivity tomography. We construct a probable model and a true model for numerical modeling. Using basic elastic constitutive laws, we show that the model geometry is sufficiently detailed to simulate the complex surface displacement pattern.

  17. An accurate quadrature technique for the contact boundary in 3D finite element computations

    NASA Astrophysics Data System (ADS)

    Duong, Thang X.; Sauer, Roger A.

    2015-01-01

    This paper presents a new numerical integration technique for 3D contact finite element implementations, focusing on a remedy for the inaccurate integration due to discontinuities at the boundary of contact surfaces. The method is based on the adaptive refinement of the integration domain along the boundary of the contact surface, and is accordingly denoted RBQ for refined boundary quadrature. It can be used for common element types of any order, e.g. Lagrange, NURBS, or T-Spline elements. In terms of both computational speed and accuracy, RBQ exhibits great advantages over a naive increase of the number of quadrature points. Also, the RBQ method is shown to remain accurate for large deformations. Furthermore, since the sharp boundary of the contact surface is determined, it can be used for various purposes like the accurate post-processing of the contact pressure. Several examples are presented to illustrate the new technique.
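    The core difficulty RBQ addresses, quadrature across a discontinuity, can be illustrated in one dimension (a toy, not the RBQ algorithm itself): a Gauss rule straddling a jump converges slowly, while splitting the integration domain at the boundary makes each piece smooth and restores full accuracy. The integrand and jump location below are illustrative.

```python
import numpy as np

# Integrate a step function on [0, 1] with a jump at c (exact value 1 - c).
c = 1.0 / 3.0
f = lambda x: np.where(x >= c, 1.0, 0.0)

def gauss(a, b, n):
    """n-point Gauss-Legendre quadrature of f over [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)
    xm, xr = 0.5 * (a + b), 0.5 * (b - a)
    return xr * np.sum(w * f(xm + xr * x))

exact = 1.0 - c
naive = gauss(0.0, 1.0, 20)                     # one rule across the jump
split = gauss(0.0, c, 20) + gauss(c, 1.0, 20)   # "refined" at the boundary

print(abs(naive - exact) > 1e-4)   # straddling the jump: large error
print(abs(split - exact) < 1e-12)  # split at the jump: essentially exact
```

    In 3D contact problems the "jump" is the boundary of the active contact region cutting through an element, which is why adaptively refining the integration domain along that boundary pays off so much more than simply raising the number of quadrature points.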

  18. Integration of bed characteristics, geochemical tracers, current measurements, and numerical modeling for assessing the provenance of beach sand in the San Francisco Bay coastal system

    USGS Publications Warehouse

    Barnard, Patrick L.; Foxgrover, Amy C.; Elias, Edwin P.L.; Erikson, Li H.; Hein, James R.; McGann, Mary; Mizell, Kira; Rosenbauer, Robert J.; Swarzenski, Peter W.; Takesue, Renee K.; Wong, Florence L.; Woodrow, Donald L.

    2013-01-01

    Over 150 million m3 of sand-sized sediment has disappeared from the central region of the San Francisco Bay Coastal System during the last half century. This enormous loss may reflect numerous anthropogenic influences, such as watershed damming, bay-fill development, aggregate mining, and dredging. The reduction in Bay sediment also appears to be linked to a reduction in sediment supply and recent widespread erosion of adjacent beaches, wetlands, and submarine environments. A unique, multi-faceted provenance study was performed to definitively establish the primary sources, sinks, and transport pathways of beach-sized sand in the region, thereby identifying the activities and processes that directly limit supply to the outer coast. This integrative program is based on comprehensive surficial sediment sampling of the San Francisco Bay Coastal System, including the seabed, Bay floor, area beaches, adjacent rock units, and major drainages. Analyses of sample morphometrics and biological composition (e.g., Foraminifera) were then integrated with a suite of tracers including 87Sr/86Sr and 143Nd/144Nd isotopes, rare earth elements, semi-quantitative X-ray diffraction mineralogy, and heavy minerals, and with process-based numerical modeling, in situ current measurements, and bedform asymmetry to robustly determine the provenance of beach-sized sand in the region.

  20. Volume calculation of subsurface structures and traps in hydrocarbon exploration — a comparison between numerical integration and cell based models

    NASA Astrophysics Data System (ADS)

    Slavinić, Petra; Cvetković, Marko

    2016-01-01

    The volume calculation of geological structures is one of the primary goals when dealing with the exploration or production of oil and gas. Most of these calculations are done using advanced software packages, but the mathematical workflow (equations) still has to be used and understood for the initial volume calculation process. In this paper a comparison is given between bulk volume calculations of geological structures using the trapezoidal and Simpson's rules and those obtained from cell-based models. The comparison is illustrated with four models: a dome (half sphere), an elongated anticline, a stratigraphic trap due to lateral facies change, and a faulted anticline trap. Results show that Simpson's and the trapezoidal rules give a very accurate volume calculation even with few inputs (isopach areas, i.e. ordinates). A test of cell-based model volume calculation precision against grid resolution is presented for various cases. For high accuracy, less than 1% error from coarsening, a cell area has to be 0.0008% of the reservoir area.
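
The trapezoidal and Simpson's rules mentioned above can be applied directly to contour areas sampled at regular depth intervals. A minimal sketch for a hemispherical dome, where the exact volume is known (radius and ordinate count are illustrative):

```python
import numpy as np

R = 100.0                       # dome radius [m], hypothetical
n = 8                           # even number of slices (required by Simpson's rule)
h = np.linspace(0.0, R, n + 1)  # structural-contour depths below the crest
A = np.pi * (R**2 - h**2)       # contour (isopach-style) areas of a hemispherical dome
dz = R / n

# Composite trapezoidal and Simpson's rules over the area-vs-depth ordinates.
V_trap = dz * (A[0] / 2 + A[1:-1].sum() + A[-1] / 2)
V_simp = dz / 3 * (A[0] + 4 * A[1:-1:2].sum() + 2 * A[2:-1:2].sum() + A[-1])
V_exact = 2.0 / 3.0 * np.pi * R**3   # hemisphere volume for comparison

print(V_trap, V_simp, V_exact)
```

Since the area varies quadratically with depth here, Simpson's rule is exact, and even the trapezoidal estimate is off by well under 1%, consistent with the paper's finding that a few ordinates suffice.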

  1. Performance of heterogeneous computing with graphics processing unit and many integrated core for hartree potential calculations on a numerical grid.

    PubMed

    Choi, Sunghwan; Kwon, Oh-Kyoung; Kim, Jaewook; Kim, Woo Youn

    2016-09-15

    We investigated the performance of heterogeneous computing with graphics processing units (GPUs) and many integrated core (MIC) with 20 CPU cores (20×CPU). As a practical example toward large-scale electronic structure calculations using grid-based methods, we evaluated the Hartree potentials of silver nanoparticles with various sizes (3.1, 3.7, 4.9, 6.1, and 6.9 nm) via a direct integral method supported by the sinc basis set. The so-called work-stealing scheduler was used for efficient heterogeneous computing via the balanced dynamic distribution of workloads between all processors on a given architecture without any prior information on their individual performances. 20×CPU + 1GPU was up to ∼1.5 and ∼3.1 times faster than 1GPU and 20×CPU, respectively. 20×CPU + 2GPU was ∼4.3 times faster than 20×CPU. The performance enhancement by CPU + MIC was considerably lower than expected because of the large initialization overhead of MIC, although its theoretical performance is similar to that of CPU + GPU. © 2016 Wiley Periodicals, Inc. PMID:27431905

  2. Towards an integrated numerical simulator for crack-seal vein microstructure: Coupling phase-field with the Discrete Element Method

    NASA Astrophysics Data System (ADS)

    Virgo, Simon; Ankit, Kumar; Nestler, Britta; Urai, Janos L.

    2016-04-01

    Crack-seal veins form in a complex interplay of coupled thermal, hydraulic, mechanical and chemical processes. Their formation and cyclic growth involves brittle fracturing and dilatancy, phases of increased fluid flow, and the growth of crystals that fill the voids and reestablish the mechanical strength. Existing numerical models of vein formation focus on selected aspects of the coupled process. To date, no model exists that is able to use a realistic representation of the fracturing AND sealing processes simultaneously. To address this challenge, we propose the bidirectional coupling of two numerical methods that have proven very powerful for modelling the fundamental processes acting in crack-seal systems: phase-field and the Discrete Element Method (DEM). The phase-field method was recently successfully extended to model the precipitation of quartz crystals from an aqueous solution and applied to model the sealing of a vein over multiple opening events (Ankit et al., 2013; Ankit et al., 2015a; Ankit et al., 2015b). The advantage over former, purely kinematic approaches is that in phase-field, the crystal growth is modeled based on thermodynamic and kinetic principles. Different driving forces for microstructure evolution, such as chemical bulk free energy, interfacial energy and elastic strain energy, and different transport processes, such as mass diffusion and advection, can be coupled and their effect on the evolution process can be studied in 3D. The Discrete Element Method has already been used in several studies to model the fracturing of rocks and the incremental growth of veins by repeated fracturing (Virgo et al., 2013; Virgo et al., 2014). Materials in DEM are represented by volumes of packed spherical particles, and the response of the material to stress is modeled by the interaction of the particles with their nearest neighbours. For rocks, in 3D, the method provides a realistic brittle failure behaviour. Exchange routines are being developed that

  3. Predictive Modeling of Chemical Hazard by Integrating Numerical Descriptors of Chemical Structures and Short-term Toxicity Assay Data

    PubMed Central

    Rusyn, Ivan; Sedykh, Alexander; Guyton, Kathryn Z.; Tropsha, Alexander

    2012-01-01

    Quantitative structure-activity relationship (QSAR) models are widely used for in silico prediction of in vivo toxicity of drug candidates or environmental chemicals, adding value to candidate selection in drug development or in a search for less hazardous and more sustainable alternatives for chemicals in commerce. The development of traditional QSAR models is enabled by numerical descriptors representing the inherent chemical properties that can be easily defined for any number of molecules; however, traditional QSAR models often have limited predictive power due to the lack of data and complexity of in vivo endpoints. Although it has been indeed difficult to obtain experimentally derived toxicity data on a large number of chemicals in the past, the results of quantitative in vitro screening of thousands of environmental chemicals in hundreds of experimental systems are now available and continue to accumulate. In addition, publicly accessible toxicogenomics data collected on hundreds of chemicals provide another dimension of molecular information that is potentially useful for predictive toxicity modeling. These new characteristics of molecular bioactivity arising from short-term biological assays, i.e., in vitro screening and/or in vivo toxicogenomics data can now be exploited in combination with chemical structural information to generate hybrid QSAR–like quantitative models to predict human toxicity and carcinogenicity. Using several case studies, we illustrate the benefits of a hybrid modeling approach, namely improvements in the accuracy of models, enhanced interpretation of the most predictive features, and expanded applicability domain for wider chemical space coverage. PMID:22387746

  4. Accurate measurement of time

    NASA Astrophysics Data System (ADS)

    Itano, Wayne M.; Ramsey, Norman F.

    1993-07-01

    The paper discusses current methods for the accurate measurement of time by conventional atomic clocks, with particular attention given to the principles of operation of atomic-beam frequency standards, atomic hydrogen masers, and atomic fountains, and to the potential use of strings of trapped mercury ions as a timekeeping device more stable than conventional atomic clocks. The areas of application of ultraprecise and ultrastable time-measuring devices that tax the capacity of modern atomic clocks include radio astronomy and tests of relativity. The paper also discusses practical applications of ultraprecise clocks, such as the navigation of space vehicles and pinpointing the exact position of ships and other objects on Earth using GPS.

  5. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  6. Use of integrated analogue and numerical modelling to predict tridimensional fracture intensity in fault-related-folds.

    NASA Astrophysics Data System (ADS)

    Pizzati, Mattia; Cavozzi, Cristian; Magistroni, Corrado; Storti, Fabrizio

    2016-04-01

    Predicting fracture density patterns with low uncertainty is a fundamental issue for constraining fluid-flow pathways in thrust-related anticlines in the frontal parts of thrust-and-fold belts and accretionary prisms, which can also provide plays for hydrocarbon exploration and development. Among the drivers that combine to determine the distribution of fractures in fold-and-thrust belts, the complex kinematic pathways of folded structures play a key role. In areas with scarce and unreliable underground information, analogue modelling can provide effective support for developing and validating reliable hypotheses on structural architectures and their evolution. In this contribution, we propose a working method that combines analogue and numerical modelling. We deformed a sand-silicone multilayer to produce a non-cylindrical thrust-related anticline at the wedge toe, which was our test geological structure at the reservoir scale. We cut 60 serial cross-sections through the central part of the deformed model to analyze fault and fold geometry using dedicated software (3D Move). The cross-sections were also used to reconstruct the 3D geometry of the reference surfaces that compose the mechanical stratigraphy, using the software GoCad. From the 3D model of the experimental anticline, 3D Move was used to calculate the cumulative stress and strain undergone by the deformed reference layers at the end of the deformation and also in incremental steps of fold growth. Based on these model outputs it was also possible to predict the orientation of three main fracture sets (joints and conjugate shear fractures) and their occurrence and density on model surfaces. The next step was the upscaling of the fracture network to the entire digital model volume, to create discrete fracture networks (DFNs).

  7. A numerical model of continental-scale topographic evolution integrating thin sheet tectonics, river transport, and orographic precipitation

    NASA Astrophysics Data System (ADS)

    Garcia-Castellanos, Daniel; Jimenez-Munt, Ivone

    2014-05-01

    How much do erosion and sedimentation at the crust's surface influence the patterns and distribution of tectonic deformation? This question has mostly been addressed from a numerical modelling perspective, at scales ranging from local to orogenic. Here we present a model that aims at constraining this phenomenon at the continental scale. With this purpose, we couple a thin-sheet viscous model of continental deformation with a stream-power surface transport model. The model also incorporates flexural isostatic compensation, which permits the formation of large sedimentary foreland basins, and a precipitation model that reproduces basic climatic effects such as continentality, orographic rainfall and rain shadow. We quantify the feedbacks between these four processes in a synthetic scenario inspired by the India-Asia collision. The model reproduces first-order characteristics of the growth of the Tibetan Plateau as a result of the Indian indentation. A large intramountain basin (comparable to the Tarim Basin) develops when predefining a hard inherited area in the undeformed foreland (Asia). The amount of sediment trapped in it is very sensitive to climatic parameters, particularly to evaporation, because it crucially determines its endorheic/exorheic drainage. We find that some degree of feedback between deep and surface processes occurs, leading locally to a <20% increase in deformation rates when orographic precipitation is accounted for (relative to a reference model with evenly distributed precipitation). This enhanced crustal thickening takes place particularly in areas of concentrated precipitation and steep slope, i.e., at the upwind flank of the growing plateau. The effect is particularly enhanced at the corners of the indenter (syntaxes). We hypothesize that this may provide clues for better understanding the mechanisms underlying the intriguing tectonic aneurysms documented in the syntaxes of the Himalayas.

  8. Numerical calculations of two dimensional, unsteady transonic flows with circulation

    NASA Technical Reports Server (NTRS)

    Beam, R. M.; Warming, R. F.

    1974-01-01

    The feasibility of obtaining two-dimensional, unsteady transonic aerodynamic data by numerically integrating the Euler equations is investigated. An explicit, third-order-accurate, noncentered, finite-difference scheme is used to compute unsteady flows about airfoils. Solutions for lifting and nonlifting airfoils are presented and compared with subsonic linear theory. The applicability and efficiency of the numerical indicial function method are outlined. Numerically computed subsonic and transonic oscillatory aerodynamic coefficients are presented and compared with those obtained from subsonic linear theory and transonic wind-tunnel data.

  9. NMR signal for particles diffusing under potentials: From path integrals and numerical methods to a model of diffusion anisotropy

    NASA Astrophysics Data System (ADS)

    Yolcu, Cem; Memiç, Muhammet; Şimşek, Kadir; Westin, Carl-Fredrik; Özarslan, Evren

    2016-05-01

    We study the influence of diffusion on NMR experiments when the molecules undergo random motion under the influence of a force field and place special emphasis on parabolic (Hookean) potentials. To this end, the problem is studied using path integral methods. Explicit relationships are derived for commonly employed gradient waveforms involving pulsed and oscillating gradients. The Bloch-Torrey equation, describing the temporal evolution of magnetization, is modified by incorporating potentials. A general solution to this equation is obtained for the case of parabolic potential by adopting the multiple correlation function (MCF) formalism, which has been used in the past to quantify the effects of restricted diffusion. Both analytical and MCF results were found to be in agreement with random walk simulations. A multidimensional formulation of the problem is introduced that leads to a new characterization of diffusion anisotropy. Unlike the case of traditional methods that employ a diffusion tensor, anisotropy originates from the tensorial force constant, and bulk diffusivity is retained in the formulation. Our findings suggest that some features of the NMR signal that have traditionally been attributed to restricted diffusion are accommodated by the Hookean model. Under certain conditions, the formalism can be envisioned to provide a viable approximation to the mathematically more challenging restricted diffusion problems.

  10. The astronomical rhythm of Late-Devonian climate change: an integration of cyclostratigraphy and numerical climate modeling

    NASA Astrophysics Data System (ADS)

    De Vleeschouwer, David; Rakocinski, Michal; Racki, Grzegorz; Bond, David; Sobien, Katarzyna; Bounceur, Nabila; Crucifix, Michel; Claeys, Philippe

    2013-04-01

    Rhythmical alternations between limestone and shales or marls characterize the famous Kowala section, Holy Cross Mountains, Poland. Two intervals of this section were studied for evidence of orbital cyclostratigraphy. The oldest interval spans the Frasnian - Famennian (Late Devonian) boundary, deposited under one of the hottest greenhouse climates of the Phanerozoic. The youngest interval encompasses the Devonian - Carboniferous (D-C) boundary, a pivotal moment in Earth's climatic history that saw a transition from greenhouse to icehouse. In both intervals, a clear eccentricity imprint can be distinguished. However, in this abstract, we will focus on the Famennian - Tournaisian (D-C) interval. This interval reveals eccentricity and precession-related lithological variations. Precession-related alternations clearly demonstrate grouping into 100-kyr bundles. The Famennian part of this interval is characterized by several distinctive anoxic black shales, including the Annulata, Dasberg and Hangenberg shales. Our high-resolution cyclostratigraphic framework indicates that those shales were deposited at 2.2 and 2.4 Myr intervals respectively. These durations strongly suggest a link between the long-period (~2.4 Myr) eccentricity cycle and the development of the Annulata, Dasberg and Hangenberg anoxic shales. It is assumed that these black shales form under transgressive conditions, when extremely high eccentricity promoted the collapse of small continental ice-sheets at the most austral latitudes of western Gondwana. Indeed, numerical GCM modeling (HadSM3) of the Late Devonian climate, suggests that rapid melting and ice sheet collapse is triggered during maximal austral summer insolation when eccentricity is high and the perihelion is reached in December. Under this particular astronomical configuration, the global climate is optimal and thus sea-levels are high. Moreover, the global hydrological cycle is enhanced, allowing for more intense rainfall and monsoonal

  11. Numerical solution of a diffusion problem by exponentially fitted finite difference methods.

    PubMed

    D'Ambrosio, Raffaele; Paternoster, Beatrice

    2014-01-01

    This paper is focused on the accurate and efficient solution of partial differential equations modelling a diffusion problem by means of exponentially fitted finite difference numerical methods. After constructing and analysing special-purpose finite differences for the approximation of second-order partial derivatives, we employ them in the numerical solution of a diffusion equation with mixed boundary conditions. Numerical experiments reveal that a special-purpose integration, both in space and in time, is more accurate and efficient than that gained by employing a general-purpose solver. PMID:26034665
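
As a baseline point of reference, the classical (non-fitted) central-difference treatment of a diffusion equation can be sketched as follows. The exponentially fitted methods of the paper replace this standard second-derivative stencil, so everything below is the textbook scheme, not the paper's method, and the problem parameters are illustrative:

```python
import numpy as np

D, L, T = 1.0, 1.0, 0.1        # diffusivity, domain length, final time
nx = 51
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D           # inside the FTCS stability limit dt <= dx^2 / (2 D)
x = np.linspace(0.0, L, nx)
u = np.sin(np.pi * x)          # initial profile with a known exact solution

t = 0.0
while t < T:
    # second-order central difference in space, forward Euler in time
    u[1:-1] += dt * D * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u[0] = u[-1] = 0.0         # homogeneous Dirichlet boundaries
    t += dt

u_exact = np.exp(-np.pi**2 * D * t) * np.sin(np.pi * x)
print(np.max(np.abs(u - u_exact)))
```

For this smooth test problem the standard stencil is already adequate; the exponential fitting pays off when the solution has boundary layers or known exponential behaviour that polynomial-based stencils resolve poorly.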

  12. Improved Integral Equation Solution for the First Passage Time of Leaky Integrate-and-Fire Neurons

    PubMed Central

    Dong, Yi; Mihalas, Stefan; Niebur, Ernst

    2011-01-01

    An accurate calculation of the first passage time probability density (FPTPD) is essential for computing the likelihood of solutions of the stochastic leaky integrate-and-fire model. The previously proposed numerical calculation of the FPTPD based on the integral equation method discretizes the probability current of the voltage crossing the threshold. While the method is accurate for high noise levels, we show that it results in large numerical errors for small noise. The problem is solved by analytically computing, in each time bin, the mean probability current. Efficiency is further improved by identifying and ignoring time bins with negligible mean probability current. PMID:21105825
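
A brute-force Monte Carlo estimate of the first passage time offers a simple cross-check for integral-equation methods like the one described above. All parameters below are hypothetical and this is not the paper's algorithm, only an Euler-Maruyama reference simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
tau, mu, sigma = 20.0, 1.2, 0.5      # membrane time constant [ms], drift, noise amplitude
v_th, v0 = 1.0, 0.0                  # firing threshold and initial voltage
dt, t_max, n_trials = 0.1, 200.0, 5000

v = np.full(n_trials, v0)
fpt = np.full(n_trials, np.nan)      # first passage times (NaN = never crossed)
alive = np.ones(n_trials, dtype=bool)
t = 0.0
while t < t_max and alive.any():
    idx = np.flatnonzero(alive)
    # Euler-Maruyama step of the leaky integrate-and-fire dynamics.
    v[idx] += dt * (mu - v[idx]) / tau + sigma * np.sqrt(dt / tau) * rng.standard_normal(idx.size)
    t += dt
    hit = idx[v[idx] >= v_th]        # trials crossing threshold this step
    fpt[hit] = t
    alive[hit] = False

crossed = fpt[np.isfinite(fpt)]
print(f"{crossed.size / n_trials:.1%} of trials crossed; mean FPT = {crossed.mean():.1f} ms")
```

Because the drift (mu above threshold) makes firing nearly certain here, a histogram of `fpt` approximates the first passage time density that the integral-equation method computes far more efficiently and accurately at low noise.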

  13. Numerical analysis and synthesis of 2D quasi-optical reflectors and beam waveguides based on an integral-equation approach with Nystrom's discretization.

    PubMed

    Nosich, Andrey A; Gandel, Yuriy V; Magath, Thore; Altintas, Ayhan

    2007-09-01

    Considered is the beam wave guidance and scattering by 2D quasi-optical reflectors modeling the components of beam waveguides. The incident field is taken as the complex-source-point field to simulate a finite-width beam generated by a small-aperture source. A numerical solution is obtained from the coupled singular integral equations (SIEs) for the surface currents on the reflectors, discretized by using the recently introduced Nystrom-type quadrature formulas. This analysis is applied to study the effect of edge illumination on the performance of a chain of confocal elliptic reflectors. We also develop a semianalytical approach for shaped-reflector synthesis from a prescribed near-field pattern. Here a new point is the use of auxiliary SIEs of the same type as in the scattering analysis problem, however, for the gradient of the objective function. Sample results are presented for the synthesis of a reflector-type beam splitter. PMID:17767252
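
The Nystrom idea underlying such discretizations can be sketched for a smooth (non-singular) Fredholm kernel of the second kind. The paper's singular integral equations require specialized quadrature formulas, so this toy example shows only the basic mechanism, with a kernel chosen to have a known exact solution:

```python
import numpy as np

# Fredholm equation of the second kind with a smooth separable kernel,
#   u(x) = f(x) + \int_0^1 x t u(t) dt,
# where f is chosen so that the exact solution is u(x) = x.
n = 16
nodes, weights = np.polynomial.legendre.leggauss(n)
x = 0.5 * (nodes + 1.0)   # Gauss-Legendre nodes mapped from [-1, 1] to [0, 1]
w = 0.5 * weights

K = np.outer(x, x)        # kernel K(x, t) = x * t sampled at the nodes
f = 2.0 * x / 3.0         # right-hand side consistent with u(x) = x

# Nystrom: collocate at the quadrature nodes and solve (I - K W) u = f.
u = np.linalg.solve(np.eye(n) - K * w, f)
print(np.max(np.abs(u - x)))
```

The quadrature rule turns the integral operator into a dense matrix, so the unknown function is recovered at the nodes by a single linear solve; for singular kernels the weights must be modified, which is what the Nystrom-type formulas cited above provide.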

  14. Integrated Water Flow Model (IWFM), A Tool For Numerically Simulating Linked Groundwater, Surface Water And Land-Surface Hydrologic Processes

    NASA Astrophysics Data System (ADS)

    Dogrul, E. C.; Brush, C. F.; Kadir, T. N.

    2006-12-01

    The Integrated Water Flow Model (IWFM) is a comprehensive input-driven application for simulating groundwater flow, surface water flow and land-surface hydrologic processes, and interactions between these processes, developed by the California Department of Water Resources (DWR). IWFM couples a 3-D finite element groundwater flow process and 1-D land surface, lake, stream flow and vertical unsaturated-zone flow processes which are solved simultaneously at each time step. The groundwater flow system is simulated as a multilayer aquifer system with a mixture of confined and unconfined aquifers separated by semiconfining layers. The groundwater flow process can simulate changing aquifer conditions (confined to unconfined and vice versa), subsidence, tile drains, injection wells and pumping wells. The land surface process calculates elemental water budgets for agricultural, urban, riparian and native vegetation classes. Crop water demands are dynamically calculated using distributed soil properties, land use and crop data, and precipitation and evapotranspiration rates. The crop mix can also be automatically modified as a function of pumping lift using logit functions. Surface water diversions and groundwater pumping can each be specified, or can be automatically adjusted at run time to balance water supply with water demand. The land-surface process also routes runoff to streams and deep percolation to the unsaturated zone. Surface water networks are specified as a series of stream nodes (coincident with groundwater nodes) with specified bed elevation, conductance and stage-flow relationships. Stream nodes are linked to form stream reaches. Stream inflows at the model boundary, surface water diversion locations, and one or more surface water deliveries per location are specified. IWFM routes stream flows through the network, calculating groundwater-surface water interactions, accumulating inflows from runoff, and allocating available stream flows to meet specified or

  15. Important Nearby Galaxies without Accurate Distances

    NASA Astrophysics Data System (ADS)

    McQuinn, Kristen

    2014-10-01

    The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis upon which we interpret the distant universe, and the SINGS sample represents the best-studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous conflicting distance estimates, resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well-known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand-design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high-resolution images of nearby galaxies.

  16. The Cenozoic fold-and-thrust belt of Eastern Sardinia: Evidence from the integration of field data with a numerically balanced geological cross section

    NASA Astrophysics Data System (ADS)

    Arragoni, S.; Maggi, M.; Cianfarra, P.; Salvini, F.

    2016-06-01

    Newly collected structural data in Eastern Sardinia (Italy), integrated with numerical techniques, led to the reconstruction of a 2-D admissible and balanced model revealing the presence of a widespread Cenozoic fold-and-thrust belt. The model was achieved with the FORC software, obtaining a 3-D (2-D + time) numerical reconstruction of the continuous evolution of the structure through time. The Mesozoic carbonate units of Eastern Sardinia and their basement present a fold-and-thrust tectonic setting, with a westward direction of tectonic transport (referred to the present-day coordinates). The tectonic style of the upper levels is thin-skinned, with flat sectors prevailing over ramps and younger-on-older thrusts. Three regional tectonic units are present, bounded by two regional thrusts. Strike-slip faults overprint the fold-and-thrust belt and developed during the Sardinia-Corsica Block rotation along the strike of the preexisting fault ramps, not affecting the numerical section balancing. This fold-and-thrust belt represents the southward continuation of the Alpine Corsica collisional chain and the missing link between the Alpine Chain and the Calabria-Peloritani Block. Relative ages relate its evolution to the meso-Alpine event (Eocene-Oligocene times), prior to the opening of the Tyrrhenian Sea (Tortonian). The results fill a gap of information about the geodynamic evolution of the European margin in the Central Mediterranean, between Corsica and the Calabria-Peloritani Block, and imply the presence of remnants of this double-verging belt, missing in the Southern Tyrrhenian basin, within the Southern Apennine chain. The methodology used proved effective for constraining balanced cross sections also in areas lacking exposures of the large-scale structures, as is the case in Eastern Sardinia.

  17. Numerical valuation of discrete double barrier options

    NASA Astrophysics Data System (ADS)

    Milev, Mariyan; Tagliani, Aldo

    2010-03-01

    In the present paper we explore the problem of pricing discrete barrier options under the Black-Scholes model for the random movement of the asset price. We formulate the problem as a path integral calculation, choosing an approach similar to the quadrature method. Thus, the problem is reduced to the estimation of a multi-dimensional integral whose dimension corresponds to the number of monitoring dates. We propose a fast and accurate numerical algorithm for its valuation. Our results for pricing discretely monitored single and double barrier options are in agreement with those obtained by other numerical and analytical methods in the finance literature. A desired level of accuracy is achieved very quickly for values of the underlying asset close to the strike price or the barriers. The method has a simple computer implementation and it permits observing the entire life of the option.
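
A Monte Carlo benchmark for a discretely monitored double-barrier knock-out call gives a feel for the problem that the quadrature method solves as a recursive low-dimensional integral. All parameters below are hypothetical, and this simulation is a reference check rather than the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(42)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 0.5   # Black-Scholes parameters
lo, hi = 80.0, 120.0        # lower and upper knock-out barriers
m, n_paths = 5, 200_000     # monitoring dates and simulated paths
dt = T / m

# Exact lognormal stepping between the discrete monitoring dates.
z = rng.standard_normal((n_paths, m))
steps = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
s = S0 * np.exp(np.cumsum(steps, axis=1))

alive = np.all((s > lo) & (s < hi), axis=1)   # knocked out if any monitored price leaves (lo, hi)
payoff = np.where(alive, np.maximum(s[:, -1] - K, 0.0), 0.0)
price = np.exp(-r * T) * payoff.mean()
print(f"discrete double-barrier knock-out call ≈ {price:.3f}")
```

Because the barrier is checked only at the monitoring dates, the path distribution between dates is irrelevant, which is exactly why the price collapses to an m-dimensional integral over the monitored prices.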

  18. On Numerical Methods For Hypersonic Turbulent Flows

    NASA Astrophysics Data System (ADS)

    Yee, H. C.; Sjogreen, B.; Shu, C. W.; Wang, W.; Magin, T.; Hadjadj, A.

    2011-05-01

    Proper control of numerical dissipation in numerical methods beyond the standard shock-capturing dissipation at discontinuities is an essential element for the accurate and stable simulation of hypersonic turbulent flows, including combustion, and thermal and chemical nonequilibrium flows. Unlike rapidly developing shock interaction flows, turbulence computations involve long time integrations. Improper control of numerical dissipation from one time step to another would be compounded over time, resulting in the smearing of turbulent fluctuations to an unrecognizable form. Hypersonic turbulent flows around re-entry space vehicles involve mixed steady strong shocks and turbulence with unsteady shocklets that pose added computational challenges. Stiffness of the source terms and material mixing in combustion pose yet other types of numerical challenges. A low-dissipative high-order well-balanced scheme, which can preserve certain non-trivial steady solutions of the governing equations exactly, may help minimize some of these difficulties. For stiff reactions it is well known that the wrong propagation speed of discontinuities occurs due to under-resolved numerical solutions in both space and time. Schemes to improve the wrong propagation speed of discontinuities for systems of stiff reacting flows remain a challenge for algorithm development. Some of the recent algorithm developments for direct numerical simulations (DNS) and large eddy simulations (LES) for the subject physics, including the aforementioned numerical challenges, will be discussed.

  19. An accurate and efficient method for prediction of the long-term evolution of space debris in the geosynchronous region

    NASA Astrophysics Data System (ADS)

    McNamara, Roger P.; Eagle, C. D.

    1992-08-01

    The Planetary Observer High Accuracy Orbit Prediction Program (POHOP), an existing numerical integrator, was modified with the solar and lunar formulae developed by T.C. Van Flandern and K.F. Pulkkinen to provide the accuracy required to evaluate long-term orbit characteristics of objects in the geosynchronous region. The orbit of a 1000 kg class spacecraft is numerically integrated over 50 years using both the original and the more accurate solar and lunar ephemerides. Results of this study demonstrate that, over the long term, the more accurate solar and lunar ephemerides yield significantly different positions for an object in the geosynchronous region than the current POHOP ephemeris.

  20. Assessment of vulnerability in karst aquifers using a quantitative integrated numerical model: catchment characterization and high resolution monitoring - Application to semi-arid regions- Lebanon.

    NASA Astrophysics Data System (ADS)

    Doummar, Joanna; Aoun, Michel; Andari, Fouad

    2016-04-01

    Karst aquifers are highly heterogeneous and characterized by a duality of recharge (concentrated and fast versus diffuse and slow) and a duality of flow, which directly influences groundwater flow and spring responses. Given this heterogeneity in flow and infiltration, karst aquifers do not always obey standard hydraulic laws, so the assessment of their vulnerability proves challenging. Studies have shown that aquifer vulnerability is largely governed by recharge to groundwater. On the other hand, specific parameters appear to play a major role in the spatial and temporal distribution of infiltration in a karst system, greatly influencing the discharge rates observed at a karst spring and, consequently, the vulnerability of that spring. This heterogeneity can only be depicted using an integrated numerical model that quantifies recharge spatially and assesses the spatial and temporal vulnerability of a catchment to contamination. In the framework of a three-year PEER NSF/USAID-funded project, the vulnerability of a karst catchment in Lebanon is assessed quantitatively using a numerical approach. The project also aims to refine actual evapotranspiration rates and the spatial recharge distribution in a semi-arid environment. For this purpose, a monitoring network has been in place since July 2014 on two pilot karst catchments (drained by the Qachqouch Spring and the Assal Spring) to collect high-resolution data for an integrated catchment numerical model built with MIKE SHE (DHI), covering climate, the unsaturated zone, and the saturated zone. Catchment characterization essential for the model included geological mapping and a survey of karst features (e.g., dolines), as they contribute to fast flow. Tracer experiments were performed under different flow conditions (snowmelt and low flow) to delineate the catchment area and reveal groundwater velocities and the response to snowmelt events. 
An assessment of spring response after precipitation events allowed the estimation of the

  1. Estimation of Geologic Storage Capacity of Carbon Dioxide in the Bukpyeong Basin, Korea Using Integrated Three-Dimensional Geologic Formation Modeling and Thermo-Hydrological Numerical Modeling

    NASA Astrophysics Data System (ADS)

    Kim, J.; Kihm, J.; Park, S.; SNU CO2 GEO-SEQ TEAM

    2011-12-01

    A conventional method, suggested by NETL (2007), has been widely used for estimating the geologic storage capacity of carbon dioxide in sedimentary basins. Because of its simple procedure, it has been applied straightforwardly even to spatially very complicated sedimentary basins. However, the results from the conventional method are often inaccurate and unreliable because it cannot account for the spatial distributions of fluid conditions and carbon dioxide properties, which are not uniform but vary within sedimentary basins. To overcome this limitation, a new method that can account for such spatially variable distributions of fluid conditions and carbon dioxide properties within sedimentary basins is suggested and applied in this study. In this new method, a three-dimensional geologic formation model of a target sedimentary basin is first established and discretized into volume elements. The fluid conditions (i.e., pressure, temperature, and salt concentration) within each element are then obtained by performing thermo-hydrological numerical modeling. The carbon dioxide properties (i.e., phase, density, dynamic viscosity, and solubility in groundwater) within each element are then calculated from a thermodynamic database under the corresponding fluid conditions. Finally, the geologic storage capacity of carbon dioxide within each element is estimated using the corresponding carbon dioxide properties as well as the porosity and element volume, and the capacity of the whole sedimentary basin is determined by summation over all elements. This new method is applied to the Bukpyeong Basin, which is one of the prospective offshore sedimentary basins for geologic storage of carbon dioxide in Korea. A three-dimensional geologic formation model of the Bukpyeong Basin is first established considering the elevation data of the boundaries between the geologic formations obtained from seismic surveys and geologic maps at the sea floor surface. This geologic

  2. Accurate deterministic solutions for the classic Boltzmann shock profile

    NASA Astrophysics Data System (ADS)

    Yue, Yubei

    The Boltzmann equation or Boltzmann transport equation is a classical kinetic equation devised by Ludwig Boltzmann in 1872. It is regarded as a fundamental law in rarefied gas dynamics. Rather than using macroscopic quantities such as density, temperature, and pressure to describe the underlying physics, the Boltzmann equation uses a distribution function in phase space to describe the physical system, and all the macroscopic quantities are weighted averages of the distribution function. The information contained in the Boltzmann equation is surprisingly rich, and the Euler and Navier-Stokes equations of fluid dynamics can be derived from it using series expansions. Moreover, the Boltzmann equation can reach regimes far from the capabilities of fluid dynamical equations, such as the realm of rarefied gases---the topic of this thesis. Although the Boltzmann equation is very powerful, it is extremely difficult to solve in most situations. Thus the only hope is to solve it numerically. But soon one finds that even a numerical simulation of the equation is extremely difficult, due to both the complex and high-dimensional integral in the collision operator, and the hyperbolic phase-space advection terms. For this reason, until a few years ago most numerical simulations had to rely on Monte Carlo techniques. In this thesis I will present a new and robust numerical scheme to compute direct deterministic solutions of the Boltzmann equation, and I will use it to explore some classical gas-dynamical problems. In particular, I will study in detail one of the most famous and intrinsically nonlinear problems in rarefied gas dynamics, namely the accurate determination of the Boltzmann shock profile for a gas of hard spheres.

  3. Kinetics of batch anaerobic co-digestion of poultry litter and wheat straw including a novel strategy of estimation of endogenous decay and yield coefficients using numerical integration.

    PubMed

    Shen, Jiacheng; Zhu, Jun

    2016-10-01

    The kinetics of anaerobic co-digestion of poultry litter and wheat straw has not been widely reported in the literature. Although endogenous decay and yield coefficients are two basic parameters for the design of anaerobic digesters, they are currently estimated only from continuous experiments. In this study, numerical integration was employed to develop a novel strategy for estimating endogenous decay and yield coefficients using initial and final liquid data combined with the methane volumes produced over time in batch experiments. To verify this method, the kinetics of batch anaerobic co-digestion of poultry litter and wheat straw at different TS and VS levels was investigated, with the corresponding endogenous decay and (non-observed) yield coefficients in the exponential periods determined to be between 0.74 × 10⁻³ and 6.1 × 10⁻³ d⁻¹, and between 0.0259 and 0.108 g VSS (g VS)⁻¹, respectively. A general Gompertz model developed earlier for bio-products could be used to simulate the methane volume profile in the co-digestion. The same model parameters obtained from the methane model, combined with the corresponding yield coefficients, could also be used to describe the VSS generation and VS destruction. PMID:27234662
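    The Gompertz model commonly fitted to cumulative methane data in batch digestion studies has the closed form sketched below. The abstract does not give the exact parameterization the authors used, so this standard modified-Gompertz form and its parameter names are assumptions.

```python
import math

def gompertz_methane(t, P, Rm, lam):
    """Modified Gompertz model for cumulative methane volume M(t):
        M(t) = P * exp(-exp(Rm * e / P * (lam - t) + 1))
    P   -- methane production potential (e.g. mL),
    Rm  -- maximum production rate (e.g. mL/d),
    lam -- lag-phase duration (d).
    All parameter values used here are illustrative."""
    return P * math.exp(-math.exp(Rm * math.e / P * (lam - t) + 1.0))
```

    The curve rises slowly during the lag phase, fastest at the inflection point, and saturates at the production potential P.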

  4. A new numerical approach to solve Thomas-Fermi model of an atom using bio-inspired heuristics integrated with sequential quadratic programming.

    PubMed

    Raja, Muhammad Asif Zahoor; Zameer, Aneela; Khan, Aziz Ullah; Wazwaz, Abdul Majid

    2016-01-01

    In this study, a novel bio-inspired computing approach is developed to analyze the dynamics of the nonlinear singular Thomas-Fermi equation (TFE), arising in potential and charge density models of an atom, by exploiting the strength of a finite difference scheme (FDS) for discretization and optimization through genetic algorithms (GAs) hybridized with sequential quadratic programming (SQP). The FDS procedures are used to transform the TFE differential equation into a system of nonlinear equations. A fitness function is constructed from the residual error of the constituent equations in the mean-square sense, and the task is formulated as a minimization problem. Optimization of the system parameters is carried out with GAs, used as a tool for viable global search, integrated with the SQP algorithm for rapid refinement of the results. The design scheme is applied to solve the TFE for five different scenarios by taking various step sizes and different input intervals. Comparison of the proposed results with state-of-the-art numerical and analytical solutions reveals the worth of our scheme in terms of accuracy and convergence. The reliability and effectiveness of the proposed scheme are validated by consistently obtaining optimal values of statistical performance indices over a sufficiently large number of independent runs to establish its significance. PMID:27610319

  5. NUMERICAL CALCULATION: ASPIRATION EFFICIENCY OF AEROSOLS INTO THIN-WALLED SAMPLING INLETS

    EPA Science Inventory

    Aspiration efficiency of particles from a flowing airstream into a thin-walled sampling inlet is accurately predicted using a numerical model. The model combines the Boundary Integral Equation Method for predicting the velocity field into the inlet with an analytical solution to t...

  6. Accurate computation of Stokes flow driven by an open immersed interface

    NASA Astrophysics Data System (ADS)

    Li, Yi; Layton, Anita T.

    2012-06-01

    We present numerical methods for computing two-dimensional Stokes flow driven by forces singularly supported along an open, immersed interface. Two second-order accurate methods are developed: one for accurately evaluating boundary integral solutions at a point, and another for computing Stokes solution values on a rectangular mesh. We first describe a method for computing singular or nearly singular integrals, such as a double layer potential due to sources on a curve in the plane, evaluated at a point on or near the curve. To improve accuracy of the numerical quadrature, we add corrections for the errors arising from discretization, which are found by asymptotic analysis. When used to solve the Stokes equations with sources on an open, immersed interface, the method generates second-order approximations, for both the pressure and the velocity, and preserves the jumps in the solutions and their derivatives across the boundary. We then combine the method with a mesh-based solver to yield a hybrid method for computing Stokes solutions at N² grid points on a rectangular grid. Numerical results are presented which exhibit second-order accuracy. To demonstrate the applicability of the method, we use the method to simulate fluid dynamics induced by the beating motion of a cilium. The method preserves the sharp jumps in the Stokes solution and their derivatives across the immersed boundary. Model results illustrate the distinct hydrodynamic effects generated by the effective stroke and by the recovery stroke of the ciliary beat cycle.
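    The idea of adding analytically derived corrections for discretization error can be illustrated, in a far simpler setting than the singular layer potentials treated in the paper, with the leading Euler-Maclaurin endpoint correction to the trapezoid rule. This is a sketch of the general technique, not the authors' method.

```python
import math

def corrected_trapezoid(f, df, a, b, n):
    """Composite trapezoid rule plus the leading Euler-Maclaurin
    endpoint correction, -h^2/12 * (f'(b) - f'(a)), which raises
    the accuracy from O(h^2) to O(h^4) for smooth integrands.
    df must be the exact derivative of f."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s - h * h / 12.0 * (df(b) - df(a))
```

    With only 16 panels on [0, 1], the corrected rule integrates exp(x) several orders of magnitude more accurately than the plain trapezoid rule.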

  7. An equivalent domain integral for analysis of two-dimensional mixed mode problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Shivakumar, K. N.

    1989-01-01

    An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies subjected to mixed mode loading is presented. The total and product integrals consist of the sum of an area or domain integral and line integrals on the crack faces. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented. The procedure that uses the symmetric and antisymmetric components of the stress and displacement fields to calculate the individual modes gave accurate values of the integrals for all the problems analyzed.

  8. Magnitude knowledge: the common core of numerical development.

    PubMed

    Siegler, Robert S

    2016-05-01

    The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: (1) representing increasingly precisely the magnitudes of non-symbolic numbers, (2) connecting small symbolic numbers to their non-symbolic referents, (3) extending understanding from smaller to larger whole numbers, and (4) accurately representing the magnitudes of rational numbers. The present review identifies substantial commonalities, as well as differences, in these four aspects of numerical development. With both whole and rational numbers, numerical magnitude knowledge is concurrently correlated with, longitudinally predictive of, and causally related to multiple aspects of mathematical understanding, including arithmetic and overall math achievement. Moreover, interventions focused on increasing numerical magnitude knowledge often generalize to other aspects of mathematics. The cognitive processes of association and analogy seem to play especially large roles in this development. Thus, acquisition of numerical magnitude knowledge can be seen as the common core of numerical development. PMID:27074723

  9. Root Water Uptake and Tracer Transport in a Lupin Root System: Integration of Magnetic Resonance Images and the Numerical Model RSWMS

    NASA Astrophysics Data System (ADS)

    Pohlmeier, Andreas; Vanderborght, Jan; Haber-Pohlmeier, Sabina; Wienke, Sandra; Vereecken, Harry; Javaux, Mathieu

    2010-05-01

    Combining experimental studies with detailed deterministic models helps in understanding root water uptake processes. Recently, Javaux et al. developed the RSWMS model by integrating Doussan's root model into the well-established SWMS code [1], which simulates water and solute transport in unsaturated soil [2, 3]. In order to confront RSWMS modeling results with experimental data, we used the Magnetic Resonance Imaging (MRI) technique to monitor root water uptake in situ. Non-invasive 3-D MR imaging of root system architecture, water content distributions, and tracer transport was performed and compared with numerical model calculations. Two MRI experiments were performed and modeled: (i) water uptake during drought stress and (ii) transport of a locally injected tracer (Gd-DTPA) to the soil-root system driven by root water uptake. First, the high-resolution MRI image (0.23 × 0.23 × 0.5 mm) of the root system was converted into a continuous root system skeleton by a combination of thresholding, region-growing filtering, and final manual 3D redrawing of the root strands. Second, the two experimental scenarios were simulated by RSWMS with a resolution of about 3 mm. For scenario (i) the numerical simulations could reproduce the general trend, that is, the strong water depletion from the top layer of the soil. However, the formation of depletion zones in the vicinity of the roots could not be simulated, owing to a poor initial estimate of the soil hydraulic properties, which instantaneously equilibrates larger differences in water content. Determination of the unsaturated conductivities at low water content was needed to improve the model calculations. For scenario (ii) the simulations confirmed solute transport towards the roots by advection. 1. Simunek, J., T. Vogel, and M.T. van Genuchten, The SWMS_2D Code for Simulating Water Flow and Solute Transport in Two-Dimensional Variably Saturated Media. Version 1.21. 1994, U.S. Salinity Laboratory, USDA, ARS: Riverside, California

  10. Numerical simulation of the flow field around a complete aircraft

    NASA Technical Reports Server (NTRS)

    Shang, J. S.; Scherr, S. J.

    1986-01-01

    The present effort represents a first attempt at numerically simulating the flow field around a complete, aircraft-like lifting configuration using the Reynolds-averaged Navier-Stokes equations. The numerical solution generated for the experimental aircraft concept X24C-10D at a Mach number of 5.95 exhibited accurate prediction not only of detailed flow properties but also of the integrated aerodynamic coefficients. In addition, the present analysis demonstrated that a page structure of data collected into cyclic blocks is an efficient and viable means for processing the Navier-Stokes equations on the CRAY XMP-22 computer with an external memory device.

  11. Accurate radiative transfer calculations for layered media.

    PubMed

    Selden, Adrian C

    2016-07-01

    Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics. PMID:27409700

  12. Investigation of Geomorphic and Seismic Effects on the 1959 Madison Canyon, Montana, Landslide Using an Integrated Field, Engineering Geomorphology Mapping, and Numerical Modelling Approach

    NASA Astrophysics Data System (ADS)

    Wolter, A.; Gischig, V.; Stead, D.; Clague, J. J.

    2016-06-01

    We present an integrated approach to investigate the seismically triggered Madison Canyon landslide (volume = 20 Mm3), which killed 26 people in Montana, USA, in 1959. We created engineering geomorphological maps and conducted field surveys, long-range terrestrial digital photogrammetry, and preliminary 2D numerical modelling with the objective of determining the conditioning factors, mechanisms, movement behaviour, and evolution of the failure. We emphasise the importance of both endogenic (i.e. seismic) and exogenic (i.e. geomorphic) processes in conditioning the slope for failure and hypothesise a sequence of events based on the morphology of the deposit and seismic modelling. A section of the slope was slowly deforming before a magnitude-7.5 earthquake with an epicentre 30 km away triggered the catastrophic failure in August 1959. The failed rock mass rapidly fragmented as it descended the slope towards Madison River. Part of the mass remained relatively intact as it moved on a layer of pulverised debris. The main slide was followed by several debris slides, slumps, and rockfalls. The slide debris was extensively modified soon after the disaster by the US Army Corps of Engineers to provide a stable outflow channel from newly formed Earthquake Lake. Our modelling and observations show that the landslide occurred as a result of long-term damage of the slope induced by fluvial undercutting, erosion, weathering, and past seismicity, and due to the short-term triggering effect of the 1959 earthquake. Static models suggest the slope was stable prior to the 1959 earthquake; failure would have required a significant reduction in material strength. Preliminary dynamic models indicate that repeated seismic loading was a critical process for catastrophic failure. 
Although the ridge geometry and existing tension cracks in the initiation zone amplified ground motions, the most important factors in initiating failure were pre-existing discontinuities and seismically induced

  13. Integration of Electric Resistivity Profile and Infiltrometer Measurements to Calibrate a Numerical Model of Vertical Flow in Fractured and Karstic Limestone.

    NASA Astrophysics Data System (ADS)

    Caputo, M. C.; de Carlo, L.; Masciopinto, C.; Nimmo, J. R.

    2007-12-01

    Karstic and fractured aquifers are among the most important drinking water resources. At the same time, they are particularly vulnerable to contamination. A detailed scientific knowledge of the behavior of these aquifers is essential for the development of sustainable groundwater management concepts. Due to their special characteristics of extreme anisotropy and heterogeneity, research aimed at a better understanding of flow, solute transport, and biological processes in these hydrogeologic systems is an important scientific challenge. This study integrates a geophysical technique with an infiltrometer test to better calibrate a mathematical model that quantifies the vertical flow in karstic and fractured limestone overlying the deep aquifer of Alta Murgia (Southern Italy). Knowledge of the rate of unsaturated zone percolation is needed to investigate the vertical migration of pollutants and the vulnerability of the aquifer. Sludge waste deposits in the study area have caused soil-subsoil contamination with toxics. The experimental test consisted of infiltrometer flow measurements, more commonly utilized for unconsolidated granular porous media, during which subsoil electric resistivity data were collected. A ring infiltrometer 2 m in diameter and 0.3 m high was sealed to the ground with gypsum. This large diameter yielded infiltration data representative of the anisotropic and heterogeneous rock, which could not be sampled adequately with a small ring. The subsurface resistivity was measured using a Wenner-Schlumberger electrode array. Vertical movement of water in a fracture plane under unsaturated conditions has been investigated by means of a numerical model. The finite difference method was used to solve the flow equations. An internal iteration method was used at every time step to evaluate the nodal value of the pressure head, in agreement with the mass-balance equation and the characteristic functional relationships of the coefficients.

  14. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.

  15. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order (NNLO) accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO-accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO-accurate Born distributions. Since the Born kinematics is more complex than in cases treated previously, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross Section Working Group.

  16. An equivalent domain integral method in the two-dimensional analysis of mixed mode crack problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Shivakumar, K. N.

    1990-01-01

    An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies is presented. The details of the method and its implementation are presented for isoparametric elements. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented.

  17. How to accurately bypass damage

    PubMed Central

    Broyde, Suse; Patel, Dinshaw J.

    2016-01-01

    Ultraviolet radiation can cause cancer through DNA damage — specifically, by linking adjacent thymine bases. Crystal structures show how the enzyme DNA polymerase η accurately bypasses such lesions, offering protection. PMID:20577203

  18. FASTSIM2: a second-order accurate frictional rolling contact algorithm

    NASA Astrophysics Data System (ADS)

    Vollebregt, E. A. H.; Wilders, P.

    2011-01-01

    In this paper we consider the frictional (tangential) steady rolling contact problem. We confine ourselves to the simplified theory, instead of using full elastostatic theory, in order to be able to compute results fast, as needed for on-line application in vehicle system dynamics (VSD) simulation packages. The FASTSIM algorithm is the leading technology in this field and is employed in all dominant railway VSD packages in the world. The main contribution of this paper is a new version, "FASTSIM2", of the FASTSIM algorithm, which is second-order accurate. This is relevant for VSD because, with the new algorithm, 16 times fewer grid points are required for sufficiently accurate computations of the contact forces. The approach is based on new insights into the characteristics of the rolling contact problem when using the simplified theory, and on taking precise care of the contact conditions in the numerical integration scheme employed.

  19. Application of boundary integral method to elastic analysis of V-notched beams

    NASA Technical Reports Server (NTRS)

    Rzasnicki, W.; Mendelson, A.; Albers, L. U.

    1973-01-01

    A semidirect boundary integral method, using Airy's stress function and its derivatives in Green's boundary integral formula, is used to obtain an accurate numerical solution for elastic stress and strain fields in V-notched beams in pure bending. The proper choice of nodal spacing on the boundary is shown to be necessary to achieve an accurate stress field in the vicinity of the tip of the notch. Excellent agreement is obtained with the results of the collocation method of solution.

  20. Accurate determination of characteristic relative permeability curves

    NASA Astrophysics Data System (ADS)

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.
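    The standard steady-state interpretation mentioned above computes an effective relative permeability for each phase directly from the multiphase Darcy law. A minimal sketch, in SI units and with an illustrative function name:

```python
def effective_rel_perm(q, mu, L, k, A, dp):
    """Effective relative permeability of one phase from steady-state
    coreflood data, via the multiphase Darcy law
        kr = q * mu * L / (k * A * dp)
    where q is the phase flow rate (m^3/s), mu its viscosity (Pa s),
    L the core length (m), k the absolute permeability (m^2),
    A the cross-sectional area (m^2), and dp the pressure drop (Pa)."""
    return q * mu * L / (k * A * dp)
```

    The paper's point is that kr obtained this way is an effective, rate-dependent quantity when sub-core heterogeneity and outlet effects matter, whereas the input (characteristic) curves are rate-independent.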

  1. HydroSphere: Fully-Integrated, Surface/Subsurface Numerical Model for Watershed Analysis of Hydrologic, Water Quality and Sedimentation Processes

    NASA Astrophysics Data System (ADS)

    Matanga, G. B.; Nelson, K. E.; Sudicky, E.; Therrien, R.; Panday, S.; McLaren, R.; Demarco, D.; Gessford, L.

    2004-12-01

    A distributed, physically based and fully coupled surface/subsurface numerical model, HydroSphere, has recently been developed for watershed analysis of hydrologic and water quality processes. It accounts for flow and transport in laterally two-dimensional surface water, one-dimensional tile drains, and three-dimensional variably saturated subsurface water. One-, two-, and three-dimensional forms of the advection-dispersion equation are used to describe solute transport in the tile drains, surface water, and subsurface water, respectively. Full integration of the surface, tile-drain, and subsurface water regimes is achieved by assembling and solving one system of discrete algebraic equations, such that surface flow rates and water depths, tile-drain flow rates and water depths, subsurface pressure heads, saturations, and velocities, as well as water fluxes between continua, are determined simultaneously. Likewise, the discrete advective-dispersive transport equations for the various continua are solved simultaneously to obtain the solute concentrations in the surface, tile-drain, and subsurface systems. One of the major issues requiring such capabilities for surface/subsurface water interaction, water quality, and erosion/sedimentation is the optimal management of water supply for fish and agricultural irrigation. For example, the USGS has demonstrated that the massive September 2002 fish kill in the Klamath River Basin was caused by low 2002 streamflows and the resulting high water temperatures. The streams in the Klamath River Basin are fed primarily by groundwater. The 2002 streamflows were lower than the flows predicted by the Bureau of Reclamation based on snowpack data alone, neglecting subsurface water data. It is also well known that erosion/sedimentation processes impair fish habitat by affecting spawning gravel areas and upstream migration to spawning areas. 
The models currently being applied in the Klamath River Basin and in all Bureau of Reclamation Regions completely

  2. Accurate ω-ψ Spectral Solution of the Singular Driven Cavity Problem

    NASA Astrophysics Data System (ADS)

    Auteri, F.; Quartapelle, L.; Vigevano, L.

    2002-08-01

    This article provides accurate spectral solutions of the driven cavity problem, calculated in the vorticity-stream function representation without smoothing the corner singularities—a prima facie impossible task. As in a recent benchmark spectral calculation by primitive variables of Botella and Peyret, closed-form contributions of the singular solution for both zero and finite Reynolds numbers are subtracted from the unknown of the problem tackled here numerically in biharmonic form. The method employed is based on a split approach to the vorticity and stream function equations, a Galerkin-Legendre approximation of the problem for the perturbation, and an evaluation of the nonlinear terms by Gauss-Legendre numerical integration. Results computed for Re=0, 100, and 1000 compare well with the benchmark steady solutions provided by the aforementioned collocation-Chebyshev projection method. The validity of the proposed singularity subtraction scheme for computing time-dependent solutions is also established.
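The Gauss-Legendre quadrature used above for the nonlinear terms can be sketched generically. The helper below is an illustrative stand-in (function name and test integrand are ours, not the authors' solver); NumPy's `leggauss` supplies the nodes and weights on the reference interval.

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """Integrate f over [a, b] with an n-point Gauss-Legendre rule."""
    # Nodes and weights on the reference interval [-1, 1].
    x, w = np.polynomial.legendre.leggauss(n)
    # Affine map from [-1, 1] to [a, b], with the Jacobian (b - a) / 2.
    t = 0.5 * (b - a) * x + 0.5 * (b + a)
    return 0.5 * (b - a) * np.sum(w * f(t))

# An n-point rule is exact for polynomials of degree up to 2n - 1,
# which is why it suits Galerkin evaluations of nonlinear terms.
approx = gauss_legendre(np.sin, 0.0, np.pi, 8)  # exact value: 2
```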

  3. Numerical analysis of the asymptotic two-point boundary value solution for N-body trajectories.

    NASA Technical Reports Server (NTRS)

    Lancaster, J. E.; Allemann, R. A.

    1972-01-01

    Previously published asymptotic solutions for lunar and interplanetary trajectories have been modified and combined to formulate a general analytical boundary value solution applicable to a broad class of trajectory problems. In addition, the earlier first-order solutions have been extended to second-order to determine if improved accuracy is possible. Comparisons between the asymptotic solution and numerical integration for several lunar and interplanetary trajectories show that the asymptotic solution is generally quite accurate. Also, since no iterations are required, a solution to the boundary value problem is obtained in a fraction of the time required for numerically integrated solutions.

  4. GO2OGS 1.0: a versatile workflow to integrate complex geological information with fault data into numerical simulation models

    NASA Astrophysics Data System (ADS)

    Fischer, T.; Naumov, D.; Sattler, S.; Kolditz, O.; Walther, M.

    2015-11-01

We offer a versatile workflow to convert geological models built with the Paradigm GOCAD (Geological Object Computer Aided Design) software into the open-source VTU (Visualization Toolkit unstructured grid) format for use in numerical simulation models. Tackling relevant scientific questions or engineering tasks often involves multidisciplinary approaches. Conversion workflows are needed as a means of communication between the diverse tools of the various disciplines. Our approach offers an open-source, platform-independent, robust, and comprehensible method that is potentially useful for a multitude of environmental studies. With two application examples in the Thuringian Syncline, we show how a heterogeneous geological GOCAD model including multiple layers and faults can be used for numerical groundwater flow modeling, in our case employing the OpenGeoSys open-source numerical toolbox for groundwater flow simulations. The presented workflow offers the chance to incorporate increasingly detailed data, utilizing the growing availability of computational power to simulate numerical models.

  5. Determining the Numerical Stability of Quantum Chemistry Algorithms.

    PubMed

    Knizia, Gerald; Li, Wenbin; Simon, Sven; Werner, Hans-Joachim

    2011-08-01

    We present a simple, broadly applicable method for determining the numerical properties of quantum chemistry algorithms. The method deliberately introduces random numerical noise into computations, which is of the same order of magnitude as the floating point precision. Accordingly, repeated runs of an algorithm give slightly different results, which can be analyzed statistically to obtain precise estimates of its numerical stability. This noise is produced by automatic code injection into regular compiler output, so that no substantial programming effort is required, only a recompilation of the affected program sections. The method is applied to investigate: (i) the numerical stability of the three-center Obara-Saika integral evaluation scheme for high angular momenta, (ii) if coupled cluster perturbative triples can be evaluated with single precision arithmetic, (iii) how to implement the density fitting approximation in Møller-Plesset perturbation theory (MP2) most accurately, and (iv) which parts of density fitted MP2 can be safely evaluated with single precision arithmetic. In the integral case, we find a numerical instability in an equation that is used in almost all integral programs. Due to the results of (ii) and (iv), we conjecture that single precision arithmetic can be applied whenever a calculation is done in an orthogonal basis set and excessively long linear sums are avoided. PMID:26606614
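The noise-injection idea can be imitated far more crudely than the paper's compiler-level instrumentation. The sketch below (all names hypothetical) jitters inputs by relative noise on the order of machine epsilon and compares the run-to-run spread of a cancellation-prone one-pass variance formula against a stable two-pass one; the larger statistical spread flags the numerically unstable algorithm.

```python
import random
import statistics

EPS = 2.0 ** -52  # double-precision unit roundoff

def jitter(x):
    """Inject random relative noise of the order of the floating-point
    precision (a crude Python-level stand-in for the paper's automatic
    code injection into compiler output)."""
    return x * (1.0 + EPS * random.uniform(-1.0, 1.0))

def naive_variance(xs):
    # One-pass formula E[x^2] - E[x]^2: prone to catastrophic cancellation.
    n = len(xs)
    s = sum(jitter(x) for x in xs)
    s2 = sum(jitter(x) * jitter(x) for x in xs)
    return s2 / n - (s / n) ** 2

def two_pass_variance(xs):
    # Numerically stable two-pass formula.
    n = len(xs)
    mean = sum(jitter(x) for x in xs) / n
    return sum((jitter(x) - mean) ** 2 for x in xs) / n

data = [1.0e8 + v for v in (4.0, 7.0, 13.0, 16.0)]  # true variance: 22.5
runs_naive = [naive_variance(data) for _ in range(200)]
runs_stable = [two_pass_variance(data) for _ in range(200)]
spread_naive = statistics.stdev(runs_naive)    # large: unstable algorithm
spread_stable = statistics.stdev(runs_stable)  # tiny: stable algorithm
```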

  6. Numerical evaluation of the incomplete airy functions and their application to high frequency scattering and diffraction

    NASA Technical Reports Server (NTRS)

    Constantinides, E. D.; Marhefka, R. J.

    1992-01-01

    The incomplete Airy integrals serve as canonical functions for the uniform ray optical solutions to several high frequency scattering and diffraction problems that involve a class of integrals characterized by two stationary points that are arbitrarily close to one another or to an integration endpoint. Integrals of such analytical properties describe transition region phenomena associated with composite shadow boundaries. An efficient and accurate method for computing the incomplete Airy functions would make the solutions to such problems useful for engineering purposes. Here, a convergent series solution form for the incomplete Airy functions is derived. Asymptotic expansions involving several terms were also developed and serve as large argument approximations. The combination of the series solution form with the asymptotic formulae provides for an efficient and accurate computation of the incomplete Airy functions. Validation of accuracy is accomplished using direct numerical integration data.
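The series-plus-asymptotics strategy can be sketched on a simpler special function than the incomplete Airy integrals: the exponential integral E1, which likewise has a convergent series for small argument and a divergent asymptotic expansion for large argument. The crossover point and term counts below are illustrative tuning choices, not values from the paper.

```python
import math

EULER_GAMMA = 0.5772156649015329

def e1_series(x, terms=60):
    """Convergent power series for E1(x), accurate for small-to-moderate x:
    E1(x) = -gamma - ln x + sum_{n>=1} (-1)^(n+1) x^n / (n * n!)."""
    s = -EULER_GAMMA - math.log(x)
    term = 1.0
    for n in range(1, terms + 1):
        term *= -x / n        # term = (-x)^n / n!
        s -= term / n         # adds (-1)^(n+1) x^n / (n * n!)
    return s

def e1_asymptotic(x, terms=None):
    """Divergent asymptotic expansion of E1, truncated near its smallest
    term, accurate for large x:
    E1(x) ~ (e^-x / x) * sum_{n>=0} (-1)^n n! / x^n."""
    if terms is None:
        terms = max(1, int(x) - 1)  # optimal truncation is near n ~ x
    s, term = 1.0, 1.0
    for n in range(1, terms):
        term *= -n / x
        s += term
    return math.exp(-x) / x * s

def e1(x, cutoff=6.0):
    """Combine the two representations, switching at an (illustrative)
    crossover point."""
    return e1_series(x) if x < cutoff else e1_asymptotic(x)
```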

  7. A RANS/DES Numerical Procedure for Axisymmetric Flows with and without Strong Rotation

    SciTech Connect

    Andrade, A J

    2007-10-30

A RANS/DES numerical procedure with an extended Lax-Wendroff control-volume scheme and turbulence model is described for the accurate simulation of internal/external axisymmetric flow with and without strong rotation. This new procedure is an extension, from Cartesian to cylindrical coordinates, of (1) a second-order accurate multi-grid, control-volume integration scheme, and (2) a k-ω turbulence model. This paper outlines both the axisymmetric corrections to the mentioned numerical schemes and the development of techniques pertaining to numerical dissipation, multi-block connectivity, parallelization, etc. Furthermore, analytical and experimental case studies are presented to demonstrate accuracy and computational efficiency. Notes are also made on the numerical stability of highly rotational flows.

  8. Explicit numerical solutions of a microbial survival model under nonisothermal conditions.

    PubMed

    Zhu, Si; Chen, Guibing

    2016-03-01

Differential equations describing the original and modified Geeraerd models were each simplified into an explicit equation in which the integral of the specific inactivation rate with respect to time is numerically approximated using Simpson's rule. The explicit numerical solutions were then used to simulate microbial survival curves and to fit nonisothermal survival data for identifying model parameters in Microsoft Excel. The results showed that the explicit numerical solutions provide an easy way to accurately simulate microbial survival and estimate model parameters from nonisothermal survival data using the Geeraerd models. PMID:27004117
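The core numerical device, composite Simpson's rule applied to the time integral of a temperature-dependent inactivation rate, can be sketched as follows. The heating profile and Bigelow-type rate here are hypothetical stand-ins for illustration, not the Geeraerd model itself.

```python
import math

def simpson(f, a, b, n=100):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    if n % 2:
        n += 1
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

# Hypothetical nonisothermal temperature profile T(t), degrees C.
def temperature(t):
    return 60.0 + 0.5 * t  # linear heating

# Hypothetical Bigelow-type first-order rate: z = 10 C, Tref = 70 C.
def k(t):
    D_ref = 5.0  # D-value (minutes) at the reference temperature
    return math.log(10) / D_ref * 10 ** ((temperature(t) - 70.0) / 10.0)

def log10_survival(t):
    """For dN/dt = -k(t) N:  log10(N/N0) = -(1/ln 10) * integral of k."""
    return -simpson(k, 0.0, t, 200) / math.log(10)
```

For this rate the integral has a closed form, so the Simpson approximation can be checked directly against it.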

  9. Zero initial partial derivatives of satellite orbits with respect to force parameters nullify the mathematical basis of the numerical integration method for the determination of standard gravity models from space geodetic measurements

    NASA Astrophysics Data System (ADS)

    Xu, Peiliang

    2015-04-01

    Satellite orbits have been routinely used to produce models of the Earth's gravity field. The numerical integration method is most widely used by almost all major institutions to determine standard gravity models from space geodetic measurements. As a basic component of the method, the partial derivatives of a satellite orbit with respect to the force parameters to be determined, namely, the unknown harmonic coefficients of the gravitational model, have been first computed by setting the initial values of partial derivatives to zero. In this talk, we first design some simple mathematical examples to show that setting the initial values of partial derivatives to zero is generally erroneous mathematically. We then prove that it is prohibited physically. In other words, setting the initial values of partial derivatives to zero violates the physics of motion of celestial bodies. To conclude, the numerical integration method, as is widely used today by major institutions to produce standard satellite gravity models, is simply incorrect mathematically. As a direct consequence, further work is required to confirm whether the numerical integration method can still be used as a mathematical foundation to produce standard satellite gravity models. More details can be found in Xu (2009, Sci China Ser D-Earth Sci, 52, 562-566).

  10. Progress in fast, accurate multi-scale climate simulations

    DOE PAGESBeta

    Collins, W. D.; Johansen, H.; Evans, K. J.; Woodward, C. S.; Caldwell, P. M.

    2015-06-01

We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  12. Accurate analysis of multicomponent fuel spray evaporation in turbulent flow

    NASA Astrophysics Data System (ADS)

    Rauch, Bastian; Calabria, Raffaela; Chiariello, Fabio; Le Clercq, Patrick; Massoli, Patrizio; Rachner, Michael

    2012-04-01

The aim of this paper is to perform an accurate analysis of the evaporation of single-component and binary-mixture fuel sprays in a hot, weakly turbulent pipe flow by means of experimental measurement and numerical simulation. This gives a deeper insight into the relationship between fuel composition and spray evaporation. The turbulence intensity in the test section is equal to 10%, and the integral length scale is three orders of magnitude larger than the droplet size, while the turbulence microscale (Kolmogorov scale) is of the same order as the droplet diameter. The spray produced by means of a calibrated droplet generator was injected into an electrically preheated gas flow. N-nonane, isopropanol, and their mixtures were used in the tests. The generalized scattering imaging technique was applied to simultaneously determine size, velocity, and spatial location of the droplets carried by the turbulent flow in the quartz tube. The spray evaporation was computed using a Lagrangian particle solver coupled to a gas-phase solver. Computations of spray mean diameter and droplet size distributions at different locations along the pipe compare very favorably with the measurement results. This combined research tool enabled further investigation of the influencing parameters upon the evaporation process, such as turbulence, droplet internal mixing, and liquid-phase thermophysical properties.

  13. Progress in Fast, Accurate Multi-scale Climate Simulations

    SciTech Connect

    Collins, William D; Johansen, Hans; Evans, Katherine J; Woodward, Carol S.; Caldwell, Peter

    2015-01-01

We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that these approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  14. Fast and accurate Coulomb calculation with Gaussian functions.

    PubMed

    Füsti-Molnár, László; Kong, Jing

    2005-02-15

Coulomb interaction is one of the major time-consuming components in a density functional theory (DFT) calculation. In the last decade, dramatic progress has been made to improve the efficiency of Coulomb calculation, including the continuous fast multipole method (CFMM) and the J-engine method, both first developed inside Q-Chem. The most recent development is the advent of the Fourier transform Coulomb method developed by Füsti-Molnár and Pulay, and an improved version of the method has recently been implemented in Q-Chem. It replaces the least efficient part of the previous Coulomb methods with an accurate numerical integration scheme that scales as O(N^2) instead of O(N^4) with the basis size. The result is a much smaller slope in the linear scaling with respect to the molecular size, and we demonstrate through a series of benchmark calculations that it speeds up the calculation of Coulomb energy severalfold over the efficient existing code, i.e., the combination of CFMM and the J-engine, without loss of accuracy. Furthermore, we show that it is complementary to the latter, and together the three methods offer the best performance for the Coulomb part of DFT calculations, making DFT calculations affordable for very large systems involving thousands of basis functions. PMID:15743222

  15. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2003-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  16. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2002-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  17. Accurate momentum transfer cross section for the attractive Yukawa potential

    SciTech Connect

    Khrapak, S. A.

    2014-04-15

An accurate expression for the momentum transfer cross section for the attractive Yukawa potential is proposed. This simple analytic expression agrees with numerical results to within ±2% in the regime relevant to ion-particle collisions in complex (dusty) plasmas.

  18. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
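The median function mentioned above has a compact expression through minmod, which is the usual route to coding it. The sketch below shows that identity plus a generic minmod-limited slope reconstruction; this is a simplified illustration of a monotonicity-preserving MUSCL step, not Huynh's exact constraint.

```python
def minmod(a, b):
    """Zero if a and b differ in sign; otherwise the one of smaller
    magnitude. The basic building block of monotonicity constraints."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def median(a, b, c):
    """Middle value of the three arguments, via the identity
    median(a, b, c) = a + minmod(b - a, c - a)."""
    return a + minmod(b - a, c - a)

def limited_slopes(u):
    """Monotonicity-preserving cell slopes for a MUSCL-type piecewise
    linear reconstruction: the slope vanishes at local extrema, so the
    reconstruction introduces no new extrema."""
    s = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        s[i] = minmod(u[i] - u[i - 1], u[i + 1] - u[i])
    return s
```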

  19. Evaluating Definite Integrals on a Computer Theory and Practice. Applications of Numerical Analysis. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Unit 432.

    ERIC Educational Resources Information Center

    Wagon, Stanley

    This document explores two methods of obtaining numbers that are approximations of certain definite integrals. The methods covered are the Trapezoidal Rule and Romberg's method. Since the formulas used involve considerable calculation, a computer is normally used. Some of the problems and pitfalls of computer implementation, such as roundoff…
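Romberg's method, the second technique the unit covers, is the trapezoidal rule accelerated by Richardson extrapolation: each halving of the step reuses previous function evaluations, and each extrapolation column cancels the next error term. A minimal sketch:

```python
def romberg(f, a, b, levels=6):
    """Romberg integration of f over [a, b]: trapezoidal estimates with
    successively halved steps, combined by Richardson extrapolation."""
    R = [[0.0] * levels for _ in range(levels)]
    h = b - a
    R[0][0] = 0.5 * h * (f(a) + f(b))
    for i in range(1, levels):
        h *= 0.5
        # Refined trapezoid: reuse R[i-1][0], add only the new midpoints.
        new_pts = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        R[i][0] = 0.5 * R[i - 1][0] + h * new_pts
        # Richardson extrapolation across the row cancels h^2, h^4, ... terms.
        for j in range(1, i + 1):
            R[i][j] = R[i][j - 1] + (R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
    return R[levels - 1][levels - 1]

# Classic check: integral of 4/(1+x^2) over [0, 1] equals pi.
approx_pi_area = romberg(lambda x: 4.0 / (1.0 + x * x), 0.0, 1.0)
```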

  20. Toward Accurate and Quantitative Comparative Metagenomics.

    PubMed

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  1. How Accurately can we Calculate Thermal Systems?

    SciTech Connect

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors.

  2. Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2009-01-01

    Recent advances in numerical relativity have fueled an explosion of progress in understanding the predictions of Einstein's theory of gravity, General Relativity, for the strong field dynamics, the gravitational radiation wave forms, and consequently the state of the remnant produced from the merger of compact binary objects. I will review recent results from the field, focusing on mergers of two black holes.

  3. Predict amine solution properties accurately

    SciTech Connect

    Cheng, S.; Meisen, A.; Chakma, A.

    1996-02-01

Improved process design begins with using accurate physical property data. Especially in the preliminary design stage, physical property data such as density, viscosity, thermal conductivity and specific heat can affect the overall performance of absorbers, heat exchangers, reboilers and pumps. These properties can also influence temperature profiles in heat transfer equipment and thus control or affect the rate of amine breakdown. Aqueous-amine solution physical property data are available in graphical form, but that form is not convenient for computer-based calculations. The equations developed here provide improved correlations of the derived physical property estimates with published data. Expressions are given which can be used to estimate physical properties of methyldiethanolamine (MDEA), monoethanolamine (MEA) and diglycolamine (DGA) solutions.

  4. Accurate thickness measurement of graphene

    NASA Astrophysics Data System (ADS)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  5. Numerical Methods for Stochastic Partial Differential Equations

    SciTech Connect

    Sharp, D.H.; Habib, S.; Mineev, M.B.

    1999-07-08

This is the final report of a Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). The objectives of this project were (1) the development of methods for understanding and control of spacetime discretization errors in nonlinear stochastic partial differential equations, and (2) the development of new and improved practical numerical methods for the solutions of these equations. The authors have succeeded in establishing two methods for error control: the functional Fokker-Planck equation for calculating the time discretization error and the transfer integral method for calculating the spatial discretization error. In addition, they have developed a new second-order stochastic algorithm for multiplicative noise, applicable to the case of colored noises, which requires only a single random-sequence generation per time step. All of these results have been verified via high-resolution numerical simulations and have been successfully applied to physical test cases. They have also made substantial progress on a longstanding problem in the dynamics of unstable fluid interfaces in porous media. This work has led to highly accurate quasi-analytic solutions of idealized versions of this problem. These may be of use in benchmarking numerical solutions of the full stochastic PDEs that govern real-world problems.

  6. Numerical modeling and environmental isotope methods in integrated mine-water management: a case study from the Witwatersrand basin, South Africa

    NASA Astrophysics Data System (ADS)

    Mengistu, Haile; Tessema, Abera; Abiye, Tamiru; Demlie, Molla; Lin, Haili

    2015-05-01

    Improved groundwater flow conceptualization was achieved using environmental stable isotope (ESI) and hydrochemical information to complete a numerical groundwater flow model with reasonable certainty. The study aimed to assess the source of excess water at a pumping shaft located near the town of Stilfontein, North West Province, South Africa. The results indicate that the water intercepted at Margaret Shaft comes largely from seepage of a nearby mine tailings dam (Dam 5) and from the upper dolomite aquifer. If pumping at the shaft continues at the current rate and Dam 5 is decommissioned, neighbouring shallow farm boreholes would dry up within approximately 10 years. Stable isotope data of shaft water indicate that up to 50 % of the pumped water from Margaret Shaft is recirculated, mainly from Dam 5. The results are supplemented by tritium data, demonstrating that recent recharge is taking place through open fractures as well as man-made underground workings, whereas hydrochemical data of fissure water samples from roughly 950 m below ground level exhibit mine-water signatures. Pumping at the shaft, which captures shallow groundwater as well as seepage from surface dams, is a highly recommended option for preventing flooding of downstream mines. The results of this research highlight the importance of additional methods (ESI and hydrochemical analyses) to improve flow conceptualization and numerical modelling.

  7. Numerical computation of 2D Sommerfeld integrals—a novel asymptotic extraction technique

    NASA Astrophysics Data System (ADS)

    Dvorak, Steven L.; Kuester, Edward F.

    1992-02-01

    The accurate and efficient computation of the elements in the impedance matrix is a crucial step in the application of Galerkin's method to the analysis of planar structures. As was demonstrated in a previous paper, it is possible to decompose the angular integral, in the polar representation for the 2D Sommerfeld integrals, in terms of incomplete Lipschitz-Hankel integrals (ILHIs) when piecewise sinusoidal basis functions are employed. Since Bessel series expansions can be used to compute these ILHIs, a numerical integration of the inner angular integral is not required. This technique provides an efficient method for the computation of the inner angular integral; however, the outer semi-infinite integral still converges very slowly when a real axis integration is applied. Therefore, it is very difficult to compute the impedance elements accurately and efficiently. In this paper, it is shown that this problem can be overcome by using the ILHI representation for the angular integral to develop a novel asymptotic extraction technique for the outer semi-infinite integral. The usefulness of this asymptotic extraction technique is demonstrated by applying it to the analysis of a printed strip dipole antenna in a layered medium.
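
    The paper's ILHI-specific formulas are not reproduced here, but the generic asymptotic-extraction idea it applies to the semi-infinite integral can be sketched on a simple model problem (the integrand below is an illustrative assumption): subtract the slowly decaying asymptotic part of the integrand, integrate that part analytically, and integrate the rapidly decaying remainder numerically.

```python
import numpy as np

# Hedged sketch of asymptotic extraction on a model integral:
#   I = int_1^inf dx / (x * sqrt(x^2 + 1)) = ln(1 + sqrt(2)).
def trap(y, x):
    # composite trapezoidal rule, kept explicit for self-containment
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

f = lambda x: 1.0 / (x * np.sqrt(x * x + 1.0))
f_asym = lambda x: 1.0 / x**2           # leading behavior as x -> inf
I_asym = 1.0                            # int_1^inf x^-2 dx, done analytically

x = np.linspace(1.0, 50.0, 200001)
I_naive = trap(f(x), x)                 # plain truncation: O(1/x) tail error
I = I_asym + trap(f(x) - f_asym(x), x)  # remainder decays like x^-4
exact = np.log(1.0 + np.sqrt(2.0))
```

    Truncating the raw integrand at x = 50 leaves a tail error near 0.02, while the extracted remainder converges to several more digits on the same interval, which is the point of the technique.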

  8. Accurate and rapid micromixer for integrated microfluidic devices

    SciTech Connect

    Van Dam, R. Michael; Liu, Kan; Shen, Kwang -Fu Clifton; Tseng, Hsian -Rong

    2015-09-22

    The invention may provide a microfluidic mixer having a droplet generator and a droplet mixer in selective fluid connection with the droplet generator. The droplet generator comprises first and second fluid chambers that are structured to be filled with respective first and second fluids that can each be held in isolation for a selectable period of time. The first and second fluid chambers are further structured to be reconfigured into a single combined chamber to allow the first and second fluids in the first and second fluid chambers to come into fluid contact with each other in the combined chamber for a selectable period of time prior to being brought into the droplet mixer.

  9. On the reliability of gravitational N-body integrations

    NASA Technical Reports Server (NTRS)

    Quinlan, Gerald D.; Tremaine, Scott

    1992-01-01

    In a self-gravitating system of point particles such as a spherical star cluster, small disturbances to an orbit grow exponentially on a time-scale comparable with the crossing time. The results of N-body integrations are therefore extremely sensitive to numerical errors: in practice it is almost impossible to follow orbits of individual particles accurately for more than a few crossing times. We demonstrate that numerical orbits in the gravitational N-body problem are often shadowed by true orbits for many crossing times. This result enhances our confidence in the use of N-body integrations to study the evolution of stellar systems.
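
    The shadowing analysis itself is beyond a short sketch, but the kind of time-reversible integration on which N-body reliability arguments rest can be illustrated with a leapfrog (kick-drift-kick) scheme on a two-body Kepler orbit (units with G*M = 1; step size and duration are illustrative assumptions).

```python
import numpy as np

# Hedged sketch: leapfrog integration of a circular Kepler orbit.
# Being symplectic and time-reversible, leapfrog keeps the energy error
# bounded (oscillating at O(dt^2)) rather than secularly growing.
def accel(r):
    return -r / np.linalg.norm(r) ** 3

def energy(r, v):
    return 0.5 * v @ v - 1.0 / np.linalg.norm(r)

r = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])            # circular orbit, period 2*pi
dt, n_steps = 0.01, 10000           # roughly 16 orbital periods

E0 = energy(r, v)
for _ in range(n_steps):
    v = v + 0.5 * dt * accel(r)     # kick
    r = r + dt * v                  # drift
    v = v + 0.5 * dt * accel(r)     # kick
drift = abs(energy(r, v) - E0)
```

    Bounded energy error over many crossing times is a necessary (though not sufficient) condition for the numerical orbit to shadow a true orbit.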

  10. The numerical analysis of a turbulent compressible jet

    NASA Astrophysics Data System (ADS)

    Debonis, James Raymond

    2000-10-01

    A numerical method to simulate high Reynolds number jet flows was formulated and applied to gain a better understanding of the flow physics. Large-eddy simulation was chosen as the most promising approach to model the turbulent structures due to its compromise between accuracy and computational expense. The filtered Navier-Stokes equations were developed including a total energy form of the energy equation. Sub-grid scale models for the momentum and energy equations were adapted from compressible forms of Smagorinsky's original model. The effect of using disparate temporal and spatial accuracy in a numerical scheme was discovered through one-dimensional model problems and a new uniformly fourth-order accurate numerical method was developed. Results from two- and three-dimensional validation exercises show that the code accurately reproduces both viscous and inviscid flows. Numerous axisymmetric jet simulations were performed to investigate the effect of grid resolution, numerical scheme, exit boundary conditions and sub-grid scale modeling on the solution and the results were used to guide the three-dimensional calculations. Three-dimensional calculations of a Mach 1.4 jet showed that this LES simulation accurately captures the physics of the turbulent flow. The agreement with experimental data is relatively good and much better than results in the current literature. Turbulent intensities indicate that the turbulent structures at this level of modeling are not isotropic and this information could lend itself to the development of improved sub-grid scale models for LES and turbulence models for RANS simulations. A two-point correlation technique was used to quantify the turbulent structures. Two-point space correlations were used to obtain a measure of the integral length scale, which proved to be approximately ½Dj. Two-point space-time correlations were used to obtain the convection velocity for the turbulent structures. This velocity ranged from 0.57 to 0.71 Uj.
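
    The two-point correlation post-processing described above can be sketched on a synthetic signal (the AR(1) series and its parameters are illustrative assumptions, not the thesis data): compute the normalized correlation against spatial separation, then sum it to estimate an integral length scale.

```python
import numpy as np

# Hedged sketch: integral length scale from a two-point correlation.
# An AR(1) series with coefficient phi has a known one-sided integral
# scale of roughly 1/(1 - phi) grid spacings, here about 10.
rng = np.random.default_rng(1)
n, phi = 200000, 0.9
u = np.zeros(n)
for i in range(1, n):
    u[i] = phi * u[i - 1] + rng.normal()
u -= u.mean()

max_lag = 200
# Normalized two-point correlation rho(k) = <u(x) u(x + k)> / <u^2>.
rho = np.array([u[: n - k] @ u[k:] for k in range(max_lag)]) / (u @ u)
L_int = rho.sum()          # integral length scale, in grid-spacing units
```

    In practice the sum is often truncated at the first zero crossing of rho; the synthetic series here decays monotonically, so a fixed lag window suffices.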

  11. An integrated strategy for rapid and accurate determination of free and cell-bound microcystins and related peptides in natural blooms by liquid chromatography-electrospray-high resolution mass spectrometry and matrix-assisted laser desorption/ionization time-of-flight/time-of-flight mass spectrometry using both positive and negative ionization modes.

    PubMed

    Flores, Cintia; Caixach, Josep

    2015-08-14

    An integrated high resolution mass spectrometry (HRMS) strategy has been developed for rapid and accurate determination of free and cell-bound microcystins (MCs) and related peptides in water blooms. The natural samples (water and algae) were filtered for independent analysis of aqueous and sestonic fractions. These fractions were analyzed by MALDI-TOF/TOF-MS and ESI-Orbitrap-HCD-MS. MALDI, ESI and the study of fragmentation sequences provided crucial structural information. The potential of combining positive and negative ionization modes, full-scan and fragmentation acquisition modes (TOF/TOF and HCD), and the high resolution and accurate mass of HRMS was investigated in order to allow unequivocal determination of MCs. Moreover, reliable quantitation was possible by HRMS. This combination helped to decrease the probability of false positives and negatives, as an alternative to commonly used LC-ESI-MS/MS methods. The analysis was non-targeted and therefore covered all MC analogs concurrently without any pre-selection of target MCs. Furthermore, archived data were subjected to retrospective "post-targeted" analysis, and a screening of the samples for other potential toxins and related peptides, such as anabaenopeptins, was carried out. Finally, the suggested MS protocol and identification tools were applied to the analysis of characteristic water blooms from Spanish reservoirs. PMID:26141269

  12. Tool for the Integrated Dynamic Numerical Propulsion System Simulation (NPSS)/Turbine Engine Closed-Loop Transient Analysis (TTECTrA) User's Guide

    NASA Technical Reports Server (NTRS)

    Chin, Jeffrey C.; Csank, Jeffrey T.

    2016-01-01

    The Tool for Turbine Engine Closed-Loop Transient Analysis (TTECTrA ver2) is a control design tool that enables preliminary estimation of transient performance for models without requiring a full nonlinear controller to be designed. The program is compatible with subsonic engine models implemented in the MATLAB/Simulink (The MathWorks, Inc.) environment and Numerical Propulsion System Simulation (NPSS) framework. At a specified flight condition, TTECTrA will design a closed-loop controller meeting user-defined requirements in a semi or fully automated fashion. Multiple specifications may be provided, in which case TTECTrA will design one controller for each, producing a collection of controllers in a single run. Each resulting controller contains a setpoint map, a schedule of setpoint controller gains, and limiters, all contributing to transient characteristics. The goal of the program is to provide steady-state engine designers with more immediate feedback on the transient engine performance earlier in the design cycle.

  13. Accurate D-bar Reconstructions of Conductivity Images Based on a Method of Moment with Sinc Basis.

    PubMed

    Abbasi, Mahdi

    2014-01-01

    The planar D-bar integral equation is one of the inverse scattering solution methods for complex problems, including the inverse conductivity problem arising in applications such as electrical impedance tomography (EIT). Recently, two different methodologies have been considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves a high computational burden and the other suffers from a low convergence rate (CR). In this paper, a novel high-speed moment method based on the sinc basis is introduced to solve the two-dimensional D-bar integral equation. In this method, all functions within the D-bar integral equation are first expanded using the sinc basis functions. Then, the orthogonal properties of their products dissolve the integral operator of the D-bar equation and yield a discrete convolution equation. That is, the new moment method leads to the equation's solution without direct computation of the D-bar integral. The resulting discrete convolution equation may be adapted to a suitable structure to be solved using the fast Fourier transform. This allows us to reduce the computational complexity to as low as O(N² log N). Simulation results on solving D-bar equations arising in the EIT problem show that the proposed method is accurate with an ultra-linear CR. PMID:24696808
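
    The key structural fact the abstract relies on, that a discrete convolution equation diagonalizes under the FFT, can be sketched on a generic periodic model problem (the kernel below is an illustrative assumption, not the D-bar kernel): solving u + k∗u = f costs O(N log N) instead of the O(N²) of a dense solve.

```python
import numpy as np

# Hedged sketch: solve the periodic discrete convolution equation
# u + (k * u) = f by diagonalizing the convolution with the FFT.
rng = np.random.default_rng(2)
N = 256
k = np.exp(-np.arange(N) / 8.0)        # illustrative convolution kernel
k = k / (2.0 * k.sum())                # scaled so ||K|| < 1 (well posed)
f = rng.normal(size=N)

# In Fourier space the operator (I + K*) becomes 1 + K_hat, pointwise.
K_hat = np.fft.fft(k)
u = np.real(np.fft.ifft(np.fft.fft(f) / (1.0 + K_hat)))

# Independent O(N^2) check of the residual via direct circular convolution.
conv = np.array([k @ u[(i - np.arange(N)) % N] for i in range(N)])
residual = np.max(np.abs(u + conv - f))
```

    The same diagonalization argument is what lets a sinc-basis discrete convolution equation reach O(N² log N) in 2D.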

  14. Accurate D-bar Reconstructions of Conductivity Images Based on a Method of Moment with Sinc Basis

    PubMed Central

    Abbasi, Mahdi

    2014-01-01

    The planar D-bar integral equation is one of the inverse scattering solution methods for complex problems, including the inverse conductivity problem arising in applications such as electrical impedance tomography (EIT). Recently, two different methodologies have been considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves a high computational burden and the other suffers from a low convergence rate (CR). In this paper, a novel high-speed moment method based on the sinc basis is introduced to solve the two-dimensional D-bar integral equation. In this method, all functions within the D-bar integral equation are first expanded using the sinc basis functions. Then, the orthogonal properties of their products dissolve the integral operator of the D-bar equation and yield a discrete convolution equation. That is, the new moment method leads to the equation's solution without direct computation of the D-bar integral. The resulting discrete convolution equation may be adapted to a suitable structure to be solved using the fast Fourier transform. This allows us to reduce the computational complexity to as low as O(N² log N). Simulation results on solving D-bar equations arising in the EIT problem show that the proposed method is accurate with an ultra-linear CR. PMID:24696808

  15. Numerical Analysis of Integral Characteristics for the Condenser Setups of Independent Power-Supply Sources with the Closed-Looped Thermodynamic Cycle

    NASA Astrophysics Data System (ADS)

    Vysokomorny, Vladimir S.; Vysokomornaya, Vladimir S.

    2016-02-01

    The mathematical model of heat and mass transfer processes with phase transition is developed. It allows analysis of the integral characteristics of the condenser setup of an independent power-supply plant with the organic Rankine cycle. Different kinds of organic liquids can be used as the coolant and working substance. The temperatures of the working liquid at the condenser outlet under different values of outside air temperature are determined. The comparative analysis of the utilization efficiency of different cooling systems and organic coolants is carried out.

  16. Accurate ab Initio Spin Densities

    PubMed Central

    2012-01-01

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys.2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput.2011, 7, 2740]. PMID:22707921

  17. An Evaluation of the "Treatment Integrity Planning Protocol" and Two Schedules of Treatment Integrity Self-Report: Impact on Implementation and Report Accuracy

    ERIC Educational Resources Information Center

    Hagermoser Sanetti, Lisa M.; Kratochwill, Thomas R.

    2011-01-01

    The evidence-based practice movement has focused on identifying, disseminating, and promoting the adoption of evidence-based interventions. Despite advances in this movement, numerous barriers, such as the lack of treatment integrity assessment methods, remain as challenges in implementation. Accurate teacher self-report could be an efficient…

  18. High-order accurate multi-phase simulations: building blocks and what's tricky about them

    NASA Astrophysics Data System (ADS)

    Kummer, Florian

    2015-11-01

    We are going to present a high-order numerical method for multi-phase flow problems, which employs a sharp interface representation by a level-set and an extended discontinuous Galerkin (XDG) discretization for the flow properties. The shape of the XDG basis functions is dynamically adapted to the position of the fluid interface, so that the spatial approximation space can represent jumps in pressure and kinks in velocity accurately. By this approach, the `hp-convergence' property of the classical discontinuous Galerkin (DG) method can be preserved for the low-regularity, discontinuous solutions, such as those appearing in multi-phase flows. Within the past years, several building blocks of such a method were presented: this includes numerical integration on cut-cells, the spatial discretization by the XDG method, precise evaluation of curvature and level-set algorithms tailored to the special requirements of XDG-methods. The presentation covers a short review of these building blocks and their integration into a full multi-phase solver. A special emphasis is put on the discussion of the several pitfalls one may encounter in the formulation of such a solver. German Research Foundation.

  19. Full vectorial simulation of multilayer anisotropic waveguides with an accurate and automated finite-element program.

    PubMed

    Zhao, A P; Cvetkovic, S R

    1994-08-20

    An efficient, accurate, and automated vectorial finite-element software package (named WAVEGIDE), which is implemented within a PDE/Protran problem-solving environment, has been extended to general multilayer anisotropic waveguides. With our system, through an interactive question-and-answer session, the problem can be simply defined with high-level PDE/Protran commands. The problem can then be solved easily and quickly by the main processor within this intelligent environment. In particular, in our system the eigenvalue of waveguide problems may be either a propagation constant (β) or an operated light frequency (F). Furthermore, the cutoff frequencies of propagation modes in waveguides can be calculated. As an application of this approach, numerical results for both scalar and hybrid modes in multilayer anisotropic waveguides are presented and are also compared with results obtained with the domain-integral method. These results clearly illustrate the unique flexibility, accuracy, and ease of use of the WAVEGIDE program. PMID:20935964

  20. Geometrically invariant and high capacity image watermarking scheme using accurate radial transform

    NASA Astrophysics Data System (ADS)

    Singh, Chandan; Ranade, Sukhjeet K.

    2013-12-01

    Angular radial transform (ART) is a region based descriptor and possesses many attractive features such as rotation invariance, low computational complexity and resilience to noise which make them more suitable for invariant image watermarking than that of many transform domain based image watermarking techniques. In this paper, we introduce ART for fast and geometrically invariant image watermarking scheme with high embedding capacity. We also develop an accurate and fast framework for the computation of ART coefficients based on Gaussian quadrature numerical integration, 8-way symmetry/anti-symmetry properties and recursive relations for the calculation of sinusoidal kernel functions. ART coefficients so computed are then used for embedding the binary watermark using dither modulation. Experimental studies reveal that the proposed watermarking scheme not only provides better robustness against geometric transformations and other signal processing distortions, but also has superior advantages over the existing ones in terms of embedding capacity, speed and visual imperceptibility.
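
    The Gaussian-quadrature step the abstract relies on for accurate ART coefficients can be sketched in isolation (the integrand below is an illustrative polynomial, not an actual ART kernel): an n-point Gauss-Legendre rule integrates polynomials of degree up to 2n-1 exactly.

```python
import numpy as np

# Hedged sketch: Gauss-Legendre quadrature of the kind used for the
# radial integration of transform coefficients. A 4-point rule is exact
# for polynomials of degree <= 7; here we integrate x^7 on [0, 1].
nodes, weights = np.polynomial.legendre.leggauss(4)

# Map the rule from the reference interval [-1, 1] to [0, 1].
x = 0.5 * (nodes + 1.0)
w = 0.5 * weights

approx = float(np.sum(w * x**7))
exact = 1.0 / 8.0                      # int_0^1 x^7 dx
```

    For the oscillatory sinusoidal kernels of the ART, more nodes (or composite rules) are needed, but the mapping and weighting pattern is the same.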

  1. Exploring accurate Poisson–Boltzmann methods for biomolecular simulations

    PubMed Central

    Wang, Changhao; Wang, Jun; Cai, Qin; Li, Zhilin; Zhao, Hong-Kai; Luo, Ray

    2013-01-01

    Accurate and efficient treatment of electrostatics is a crucial step in computational analyses of biomolecular structures and dynamics. In this study, we have explored a second-order finite-difference numerical method to solve the widely used Poisson–Boltzmann equation for electrostatic analyses of realistic biomolecules. The so-called immersed interface method was first validated and found to be consistent with the classical weighted harmonic averaging method for a diversified set of test biomolecules. The numerical accuracy and convergence behaviors of the new method were next analyzed in its computation of numerical reaction field grid potentials, energies, and atomic solvation forces. Overall, convergence behaviors similar to those of the classical method were observed. Interestingly, the new method was found to deliver more accurate and better-converged grid potentials than the classical method on or near the molecular surface, though the numerical advantage of the new method is reduced when grid potentials are extrapolated to the molecular surface. Our exploratory study indicates the need for further improving interpolation/extrapolation schemes in addition to the developments of higher-order numerical methods that have attracted most attention in the field. PMID:24443709
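
    The second-order convergence claim at the heart of such finite-difference solvers can be illustrated on a 1D toy problem (this is a generic sketch, not the paper's 3D Poisson–Boltzmann solver): a centered difference solve of -u'' = f checked against an exact solution, where halving the grid spacing should cut the error by about a factor of four.

```python
import numpy as np

# Hedged sketch: second-order finite-difference Poisson solve on (0, 1)
# with u(0) = u(1) = 0 and manufactured solution u = sin(pi*x).
def solve_poisson(n):
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    f = np.pi**2 * np.sin(np.pi * x)          # -u'' = f  =>  u = sin(pi*x)
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x)))

err_coarse = solve_poisson(40)
err_fine = solve_poisson(80)
ratio = err_coarse / err_fine           # ~4 confirms O(h^2) convergence
```

    The immersed-interface machinery in the paper exists precisely to preserve this O(h²) behavior across the dielectric jump at the molecular surface.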

  2. Fast and Provably Accurate Bilateral Filtering

    NASA Astrophysics Data System (ADS)

    Chaudhury, Kunal N.; Dabhade, Swapnil D.

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires $O(S)$ operations per pixel, where $S$ is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to $O(1)$ per pixel for any arbitrary $S$. The algorithm has a simple implementation involving $N+1$ spatial filterings, where $N$ is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order $N$ required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with state-of-the-art methods in terms of speed and accuracy.
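
    The $O(S)$-per-pixel baseline that the paper accelerates is the direct bilateral filter. A 1D sketch (kernel widths and radius are illustrative assumptions) shows the two kernels at work: the spatial kernel smooths, while the range kernel suppresses averaging across intensity jumps, preserving edges.

```python
import numpy as np

# Hedged sketch: direct (brute-force) bilateral filter in 1D with a
# Gaussian spatial kernel and a Gaussian range kernel.
def bilateral_1d(signal, sigma_s=3.0, sigma_r=0.2, radius=9):
    out = np.empty_like(signal)
    offsets = np.arange(-radius, radius + 1)
    spatial = np.exp(-offsets**2 / (2.0 * sigma_s**2))
    for i in range(len(signal)):
        idx = np.clip(i + offsets, 0, len(signal) - 1)
        # Range weights fall off with intensity difference, so samples
        # across an edge contribute almost nothing to the average.
        range_w = np.exp(-(signal[idx] - signal[i])**2 / (2.0 * sigma_r**2))
        w = spatial * range_w
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out

step = np.concatenate([np.zeros(50), np.ones(50)])
smoothed = bilateral_1d(step)           # a clean step survives the filter
```

    On a noiseless step the output is nearly identical to the input, which is exactly the edge-preserving behavior a plain Gaussian blur lacks; the paper's contribution is making this computation $O(1)$ per pixel.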

  3. Fast and Provably Accurate Bilateral Filtering.

    PubMed

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722

  4. Accurate adiabatic correction in the hydrogen molecule

    SciTech Connect

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H₂, HD, HT, D₂, DT, and T₂ has been determined. For the ground state of H₂ the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  5. Accurate adiabatic correction in the hydrogen molecule

    NASA Astrophysics Data System (ADS)

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-01

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H₂, HD, HT, D₂, DT, and T₂ has been determined. For the ground state of H₂ the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  6. Integration of numerical models and geoinformatic techniques in the delimitation of a protection zone for the MGB 319 complex multi-aquifer system in southwest Poland

    NASA Astrophysics Data System (ADS)

    Gurwin, Jacek

    2015-09-01

    The study area, situated near the city of Wrocław in southwest Poland, is part of the hydrogeological system of the Quaternary/Neogene MGB 319, inclusive of a buried valley of high water potential, named the Bogdaszowice structure. This structure is an alternative source of water supply for the Wrocław city area. Numerical modelling is the most effective tool in establishing a groundwater protection strategy for Major Groundwater Basins (MGBs) in complex aquifer systems. In the present study, the first step was to assess the hydrodynamic conditions of the Radakowice groundwater intake by analyses of head contours, pathlines, average flow times and capture zones of particular wells. Subsequently, these results were used in combination with other data and compiled as GIS layers. The spatial distribution of hydraulic conductivity was based on the lithology of surface sediments. Other data sets such as the thickness of the unsaturated zone, average soil moisture and infiltration rate were taken either directly from the model or were calculated. Based on the input data obtained, vertical flow time calculations for every model cell were made. The final outcome is a map of the protection zone for the aquifer system of the MGB 319.

  7. Numerical investigation of tail buffet on F-18 aircraft

    NASA Technical Reports Server (NTRS)

    Rizk, Yehia M.; Guruswamy, Guru P.; Gee, Ken

    1992-01-01

    Numerical investigation of vortex induced tail buffet is conducted on the F-18 aircraft at high angles of attack. The Reynolds-averaged Navier-Stokes equations are integrated using a time-accurate, implicit procedure. A generalized overset zonal grid scheme is used to decompose the computational space around the complete aircraft with faired-over inlet. A weak coupling between the aerodynamics and structures is assumed to compute the structural oscillation of the flexible vertical tail. Time-accurate computations of the turbulent flow around the F-18 aircraft at 30 degrees angle of attack show the surface and off-surface flowfield details, including the unsteadiness created by the vortex burst and its interaction with the vertical twin tail which causes the tail buffet. The effect of installing a LEX fence on modifying the vortex structure upstream of the tail is also examined.

  8. The Validation of Complete Fourier Direct MR Method for Diffusion MRI via Biological and Numerical Phantoms

    PubMed Central

    Özcan, Alpay; Quirk, James D.; Wang, Yong; Wang, Qing; Sun, Peng; Spees, William M.; Song, Sheng–Kwei

    2012-01-01

    The equations of the Complete Fourier Direct (CFD) MR model are explicitly derived for diffusion weighted NMR experiments. The CFD–MR theory is validated by comparing a biological phantom constructed from nerve bundles and agar gel with its numerical implementation. The displacement integral distribution function estimated from the experimental data is in close agreement with the numerical phantom. The ability of CFD–MR to accurately and fully estimate spin diffusion properties, demonstrated here, provides experimental validation of the theoretical CFD–MR model. PMID:22255156

  9. Accurate adjoint design sensitivities for nano metal optics.

    PubMed

    Hansen, Paul; Hesselink, Lambertus

    2015-09-01

    We present a method for obtaining accurate numerical design sensitivities for metal-optical nanostructures. Adjoint design sensitivity analysis, long used in fluid mechanics and mechanical engineering for both optimization and structural analysis, is beginning to be used for nano-optics design, but it fails for sharp-cornered metal structures because the numerical error in electromagnetic simulations of metal structures is highest at sharp corners. These locations feature strong field enhancement and contribute strongly to design sensitivities. By using high-accuracy FEM calculations and rounding sharp features to a finite radius of curvature we obtain highly-accurate design sensitivities for 3D metal devices. To provide a bridge to the existing literature on adjoint methods in other fields, we derive the sensitivity equations for Maxwell's equations in the PDE framework widely used in fluid mechanics. PMID:26368483
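
    The adjoint sensitivity identity the paper builds on can be sketched in its generic linear-algebra form (this is a textbook-style toy, not the electromagnetic FEM setting; matrices and sizes are illustrative assumptions): for a system A(p)u = b and objective J = cᵀu, one extra adjoint solve Aᵀλ = c yields the gradient dJ/dp = -λᵀ(dA/dp)u for all parameters at once.

```python
import numpy as np

# Hedged sketch: adjoint design sensitivity for a parameterized linear
# system, verified against a central finite difference.
rng = np.random.default_rng(3)
n = 6
A0 = np.eye(n) * 4.0 + 0.1 * rng.normal(size=(n, n))
B = rng.normal(size=(n, n))            # dA/dp for a single parameter p
b = rng.normal(size=n)
c = rng.normal(size=n)

def J(p):
    return c @ np.linalg.solve(A0 + p * B, b)

p = 0.2
u = np.linalg.solve(A0 + p * B, b)         # forward solve
lam = np.linalg.solve((A0 + p * B).T, c)   # adjoint solve
grad_adjoint = -lam @ (B @ u)

eps = 1e-6
grad_fd = (J(p + eps) - J(p - eps)) / (2.0 * eps)
```

    The practical appeal, in nano-optics as elsewhere, is that the gradient with respect to many design parameters costs only one forward and one adjoint solve, instead of one solve per parameter.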

  10. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being carried out to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  11. AN INTEGRAL EQUATION REPRESENTATION OF WIDE-BAND ELECTROMAGNETIC SCATTERING BY THIN SHEETS

    EPA Science Inventory

    An efficient, accurate numerical modeling scheme has been developed, based on the integral equation solution to compute electromagnetic (EM) responses of thin sheets over a wide frequency band. The thin-sheet approach is useful for simulating the EM response of a fracture system ...

  12. Efficient and accurate sound propagation using adaptive rectangular decomposition.

    PubMed

    Raghuvanshi, Nikunj; Narain, Rahul; Lin, Ming C

    2009-01-01

    Accurate sound rendering can add significant realism to complement visual display in interactive applications, as well as facilitate acoustic predictions for many engineering applications, like accurate acoustic analysis for architectural design. Numerical simulation can provide this realism most naturally by modeling the underlying physics of wave propagation. However, wave simulation has traditionally posed a tough computational challenge. In this paper, we present a technique which relies on an adaptive rectangular decomposition of 3D scenes to enable efficient and accurate simulation of sound propagation in complex virtual environments. It exploits the known analytical solution of the Wave Equation in rectangular domains, and utilizes an efficient implementation of the Discrete Cosine Transform on Graphics Processors (GPU) to achieve at least a 100-fold performance gain compared to a standard Finite-Difference Time-Domain (FDTD) implementation with comparable accuracy, while also being 10-fold more memory efficient. Consequently, we are able to perform accurate numerical acoustic simulation on large, complex scenes in the kilohertz range. To the best of our knowledge, it was not previously possible to perform such simulations on a desktop computer. Our work thus enables acoustic analysis on large scenes and auditory display for complex virtual environments on commodity hardware. PMID:19590105
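The core idea the abstract describes, using the known analytical (cosine-mode) solution of the wave equation in a rectangular domain via the DCT, can be sketched as follows. This is an illustrative reconstruction, not the authors' GPU implementation; it assumes rigid (Neumann) walls, and the grid size, pulse, and time step are made-up values:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Rectangular domain with Neumann (rigid) walls: cosine modes diagonalize the Laplacian,
# so each DCT coefficient evolves analytically with its own frequency omega = c*|k|.
c, dx, dt = 343.0, 0.05, 1e-4
nx, ny = 64, 64
kx = np.pi * np.arange(nx) / (nx * dx)
ky = np.pi * np.arange(ny) / (ny * dx)
omega = c * np.sqrt(kx[:, None]**2 + ky[None, :]**2)

# Initial pressure field: a Gaussian pulse, with zero initial velocity.
x = np.arange(nx) * dx
p = np.exp(-((x[:, None] - 1.6)**2 + (x[None, :] - 1.6)**2) / 0.01)

m_prev = dctn(p, type=2, norm="ortho")          # modes at t = 0
m_curr = np.cos(omega * dt) * m_prev            # modes at t = dt (zero-velocity start)
for _ in range(100):
    # Exact per-mode recurrence: m(t+dt) = 2*cos(omega*dt)*m(t) - m(t-dt).
    m_next = 2.0 * np.cos(omega * dt) * m_curr - m_prev
    m_prev, m_curr = m_curr, m_next

p_t = idctn(m_curr, type=2, norm="ortho")       # pressure field after 101 steps
# The (0,0) mode has zero frequency, so the mean pressure is preserved exactly.
```

Because the update is exact per mode, there is no numerical dispersion inside each rectangle; the paper's adaptive decomposition then couples many such rectangles with interface handling.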

  13. Accurate Mass Measurements in Proteomics

    SciTech Connect

    Liu, Tao; Belov, Mikhail E.; Jaitly, Navdeep; Qian, Weijun; Smith, Richard D.

    2007-08-01

    To understand different aspects of life at the molecular level, one would ideally isolate and study all components of specific processes individually. Reductionist approaches, i.e., studying one biological event on a one-gene or one-protein-at-a-time basis, have indeed made significant contributions to our understanding of many basic facts of biology. However, these individual “building blocks” cannot be assembled into a comprehensive “model” of the life of cells, tissues, and organisms without more integrative approaches.1,2 For example, the emerging field of “systems biology” aims to quantify all of the components of a biological system to assess their interactions and to integrate diverse types of information obtainable from this system into models that could explain and predict behaviors.3-6 Recent breakthroughs in genomics, proteomics, and bioinformatics are making this daunting task a reality.7-14 Proteomics, the systematic study of the entire complement of proteins expressed by an organism, tissue, or cell under a specific set of conditions at a specific time (i.e., the proteome), has become an essential enabling component of systems biology. While the genome of an organism may be considered static over short timescales, the expression of that genome as the actual gene products (i.e., mRNAs and proteins) is a dynamic event that is constantly changing due to the influence of environmental and physiological conditions. Exclusive monitoring of the transcriptome can be carried out using high-throughput cDNA microarray analysis;15-17 however, the measured mRNA levels do not necessarily correlate strongly with the corresponding abundances of proteins.18-20 The actual amount of functional proteins can be altered significantly and become independent of mRNA levels as a result of post-translational modifications (PTMs),21 alternative splicing,22,23 and protein turnover.24,25 Moreover, the functions of expressed

  14. Numeric simulation of plant signaling networks.

    PubMed

    Genoud, T; Trevino Santa Cruz, M B; Métraux, J P

    2001-08-01

    Plants have evolved an intricate signaling apparatus that integrates relevant information and allows an optimal response to environmental conditions. For instance, the coordination of defense responses against pathogens involves sophisticated molecular detection and communication systems. Multiple protection strategies may be deployed differentially by the plant according to the nature of the invading organism. These responses are also influenced by the environment, metabolism, and developmental stage of the plant. Though the cellular signaling processes traditionally have been described as linear sequences of events, it is now evident that they may be represented more accurately as network-like structures. The emerging paradigm can be represented readily with the use of Boolean language. This digital (numeric) formalism allows an accurate qualitative description of the signal transduction processes, and a dynamic representation through computer simulation. Moreover, it provides the required power to process the increasing amount of information emerging from the fields of genomics and proteomics, and from the use of new technologies such as microarray analysis. In this review, we have used the Boolean language to represent and analyze part of the signaling network of disease resistance in Arabidopsis. PMID:11500542
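The Boolean (digital) formalism the review describes can be sketched with a toy synchronous network. The nodes and update rules below are hypothetical illustrations of SA/JA-style defense signaling, not the authors' actual Arabidopsis network:

```python
# Toy synchronous Boolean network (hypothetical rules, for illustration only).
# Each node's next state is a Boolean function of the current global state.
rules = {
    "pathogen": lambda s: s["pathogen"],              # external input, held fixed
    "wounding": lambda s: s["wounding"],              # external input, held fixed
    "SA":       lambda s: s["pathogen"],              # salicylic acid induced by pathogen
    "JA":       lambda s: s["wounding"] and not s["SA"],  # jasmonate, antagonized by SA
    "PR1":      lambda s: s["SA"],                    # defense gene downstream of SA
}

def step(state):
    """Advance all nodes simultaneously (synchronous update)."""
    return {node: bool(f(state)) for node, f in rules.items()}

state = {"pathogen": True, "wounding": True, "SA": False, "JA": False, "PR1": False}
for _ in range(3):
    state = step(state)
# The network reaches a fixed point: SA on, PR1 on, JA repressed by SA.
```

Iterating `step` until the state repeats reveals the network's attractors, which is how qualitative dynamics are read off such models.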

  15. Numeric Simulation of Plant Signaling Networks1

    PubMed Central

    Genoud, Thierry; Trevino Santa Cruz, Marcela B.; Métraux, Jean-Pierre

    2001-01-01

    Plants have evolved an intricate signaling apparatus that integrates relevant information and allows an optimal response to environmental conditions. For instance, the coordination of defense responses against pathogens involves sophisticated molecular detection and communication systems. Multiple protection strategies may be deployed differentially by the plant according to the nature of the invading organism. These responses are also influenced by the environment, metabolism, and developmental stage of the plant. Though the cellular signaling processes traditionally have been described as linear sequences of events, it is now evident that they may be represented more accurately as network-like structures. The emerging paradigm can be represented readily with the use of Boolean language. This digital (numeric) formalism allows an accurate qualitative description of the signal transduction processes, and a dynamic representation through computer simulation. Moreover, it provides the required power to process the increasing amount of information emerging from the fields of genomics and proteomics, and from the use of new technologies such as microarray analysis. In this review, we have used the Boolean language to represent and analyze part of the signaling network of disease resistance in Arabidopsis. PMID:11500542

  16. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  17. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  18. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  19. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  20. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  1. A new method to construct integrable approximations to nearly integrable systems in Celestial Mechanics: application to the Sitnikov problem

    NASA Astrophysics Data System (ADS)

    Hagel, Johannes

    2015-06-01

    The Sitnikov problem with nonzero primary eccentricities is a non-integrable dynamical system. In this contribution, a second dynamical system, close to the original one but fully integrable, is constructed. We call this system the "approximating integrable system", and we give a rigorous definition for it as well as for the "distance" between the integrable and the non-integrable system. The first integral of the approximating system is derived in closed form, and from this result the most important system properties are found algebraically and compared to those of the Sitnikov problem obtained by numerical integration. It turns out that for the given range of eccentricity and initial amplitude, the approximating system accurately describes the most important properties of the Sitnikov problem.
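For reference, the circular case (e = 0) of the Sitnikov problem is itself integrable, and a numerical integration of it can be checked against its conserved energy. The sketch below uses standard normalization (primaries of mass 1/2 at half-separation r = 1/2); the initial amplitude and tolerances are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Circular Sitnikov case (e = 0): z'' = -z / (r^2 + z^2)^(3/2) with r = 1/2.
def rhs(t, y):
    z, v = y
    return [v, -z / (0.25 + z * z) ** 1.5]

sol = solve_ivp(rhs, (0.0, 50.0), [0.5, 0.0], rtol=1e-10, atol=1e-12)

# In the circular case the energy E = v^2/2 - 1/sqrt(1/4 + z^2) is a first integral,
# so its drift along the numerical trajectory measures the integration error.
z, v = sol.y
E = 0.5 * v**2 - 1.0 / np.sqrt(0.25 + z**2)
```

For e > 0 the half-separation r becomes time dependent and the energy is no longer conserved, which is where an approximating integrable system with a closed-form first integral becomes useful as a benchmark.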

  2. Dynamical Approach Study of Spurious Numerics in Nonlinear Computations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Mansour, Nagi (Technical Monitor)

    2002-01-01

    The last two decades have been an era when computation is ahead of analysis and when very large scale practical computations are increasingly used in poorly understood multiscale complex nonlinear physical problems and non-traditional fields. Ensuring a higher level of confidence in the predictability and reliability (PAR) of these numerical simulations could play a major role in furthering the design, understanding, affordability and safety of our next generation air and space transportation systems, and systems for planetary and atmospheric sciences, and in understanding the evolution and origin of life. The need to guarantee PAR becomes acute when computations offer the ONLY way of solving these types of data limited problems. Employing theory from nonlinear dynamical systems, some building blocks to ensure a higher level of confidence in PAR of numerical simulations have been revealed by the author and world expert collaborators in relevant fields. Five building blocks with supporting numerical examples were discussed. The next step is to utilize knowledge gained by including nonlinear dynamics, bifurcation and chaos theories as an integral part of the numerical process. The third step is to design integrated criteria for reliable and accurate algorithms that cater to the different multiscale nonlinear physics. This includes but is not limited to the construction of appropriate adaptive spatial and temporal discretizations that are suitable for the underlying governing equations. In addition, a multiresolution wavelets approach for adaptive numerical dissipation/filter controls for high speed turbulence, acoustics and combustion simulations will be sought. These steps are cornerstones for guarding against spurious numerical solutions that are solutions of the discretized counterparts but are not solutions of the underlying governing equations.

  3. Accurate pressure gradient calculations in hydrostatic atmospheric models

    NASA Technical Reports Server (NTRS)

    Carroll, John J.; Mendez-Nunez, Luis R.; Tanrikulu, Saffet

    1987-01-01

    A method for the accurate calculation of the horizontal pressure gradient acceleration in hydrostatic atmospheric models is presented which is especially useful in situations where the isothermal surfaces are not parallel to the vertical coordinate surfaces. The present method is shown to be exact if the potential temperature lapse rate is constant between the vertical pressure integration limits. The technique is applied to both the integration of the hydrostatic equation and the computation of the slope correction term in the horizontal pressure gradient. A fixed vertical grid and a dynamic grid defined by the significant levels in the vertical temperature distribution are employed.
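The value of choosing the right vertical integration variable can be illustrated in the simpler isothermal case, where stepping the hydrostatic equation in ln p reproduces the analytic pressure profile to rounding error. This is a sketch under that simplifying assumption, not the paper's constant-potential-temperature-lapse-rate method; the constants are illustrative:

```python
import numpy as np

# Hydrostatic balance dp/dz = -p*g/(R*T). For an isothermal layer, integrating
# d(ln p) = -g dz / (R*T0) over each step is exact, so the discrete profile
# matches the analytic solution p(z) = p0 * exp(-g*z/(R*T0)) to rounding error.
g, R, T0, p0 = 9.81, 287.0, 280.0, 101325.0

z = np.linspace(0.0, 5000.0, 501)
dz = z[1] - z[0]
p = np.empty_like(z)
p[0] = p0
for i in range(len(z) - 1):
    p[i + 1] = p[i] * np.exp(-g * dz / (R * T0))   # exact log-pressure step

p_exact = p0 * np.exp(-g * z / (R * T0))
```

A naive step p[i+1] = p[i] - p[i]*g*dz/(R*T0) would instead accumulate truncation error, which is the kind of error the paper's exact-integration construction avoids for more general temperature profiles.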

  4. A general, accurate procedure for calculating molecular interaction force.

    PubMed

    Yang, Pinghai; Qian, Xiaoping

    2009-09-15

    The determination of molecular interaction forces, e.g., the van der Waals force, between macroscopic bodies is of fundamental importance for understanding sintering, adhesion and fracture processes. In this paper, we develop an accurate, general procedure for van der Waals force calculation. This approach extends a surface formulation that converts a six-dimensional (6D) volume integral into a 4D surface integral for the force calculation. It uses non-uniform rational B-spline (NURBS) surfaces to represent object surfaces. Surface integrals are then evaluated on the parametric domain of the NURBS surfaces. The approach combines the advantages of NURBS surface representation and the surface formulation: (1) molecular interactions between arbitrarily shaped objects can be represented and evaluated with the NURBS model; furthermore, common geometries such as spheres, cones and planes can be represented exactly, so interaction forces are calculated accurately; (2) calculation efficiency is improved by converting the volume integral to the surface integral. This approach is implemented and validated via comparison with analytical solutions for simple geometries. Calculation of the van der Waals force between complex geometries with surface roughness is also demonstrated. A tutorial on the NURBS approach is given in Appendix A. PMID:19596335

  5. Integration methods for molecular dynamics

    SciTech Connect

    Leimkuhler, B.J.; Reich, S.; Skeel, R.D.

    1996-12-31

    Classical molecular dynamics simulation of a macromolecule requires the use of an efficient time-stepping scheme that can faithfully approximate the dynamics over many thousands of timesteps. Because these problems are highly nonlinear, accurate approximation of a particular solution trajectory on meaningful time intervals is neither obtainable nor desired, but some restrictions, such as symplecticness, can be imposed on the discretization which tend to imply good long term behavior. The presence of a variety of types and strengths of interatom potentials in standard molecular models places severe restrictions on the timestep for numerical integration used in explicit integration schemes, so much recent research has concentrated on the search for alternatives that possess (1) proper dynamical properties, and (2) a relative insensitivity to the fastest components of the dynamics. We survey several recent approaches. 48 refs., 2 figs.
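One widely used scheme with the symplecticness property the survey emphasizes is velocity Verlet. The sketch below uses a harmonic "bond" as a stand-in for a real molecular force field; the force constant and step count are illustrative:

```python
import numpy as np

def velocity_verlet(force, x, v, dt, n_steps, m=1.0):
    """Symplectic velocity-Verlet integrator: bounded energy error over long
    runs for Hamiltonian systems, rather than secular drift."""
    a = force(x) / m
    traj = [x]
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * a * dt * dt   # position update
        a_new = force(x) / m                 # force at new position
        v = v + 0.5 * (a + a_new) * dt       # velocity update (averaged force)
        a = a_new
        traj.append(x)
    return np.array(traj), v

# Harmonic bond F = -k*x; exact energy is E = 0.5*k*x0^2 = 0.5 for x0 = 1, v0 = 0.
k = 1.0
traj, v_end = velocity_verlet(lambda x: -k * x, 1.0, 0.0, 0.01, 10_000)
```

Over 10,000 steps the energy error stays bounded at O(dt^2); a non-symplectic scheme such as explicit Euler would instead gain energy monotonically, which is exactly the long-term behavior issue the abstract describes.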

  6. Discretely disordered photonic bandgap structures: a more accurate invariant measure calculation

    NASA Astrophysics Data System (ADS)

    Kissel, Glen J.

    2009-02-01

    In the one-dimensional optical analog to Anderson localization, a periodically layered medium has one or more parameters randomly disordered. Such a randomized system can be modeled by an infinite product of 2x2 random transfer matrices with the upper Lyapunov exponent of the matrix product identified as the localization factor (inverse localization length) for the model. The theorem of Furstenberg allows us, at least theoretically, to calculate this upper Lyapunov exponent. In Furstenberg's formula we not only integrate with respect to the probability measure of the random matrices, but also with respect to the invariant probability measure of the direction of the vector propagated by the random matrices. This invariant measure is difficult to find analytically, and, as a result, the most successful approach is to determine the invariant measure numerically. A Monte Carlo simulation which uses accumulated bin counts to track the direction of the propagated vector through a long chain of random matrices does a good job of estimating the invariant probability measure, but with a level of uncertainty. A potentially more accurate numerical technique by Froyland and Aihara obtains the invariant measure as a left eigenvector of a large sparse matrix containing probability values determined by the action of the random matrices on input vectors. We first apply these two techniques to a random Fibonacci sequence whose Lyapunov exponent was determined by Viswanath. We then demonstrate these techniques on a quarter-wave stack model with binary discrete disorder in layer thickness, and compare results to the continuously disordered counterpart.
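The vector-propagation idea underlying both numerical approaches can be sketched on the random Fibonacci sequence mentioned in the abstract, whose growth rate Viswanath computed as 1.13198824... The Monte Carlo estimator below (propagate a vector through random transfer matrices, accumulate log-norms) is an illustration; the sample size and seed are arbitrary:

```python
import numpy as np

# Random Fibonacci: x_{n+1} = x_n +/- x_{n-1}, i.e. transfer matrices [[1, s], [1, 0]]
# with s = +/-1 equiprobable. The upper Lyapunov exponent of the matrix product is
# log(1.13198824...) ~ 0.12398 (Viswanath's constant).
rng = np.random.default_rng(0)
v = np.array([1.0, 1.0])
log_sum, n = 0.0, 200_000
for _ in range(n):
    s = rng.choice([-1.0, 1.0])
    v = np.array([v[0] + s * v[1], v[0]])   # apply random transfer matrix
    norm = np.hypot(v[0], v[1])
    log_sum += np.log(norm)                 # accumulate log growth
    v /= norm                               # renormalize to avoid overflow

lam = log_sum / n   # Monte Carlo estimate of the upper Lyapunov exponent
```

The normalized vector's direction is exactly the quantity whose invariant measure Furstenberg's formula integrates over; binning it (as in the abstract's Monte Carlo approach) or solving for it as an eigenvector (Froyland-Aihara) are two ways to obtain that measure.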

  7. Painfree and accurate Bayesian estimation of psychometric functions for (potentially) overdispersed data.

    PubMed

    Schütt, Heiko H; Harmeling, Stefan; Macke, Jakob H; Wichmann, Felix A

    2016-05-01

    The psychometric function describes how an experimental variable, such as stimulus strength, influences the behaviour of an observer. Estimation of psychometric functions from experimental data plays a central role in fields such as psychophysics, experimental psychology and the behavioural neurosciences. Experimental data may exhibit substantial overdispersion, which may result from non-stationarity in the behaviour of observers. Here we extend the standard binomial model, which is typically used for psychometric function estimation, to a beta-binomial model. We show that the use of the beta-binomial model makes it possible to determine accurate credible intervals even in data which exhibit substantial overdispersion. This goes beyond classical measures for overdispersion (goodness-of-fit), which can detect overdispersion but provide no method for correct inference on overdispersed data. We use Bayesian inference methods for estimating the posterior distribution of the parameters of the psychometric function. Unlike previous Bayesian psychometric inference methods, our software implementation, psignifit 4, performs numerical integration of the posterior within automatically determined bounds. This avoids the use of Markov chain Monte Carlo (MCMC) methods, which typically require expert knowledge. Extensive numerical tests show the validity of the approach, and we discuss implications of overdispersion for experimental design. A comprehensive MATLAB toolbox implementing the method is freely available; a python implementation providing the basic capabilities is also available. PMID:27013261
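The key property of the beta-binomial model, variance inflation relative to the binomial, can be sketched directly. The mean/concentration parameterization below (`alpha = p*scale`, `beta = (1-p)*scale`) is an illustrative choice, not psignifit 4's API:

```python
import numpy as np
from math import lgamma, exp

def beta_binom_pmf(k, n, p, scale):
    """Beta-binomial pmf with mean n*p; alpha = p*scale, beta = (1-p)*scale.
    Approaches Binomial(n, p) as scale -> infinity."""
    a, b = p * scale, (1.0 - p) * scale
    log_pmf = (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
               + lgamma(k + a) + lgamma(n - k + b) - lgamma(n + a + b)
               + lgamma(a + b) - lgamma(a) - lgamma(b))
    return exp(log_pmf)

# For n = 20 trials at p = 0.7: binomial variance is n*p*(1-p) = 4.2, while the
# beta-binomial variance is n*p*(1-p)*(scale + n)/(scale + 1) = 4.2*30/11 ~ 11.45.
n, p, scale = 20, 0.7, 10.0
ks = np.arange(n + 1)
pmf = np.array([beta_binom_pmf(k, n, p, scale) for k in ks])
mean = float((ks * pmf).sum())
var = float(((ks - mean) ** 2 * pmf).sum())
```

This inflated variance is what lets the model produce honest (wider) credible intervals for overdispersed, non-stationary observers.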

  8. An accurate and efficient Lagrangian sub-grid model for multi-particle dispersion

    NASA Astrophysics Data System (ADS)

    Toschi, Federico; Mazzitelli, Irene; Lanotte, Alessandra S.

    2014-11-01

    Many natural and industrial processes involve the dispersion of particles in turbulent flows. Despite recent theoretical progress in the understanding of particle dynamics in simple turbulent flows, complex geometries often call for numerical approaches based on Eulerian Large Eddy Simulation (LES). One important issue related to the Lagrangian integration of tracers in under-resolved velocity fields is the lack of spatial correlations at unresolved scales. Here we propose a computationally efficient Lagrangian model for the sub-grid velocity of tracers dispersed in statistically homogeneous and isotropic turbulent flows. The model incorporates the multi-scale nature of turbulent temporal and spatial correlations that are essential to correctly reproduce the dynamics of multi-particle dispersion. The new model is able to describe the Lagrangian temporal and spatial correlations in clouds of particles. In particular, we show that pair and tetrad dispersion compare well with results from Direct Numerical Simulations of statistically isotropic and homogeneous 3D turbulence. This model may offer an accurate and efficient way to describe multi-particle dispersion in under-resolved turbulent velocity fields such as those employed in Eulerian LES. This work is part of the research programme FP112 of the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organisation for Scientific Research (NWO). We acknowledge support from the EU COST Action MP0806.

  9. A new class of accurate, mesh-free hydrodynamic simulation methods

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2015-06-01

    We present two new Lagrangian methods for hydrodynamics, in a systematic comparison with moving-mesh, smoothed particle hydrodynamics (SPH), and stationary (non-moving) grid methods. The new methods are designed to simultaneously capture advantages of both SPH and grid-based/adaptive mesh refinement (AMR) schemes. They are based on a kernel discretization of the volume coupled to a high-order matrix gradient estimator and a Riemann solver acting over the volume `overlap'. We implement and test a parallel, second-order version of the method with self-gravity and cosmological integration, in the code GIZMO:1 this maintains exact mass, energy and momentum conservation; exhibits superior angular momentum conservation compared to all other methods we study; does not require `artificial diffusion' terms; and allows the fluid elements to move with the flow, so resolution is automatically adaptive. We consider a large suite of test problems, and find that on all problems the new methods appear competitive with moving-mesh schemes, with some advantages (particularly in angular momentum conservation), at the cost of enhanced noise. The new methods have many advantages versus SPH: proper convergence, good capturing of fluid-mixing instabilities, dramatically reduced `particle noise' and numerical viscosity, more accurate sub-sonic flow evolution, and sharp shock-capturing. Advantages versus non-moving meshes include: automatic adaptivity, dramatically reduced advection errors and numerical overmixing, velocity-independent errors, accurate coupling to gravity, good angular momentum conservation and elimination of `grid alignment' effects. We can, for example, follow hundreds of orbits of gaseous discs, while AMR and SPH methods break down in a few orbits. However, fixed meshes minimize `grid noise'. These differences are important for a range of astrophysical problems.

  10. Numerical integration of second order differential equations

    NASA Technical Reports Server (NTRS)

    Shanks, E. B.

    1971-01-01

    Performance characteristics of higher order approximations of Runge-Kutta type are analyzed, and performance predictors for time required on machine and for error size are developed. Technique is useful in evaluating system performance, analyzing material characteristics, and designing inertial guidance and nuclear instrumentation and materials.
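A second-order equation of the kind Shanks considers is integrated by a Runge-Kutta method after rewriting it as a first-order system. The sketch below applies classical fourth-order RK to x'' = -x (exact solution x(t) = cos t); the step count is illustrative:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# x'' = -x as the first-order system (x, x')' = (x', -x); start at x = 1, x' = 0.
f = lambda t, y: np.array([y[1], -y[0]])
n = 1000
h = 2 * np.pi / n
y = np.array([1.0, 0.0])
for i in range(n):
    y = rk4_step(f, i * h, y, h)
# After one full period, y should return to (1, 0) up to the O(h^4) global error.
```

Halving h reduces the end-of-period error by roughly a factor of 16, which is the fourth-order convergence such performance predictors quantify.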

  11. Hydroforming Of Patchwork Blanks — Numerical Modeling And Experimental Validation

    NASA Astrophysics Data System (ADS)

    Lamprecht, Klaus; Merklein, Marion; Geiger, Manfred

    2005-08-01

    In comparison to the commonly applied technology of tailored blanks the concept of patchwork blanks offers a number of additional advantages. Potential application areas for patchwork blanks in automotive industry are e.g. local reinforcements of automotive closures, structural reinforcements of rails and pillars as well as shock towers. But even though patchwork blanks offer significant application potential in automobile production, industrial realization of this innovative technique has been slowed by a lack of knowledge regarding the forming behavior and the numerical modeling of patchwork blanks. Especially for the numerical simulation of hydroforming processes, where one part of the forming tool is replaced by a fluid under pressure, advanced modeling techniques are required to ensure an accurate prediction of the blanks' forming behavior. The objective of this contribution is to provide an appropriate model for the numerical simulation of patchwork blanks' forming processes. Therefore, different finite element modeling techniques for patchwork blanks are presented. In addition to basic shell element models a combined finite element model consisting of shell and solid elements is defined. Special emphasis is placed on the modeling of the weld seam. For this purpose the local mechanical properties of the weld metal, which have been determined by means of Martens-hardness measurements and uniaxial tensile tests, are integrated in the finite element models. The results obtained from the numerical simulations are compared to experimental data from a hydraulic bulge test. In this context the focus is laid on laser- and spot-welded patchwork blanks.

  12. Benchmarking accurate spectral phase retrieval of single attosecond pulses

    NASA Astrophysics Data System (ADS)

    Wei, Hui; Le, Anh-Thu; Morishita, Toru; Yu, Chao; Lin, C. D.

    2015-02-01

    A single extreme-ultraviolet (XUV) attosecond pulse or pulse train in the time domain is fully characterized if its spectral amplitude and phase are both determined. The spectral amplitude can be easily obtained from photoionization of simple atoms where accurate photoionization cross sections have been measured from, e.g., synchrotron radiations. To determine the spectral phase, at present the standard method is to carry out XUV photoionization in the presence of a dressing infrared (IR) laser. In this work, we examine the accuracy of current phase retrieval methods (PROOF and iPROOF) where the dressing IR is relatively weak such that photoelectron spectra can be accurately calculated by second-order perturbation theory. We suggest a modified method named swPROOF (scattering wave phase retrieval by omega oscillation filtering) which utilizes accurate one-photon and two-photon dipole transition matrix elements and removes the approximations made in PROOF and iPROOF. We show that the swPROOF method can in general retrieve accurate spectral phase compared to other simpler models that have been suggested. We benchmark the accuracy of these phase retrieval methods through simulating the spectrogram by solving the time-dependent Schrödinger equation numerically using several known single attosecond pulses with a fixed spectral amplitude but different spectral phases.

  13. Some self starting integrators for x Prime equals f (x, t). [Runge-Kutta method and orbital position estimation

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1974-01-01

    The integration of the vector differential equation X' = F(X, t) from time t sub i to t sub (i+1) is discussed, where only the values of x sub i are available for the integration. No previous values of x or x prime are used. Using an orbit integration problem, comparisons are made between Taylor series integrators and various types and orders of Runge-Kutta integrators. A fourth order Runge-Kutta type integrator for orbital work is presented, and approximate (there may be no exact) fifth order Runge-Kutta integrators are discussed. Also discussed and compared is a self starting integrator using delta f/delta x. A numerical method for controlling the accuracy of integration is given, and the special equations for accurately integrating accelerometer data are shown.

  14. Elliptic integrals: Symmetry and symbolic integration

    SciTech Connect

    Carlson, B.C. |

    1997-12-31

    Computation of elliptic integrals, whether numerical or symbolic, has been aided by the contributions of Italian mathematicians. Tricomi had a strong interest in iterative algorithms for computing elliptic integrals and other special functions, and his writings on elliptic functions and elliptic integrals have taught these subjects to many modern readers (including the author). The theory of elliptic integrals began with Fagnano's duplication theorem, a generalization of which is now used iteratively for numerical computation in major software libraries. One of Lauricella's multivariate hypergeometric functions has been found to contain all elliptic integrals as special cases and has led to the introduction of symmetric canonical forms. These forms provide major economies in new integral tables and offer a significant advantage also for symbolic integration of elliptic integrals. Although partly expository, the present paper includes some new proofs and proposes a new procedure for symbolic integration.
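The iterative use of the duplication theorem mentioned in the abstract is the standard algorithm for Carlson's symmetric canonical form R_F. A minimal sketch (the tolerance and termination test are illustrative; production code adds a truncated series correction):

```python
import math

def rf(x, y, z, tol=1e-12):
    """Carlson's symmetric elliptic integral R_F(x, y, z) via the duplication
    theorem: each iteration shrinks the spread of the arguments by a factor of
    four while leaving R_F invariant, so R_F -> mu**(-1/2) at convergence."""
    while True:
        lam = (math.sqrt(x) * math.sqrt(y)
               + math.sqrt(y) * math.sqrt(z)
               + math.sqrt(z) * math.sqrt(x))
        x, y, z = (x + lam) / 4, (y + lam) / 4, (z + lam) / 4
        mu = (x + y + z) / 3
        if max(abs(x - mu), abs(y - mu), abs(z - mu)) < tol * mu:
            return 1.0 / math.sqrt(mu)

# All complete and incomplete elliptic integrals of the first kind reduce to R_F;
# e.g. the complete integral K(m) = R_F(0, 1 - m, 1), so K(0) = R_F(0, 1, 1) = pi/2.
k_half = rf(0.0, 0.5, 1.0)   # K(m = 0.5)
```

The symmetry of R_F in its three arguments is what collapses the many case distinctions of Legendre-form integral tables into a single canonical entry.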

  15. Detection and accurate localization of harmonic chipless tags

    NASA Astrophysics Data System (ADS)

    Dardari, Davide

    2015-12-01

    We investigate the detection and localization properties of harmonic tags working at microwave frequencies. A two-tone interrogation signal and a dedicated signal processing scheme at the receiver are proposed to eliminate phase ambiguities caused by the short signal wavelength and to provide accurate distance/position estimation even in the presence of clutter and multipath. The theoretical limits on tag detection and localization accuracy are investigated starting from a concise characterization of harmonic backscattered signals. Numerical results show that accuracies in the order of centimeters are feasible within an operational range of a few meters in the RFID UHF band.

  16. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.

    PubMed

    Fuchs, Franz G; Hjelmervik, Jon M

    2016-02-01

    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results. PMID:26731454

  17. A Time-Accurate Upwind Unstructured Finite Volume Method for Compressible Flow with Cure of Pathological Behaviors

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Jorgenson, Philip C. E.

    2007-01-01

    A time-accurate, upwind, finite volume method for computing compressible flows on unstructured grids is presented. The method is second order accurate in space and time and yields high resolution in the presence of discontinuities. For efficiency, the Roe approximate Riemann solver with an entropy correction is employed. In the basic Euler/Navier-Stokes scheme, many concepts of high order upwind schemes are adopted: the surface flux integrals are carefully treated, a Cauchy-Kowalewski time-stepping scheme is used in the time-marching stage, and a multidimensional limiter is applied in the reconstruction stage. However even with these up-to-date improvements, the basic upwind scheme is still plagued by the so-called "pathological behaviors," e.g., the carbuncle phenomenon, the expansion shock, etc. A solution to these limitations is presented which uses a very simple dissipation model while still preserving second order accuracy. This scheme is referred to as the enhanced time-accurate upwind (ETAU) scheme in this paper. The unstructured grid capability renders flexibility for use in complex geometry; and the present ETAU Euler/Navier-Stokes scheme is capable of handling a broad spectrum of flow regimes from high supersonic to subsonic at very low Mach number, appropriate for both CFD (computational fluid dynamics) and CAA (computational aeroacoustics). Numerous examples are included to demonstrate the robustness of the methods.

  18. Integrated Urban Dispersion Modeling Capability

    SciTech Connect

    Kosovic, B; Chan, S T

    2003-11-03

    Numerical simulations represent a unique predictive tool for developing a detailed understanding of three-dimensional flow fields and associated concentration distributions from releases in complex urban settings (Britter and Hanna 2003). The accurate and timely prediction of the atmospheric dispersion of hazardous materials in densely populated urban areas is a critical homeland and national security need for emergency preparedness, risk assessment, and vulnerability studies. The main challenges in high-fidelity numerical modeling of urban dispersion are the accurate prediction of peak concentrations, spatial extent and temporal evolution of harmful levels of hazardous materials, and the incorporation of detailed structural geometries. Current computational tools do not include all the necessary elements to accurately represent hazardous release events in complex urban settings embedded in high-resolution terrain. Nor do they possess the computational efficiency required for many emergency response and event reconstruction applications. We are developing a new integrated urban dispersion modeling capability, able to efficiently predict dispersion in diverse urban environments for a wide range of atmospheric conditions, temporal and spatial scales, and release event scenarios. This new computational fluid dynamics capability includes adaptive mesh refinement and it can simultaneously resolve individual buildings and high-resolution terrain (including important vegetative and land-use features), treat complex building and structural geometries (e.g., stadiums, arenas, subways, airplane interiors), and cope with the full range of atmospheric conditions (e.g. stability). We are developing approaches for seamless coupling with mesoscale numerical weather prediction models to provide realistic forcing of the urban-scale model, which is critical to its performance in real-world conditions.

  19. Implementation of equivalent domain integral method in the two-dimensional analysis of mixed mode problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Shivakumar, K. N.

    1989-01-01

    An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies is presented. The details of the method and its implementation are presented for isoparametric elements. The total and product integrals consist of the sum of an area (or domain) integral and line integrals on the crack faces. The line integrals vanish only when the crack faces are traction free and the loading is pure mode I, pure mode II, or a combination of both with only the square-root singular term in the stress field. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented. The procedure that uses the symmetric and antisymmetric components of the stress and displacement fields to calculate the individual modes gave accurate values of the integrals for all problems analyzed. When applied to a problem of an interface crack between two different materials, the EDI method showed that the mode I and mode II components are domain dependent while the total integral is not. This behavior is caused by the presence of the oscillatory part of the singularity in bimaterial crack problems. The EDI method thus shows behavior similar to the virtual crack closure method for bimaterial problems.

  20. NUMERICAL SOLUTION FOR THE POTENTIAL AND DENSITY PROFILE OF A THERMAL EQUILIBRIUM SHEET BEAM

    SciTech Connect

    Lund, Steven M.; Bazouin, Guillaume

    2011-04-01

    In a recent paper [S. M. Lund, A. Friedman, and G. Bazouin, "Sheet beam model for intense space-charge: with application to Debye screening and the distribution of particle oscillation frequencies in a thermal equilibrium beam," in press, Phys. Rev. Special Topics - Accel. and Beams (2011)], a 1D sheet beam model was extensively analyzed. In this complementary paper, we present details of a numerical procedure developed to construct the self-consistent electrostatic potential and density profile of a thermal equilibrium sheet beam distribution. This procedure effectively circumvents pathologies which can prevent the use of standard numerical integration techniques when space-charge intensity is high. The procedure employs transformations, is straightforward to implement with standard numerical methods, and produces accurate solutions applicable to thermal equilibria with arbitrarily strong space-charge intensity, up to the applied focusing limit.

  1. NUMERICAL SOLUTION FOR THE POTENTIAL AND DENSITY PROFILE OF A THERMAL EQUILIBRIUM SHEET BEAM

    SciTech Connect

    Lund, S M; Bazouin, G

    2011-03-29

    In a recent paper [S. M. Lund, A. Friedman, and G. Bazouin, "Sheet beam model for intense space-charge: with application to Debye screening and the distribution of particle oscillation frequencies in a thermal equilibrium beam," in press, Phys. Rev. Special Topics - Accel. and Beams (2011)], a 1D sheet beam model was extensively analyzed. In this complementary paper, we present details of a numerical procedure developed to construct the self-consistent electrostatic potential and density profile of a thermal equilibrium sheet beam distribution. This procedure effectively circumvents pathologies which can prevent the use of standard numerical integration techniques when space-charge intensity is high. The procedure employs transformations, is straightforward to implement with standard numerical methods, and produces accurate solutions applicable to thermal equilibria with arbitrarily strong space-charge intensity, up to the applied focusing limit.
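The abstract does not spell out the transformations, but the generic pathology is easy to illustrate. In the hypothetical sketch below, a thermal-equilibrium (Boltzmann) density n ∝ exp(-φ/T) is evaluated on a grid: direct evaluation overflows once the scaled potential exceeds the floating-point exponent range (strong space charge corresponds to small effective T), whereas shifting φ by its minimum, a standard stabilizing transformation, keeps every exponent non-positive. The potential, temperature, and grid here are all illustrative, not the authors' procedure.

```python
import math

def boltzmann_density(phi, T):
    """Normalized Boltzmann density n_i ∝ exp(-phi_i / T) on a grid.
    Shifting phi by its minimum keeps every exponent <= 0, so the
    evaluation cannot overflow however small T becomes."""
    phi_min = min(phi)
    w = [math.exp(-(p - phi_min) / T) for p in phi]
    s = sum(w)
    return [wi / s for wi in w]

def naive_density(phi, T):
    """Direct evaluation; raises OverflowError once -phi/T exceeds ~709."""
    w = [math.exp(-p / T) for p in phi]
    s = sum(w)
    return [wi / s for wi in w]
```

Since the physical density must be invariant under a constant shift of the potential, the shifted evaluation gives identical results for phi and phi - c, while the naive one crashes for shifts that drive the exponent past the double-precision limit.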

  2. Investigation of the cold crucible melting process: experimental and numerical study

    NASA Astrophysics Data System (ADS)

    Bojarevics, V.; Djambazov, G.; Harding, R. A.; Pericleous, K.; Wickins, M.

    2003-12-01

    The dynamic process of melting different materials in a cold crucible is being studied experimentally, with parallel numerical modelling work. The numerical simulation uses a variety of complementary models: finite volume, integral equation and pseudo-spectral methods combined to achieve an accurate description of the dynamic melting process. Results show a gradual development and change of the melting front, fluid velocities, magnetically confined liquid metal free surface, and the temperature history during the whole melting process. The computed results are compared to the experimental temperature measurements and the heat losses in the various parts of the equipment. The free surface visual observations are compared to the numerically predicted surface shapes. Tables 2, Figs 5, Refs 8.

  3. Springback Simulation: Impact of Some Advanced Constitutive Models and Numerical Parameters

    NASA Astrophysics Data System (ADS)

    Haddag, Badis; Balan, Tudor; Abed-Meraim, Farid

    2005-08-01

    The impact of material models on the numerical simulation of springback is investigated. The study is focused on the strain-path sensitivity of two hardening models. While both models predict the Bauschinger effect, their responses in the transient zone after a strain-path change are fairly different. Their respective predictions are compared in terms of sequential test response and of strip-drawing springback. For this purpose, an accurate and general time integration algorithm has been developed and implemented in the Abaqus code. The impact of several numerical parameters is also studied in order to assess the overall accuracy of the finite element prediction. For some test geometries, both material and numerical parameters are shown to influence the springback behavior to a large extent. Moreover, a general trend cannot always be extracted, thus justifying the need for finite element simulation of the stamping process.

  4. Numerical Simulation of a High Mach Number Jet Flow

    NASA Technical Reports Server (NTRS)

    Hayder, M. Ehtesham; Turkel, Eli; Mankbadi, Reda R.

    1993-01-01

    The recent efforts to develop accurate numerical schemes for transition and turbulent flows are motivated, among other factors, by the need for accurate prediction of flow noise. The success of developing the high speed civil transport plane (HSCT) is contingent upon our understanding and suppression of the jet exhaust noise. The radiated sound can be directly obtained by solving the full (time-dependent) compressible Navier-Stokes equations. However, this requires computational storage that is beyond currently available machines. This difficulty can be overcome by limiting the solution domain to the near field where the jet is nonlinear and then using an acoustic analogy (e.g., Lighthill) to relate the far-field noise to the near-field sources; the latter requires obtaining the time-dependent flow field. The other difficulty in aeroacoustics computations is that at high Reynolds numbers the turbulent flow has a large range of scales. Direct numerical simulations (DNS) cannot obtain all the scales of motion at high Reynolds numbers of technological interest. However, it is believed that the large-scale structure is more efficient than the small-scale structure in radiating noise. Thus, one can model the small scales and calculate the acoustically active scales. The large-scale structure in the noise-producing initial region of the jet can be viewed as wavelike in nature; the net radiated sound is what remains after cancellation upon integration over space. As such, aeroacoustics computations are highly sensitive to errors in computing the sound sources. It is therefore essential to use a high-order numerical scheme to predict the flow field. The present paper presents the first step in an ongoing effort to predict jet noise. The emphasis here is on accurate prediction of the unsteady flow field. We solve the full time-dependent Navier-Stokes equations by a high-order finite difference method. Time-accurate spatial simulations of both plane and axisymmetric jets are presented. Jet Mach
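The case for high-order schemes can be made concrete with a one-dimensional comparison. The stencils below are generic central differences (the sixth-order one is typical of CAA-grade codes; this is an illustration, not the authors' scheme): at the same grid spacing, the high-order stencil approximates a derivative of a smooth wave far more accurately than the second-order one.

```python
import math

def d1_2nd(f, x, h):
    # Second-order central difference
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d1_6th(f, x, h):
    # Sixth-order central difference (7-point stencil)
    return (45.0 * (f(x + h) - f(x - h))
            - 9.0 * (f(x + 2 * h) - f(x - 2 * h))
            + (f(x + 3 * h) - f(x - 3 * h))) / (60.0 * h)

err2 = abs(d1_2nd(math.sin, 1.0, 0.1) - math.cos(1.0))
err6 = abs(d1_6th(math.sin, 1.0, 0.1) - math.cos(1.0))
```

Halving h cuts the second-order error by about 4x but the sixth-order error by about 64x, which is why high-order stencils are preferred when small phase and amplitude errors in the sound sources matter.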

  5. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.

  6. Remote balance weighs accurately amid high radiation

    NASA Technical Reports Server (NTRS)

    Eggenberger, D. N.; Shuck, A. B.

    1969-01-01

    Commercial beam-type balance, modified and outfitted with electronic controls and digital readout, can be remotely controlled for use in high radiation environments. This allows accurate weighing of breeder-reactor fuel pieces when they are radioactively hot.

  7. Long-term dynamic modeling of tethered spacecraft using nodal position finite element method and symplectic integration

    NASA Astrophysics Data System (ADS)

    Li, G. Q.; Zhu, Z. H.

    2015-12-01

    Dynamic modeling of tethered spacecraft with consideration of the elasticity of the tether is prone to numerical instability and error accumulation over long-term numerical integration. This paper addresses these challenges by proposing a globally stable numerical approach with the nodal position finite element method (NPFEM) and the implicit, symplectic, 2-stage, 4th-order Gauss-Legendre Runge-Kutta time integration. The NPFEM eliminates the numerical error accumulation by using the position instead of the displacement of the tether as the state variable, while the symplectic integration enforces the energy and momentum conservation of the discretized finite element model to ensure the global stability of the numerical solution. The effectiveness and robustness of the proposed approach are assessed by an elastic pendulum problem, whose dynamic response resembles that of tethered spacecraft, in comparison with commonly used time integrators such as the classical 4th-order Runge-Kutta schemes and other families of non-symplectic Runge-Kutta schemes. Numerical results show that the proposed approach is accurate and the energy of the corresponding numerical model is conserved over long-term numerical integration. Finally, the proposed approach is applied to the dynamic modeling of the deorbiting process of tethered spacecraft over a long period.
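The contrast between symplectic and non-symplectic integrators can be sketched on a harmonic oscillator (a simpler stand-in for the elastic pendulum benchmark; the fixed-point stage solver, step size, and step count below are illustrative assumptions, not the paper's setup). The 2-stage Gauss-Legendre tableau is the same order-4 implicit method the abstract names; classical RK4 is the non-symplectic comparison.

```python
import math

SQ3 = math.sqrt(3.0)
A = [[0.25, 0.25 - SQ3 / 6.0],        # 2-stage Gauss-Legendre Butcher tableau
     [0.25 + SQ3 / 6.0, 0.25]]
B = [0.5, 0.5]

def f(y):
    q, p = y
    return (p, -q)                     # harmonic oscillator, H = (p^2 + q^2)/2

def gl4_step(y, h, iters=20):
    # One step of the symplectic, 4th-order Gauss-Legendre Runge-Kutta
    # method; the implicit stages are solved by fixed-point iteration.
    k = [f(y), f(y)]
    for _ in range(iters):
        k = [f((y[0] + h * (A[i][0] * k[0][0] + A[i][1] * k[1][0]),
                y[1] + h * (A[i][0] * k[0][1] + A[i][1] * k[1][1])))
             for i in range(2)]
    return (y[0] + h * (B[0] * k[0][0] + B[1] * k[1][0]),
            y[1] + h * (B[0] * k[0][1] + B[1] * k[1][1]))

def rk4_step(y, h):
    # Classical (non-symplectic) 4th-order Runge-Kutta, for comparison.
    def axpy(s, k):
        return (y[0] + s * k[0], y[1] + s * k[1])
    k1 = f(y)
    k2 = f(axpy(h / 2, k1))
    k3 = f(axpy(h / 2, k2))
    k4 = f(axpy(h, k3))
    return (y[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def energy(y):
    return 0.5 * (y[0] ** 2 + y[1] ** 2)

h, steps = 0.1, 20000                  # 2000 time units
y_gl = (1.0, 0.0)
y_rk = (1.0, 0.0)
for _ in range(steps):
    y_gl = gl4_step(y_gl, h)
    y_rk = rk4_step(y_rk, h)
drift_gl = abs(energy(y_gl) - 0.5)
drift_rk = abs(energy(y_rk) - 0.5)
```

Over this long run the Gauss-Legendre energy drift stays at roundoff level, while RK4 shows a slow secular energy decay, the accumulation behavior the abstract attributes to non-symplectic schemes.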

  8. Understanding the Code: keeping accurate records.

    PubMed

    Griffith, Richard

    2015-10-01

    In his continuing series looking at the legal and professional implications of the Nursing and Midwifery Council's revised Code of Conduct, Richard Griffith discusses the elements of accurate record keeping under Standard 10 of the Code. This article considers the importance of accurate record keeping for the safety of patients and protection of district nurses. The legal implications of records are explained along with how district nurses should write records to ensure these legal requirements are met. PMID:26418404

  9. Surface integral formulations for the design of plasmonic nanostructures.

    PubMed

    Forestiere, Carlo; Iadarola, Giovanni; Rubinacci, Guglielmo; Tamburrino, Antonello; Dal Negro, Luca; Miano, Giovanni

    2012-11-01

    Numerical formulations based on surface integral equations (SIEs) provide an accurate and efficient framework for the solution of the electromagnetic scattering problem by three-dimensional plasmonic nanostructures in the frequency domain. In this paper, we present a unified description of SIE formulations with both singular and nonsingular kernel and we study their accuracy in solving the scattering problem by metallic nanoparticles with spherical and nonspherical shape. In fact, the accuracy of the numerical solution, especially in the near zone, is of great importance in the analysis and design of plasmonic nanostructures, whose operation critically depends on the manipulation of electromagnetic hot spots. Four formulation types are considered: the N-combined region integral equations, the T-combined region integral equations, the combined field integral equations and the null field integral equations. A detailed comparison between their numerical solutions obtained for several nanoparticle shapes is performed by examining convergence rate and accuracy in both the far and near zone of the scatterer as a function of the number of degrees of freedom. A rigorous analysis of SIE formulations and their limitations can have a high impact on the engineering of numerous nano-scale optical devices such as plasmon-enhanced light emitters, biosensors, photodetectors, and nanoantennas. PMID:23201792

  10. Surface Integral Formulations for the Design of Plasmonic Nanostructures

    NASA Astrophysics Data System (ADS)

    Forestiere, Carlo; Iadarola, Giovanni; Rubinacci, Guglielmo; Tamburrino, Antonello; Dal Negro, Luca; Miano, Giovanni; Boston University Team; Universita'degli Studi di Napoli Federico Team, II; Universita'di Cassino e del Lazio Meridionale Team

    2013-03-01

    Numerical formulations based on surface integral equations (SIEs) provide an accurate and efficient framework for the solution of the electromagnetic scattering problem by three-dimensional plasmonic nanostructures in the frequency domain. In this work, we present a unified description of SIE formulations with both singular and nonsingular kernel and we study their accuracy in solving the scattering problem by metallic nanoparticles with spherical and nonspherical shape. In fact, the accuracy of the numerical solution, especially in the near zone, is of great importance in the analysis and design of plasmonic nanostructures, whose operation critically depends on the manipulation of electromagnetic hot spots. Four formulation types are considered: the N-combined region integral equations, the T-combined region integral equations, the combined field integral equations and the null field integral equations. A detailed comparison between their numerical solutions obtained for several nanoparticle shapes is performed by examining convergence rate and accuracy in both the far and near zone of the scatterer as a function of the number of degrees of freedom. A rigorous analysis of SIE formulations can have a high impact on the engineering of numerous nano-scale optical devices.

  11. FRACTIONAL INTEGRATION TOOLBOX

    PubMed Central

    Marinov, Toma M.; Ramirez, Nelson; Santamaria, Fidel

    2014-01-01

    The problems formulated in the fractional calculus framework often require numerical fractional integration/differentiation of large data sets. Several existing fractional control toolboxes are capable of performing fractional calculus operations; however, none of them can efficiently perform numerical integration on multiple large data sequences. We developed a Fractional Integration Toolbox (FIT), which efficiently performs fractional numerical integration/differentiation of the Riemann-Liouville type on large data sequences. The toolbox allows parallelization and is designed to be deployed on both CPU and GPU platforms. PMID:24812536
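A minimal sketch of the kind of operation such a toolbox performs: the Riemann-Liouville fractional integral approximated with first-order Grünwald-Letnikov weights. This is a generic textbook scheme, not FIT's implementation, and the grid size is an illustrative choice.

```python
import math

def rl_fractional_integral(f, t, alpha, n=2000):
    """Grünwald-Letnikov approximation to the Riemann-Liouville fractional
    integral of order alpha > 0 of f over [0, t], first-order accurate.
    Weights satisfy w_0 = 1, w_j = w_{j-1} * (j - 1 + alpha) / j."""
    h = t / n
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (j - 1 + alpha) / j)
    return h ** alpha * sum(w[j] * f(t - j * h) for j in range(n + 1))
```

A quick check against the closed form: for f(t) = t the order-alpha integral is t^(1+alpha) / Gamma(2 + alpha), and alpha = 1 reduces to the ordinary integral.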

  12. Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping

    NASA Astrophysics Data System (ADS)

    Rehak, M.; Skaloud, J.

    2015-08-01

    In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that together with a pre-calibrated camera enables accurate corridor mapping. The design of the platform is based on widely available model components to which we integrate an open-source autopilot, customized mass-market camera and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.

  13. Groundtruth approach to accurate quantitation of fluorescence microarrays

    SciTech Connect

    Mascio-Kegelmeyer, L; Tomascik-Cheeseman, L; Burnett, M S; van Hummelen, P; Wyrobek, A J

    2000-12-01

    To more accurately measure fluorescent signals from microarrays, we calibrated our acquisition and analysis systems by using groundtruth samples comprised of known quantities of red and green gene-specific DNA probes hybridized to cDNA targets. We imaged the slides with a full-field, white light CCD imager and analyzed them with our custom analysis software. Here we compare, for multiple genes, results obtained with and without preprocessing (alignment, color crosstalk compensation, dark field subtraction, and integration time). We also evaluate the accuracy of various image processing and analysis techniques (background subtraction, segmentation, quantitation and normalization). This methodology calibrates and validates our system for accurate quantitative measurement of microarrays. Specifically, we show that preprocessing the images produces results significantly closer to the known ground-truth for these samples.

  14. Robust ODF smoothing for accurate estimation of fiber orientation.

    PubMed

    Beladi, Somaieh; Pathirana, Pubudu N; Brotchie, Peter

    2010-01-01

    Q-ball imaging was presented as a model-free, linear and multimodal diffusion-sensitive approach to reconstruct the diffusion orientation distribution function (ODF) using diffusion-weighted MRI data. The ODFs are widely used to estimate fiber orientations. A smoothness constraint was proposed to achieve a balance between angular resolution and noise stability for ODF constructs, and different regularization methods have been proposed for this purpose. However, these methods are not robust and are quite sensitive to the global regularization parameter. Although numerical methods such as the L-curve test are used to define a globally appropriate regularization parameter, no single value can serve as a universal choice suitable for all regions of interest. This may result in oversmoothing and can end up neglecting an existing fiber population. In this paper, we propose to include an interpolation step prior to the spherical harmonic decomposition. This interpolation step, based on Delaunay triangulation, provides a reliable, robust and accurate smoothing approach. The method is easy to implement and does not require other numerical methods to define the required parameters. The fiber orientations estimated using this approach are also more accurate compared to other common approaches. PMID:21096202

  15. Numerical Simulation of Multicomponent Chromatography Using Spreadsheets.

    ERIC Educational Resources Information Center

    Frey, Douglas D.

    1990-01-01

    Illustrated is the use of spreadsheet programs for implementing finite difference numerical simulations of chromatography as an instructional tool in a separations course. Discussed are differential equations, discretization and integration, spreadsheet development, computer requirements, and typical simulation results. (CW)
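A single-component analogue of such a finite-difference exercise, written here in Python rather than a spreadsheet for concreteness (all parameters are illustrative): an explicit upwind/central discretization of the advection-dispersion equation for a concentration pulse moving through a column.

```python
def simulate_column(n=100, steps=150, v=1.0, D=0.002, L=1.0):
    """Explicit finite-difference model of a chromatography column:
    dc/dt = -v dc/dx + D d2c/dx2  (single component, no isotherm).
    Upwind differencing for convection, central for dispersion."""
    dx = L / n
    dt = 0.4 * min(dx / v, dx * dx / (2.0 * D))   # stable explicit step
    c = [0.0] * n
    for i in range(5):                            # injected pulse at the inlet
        c[i] = 1.0
    for _ in range(steps):
        cn = c[:]
        for i in range(1, n - 1):
            adv = -v * (c[i] - c[i - 1]) / dx                   # upwind convection
            dif = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx**2  # dispersion
            cn[i] = c[i] + dt * (adv + dif)
        cn[0], cn[-1] = 0.0, cn[-2]               # clean inlet, outflow boundary
        c = cn
    return c
```

Each row of a spreadsheet implementation plays the role of one time level of `c`; the pulse advects downstream and broadens, just as a solute band elutes through a column.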

  16. Numerical Propulsion System Simulation

    NASA Technical Reports Server (NTRS)

    Naiman, Cynthia

    2006-01-01

    The NASA Glenn Research Center, in partnership with the aerospace industry, other government agencies, and academia, is leading the effort to develop an advanced multidisciplinary analysis environment for aerospace propulsion systems called the Numerical Propulsion System Simulation (NPSS). NPSS is a framework for performing analysis of complex systems. The initial development of NPSS focused on the analysis and design of airbreathing aircraft engines, but the resulting NPSS framework may be applied to any system, for example: aerospace, rockets, hypersonics, power and propulsion, fuel cells, ground based power, and even human system modeling. NPSS provides increased flexibility for the user, which reduces the total development time and cost. It is currently being extended to support the NASA Aeronautics Research Mission Directorate Fundamental Aeronautics Program and the Advanced Virtual Engine Test Cell (AVETeC). NPSS focuses on the integration of multiple disciplines such as aerodynamics, structure, and heat transfer with numerical zooming on component codes. Zooming is the coupling of analyses at various levels of detail. NPSS development includes capabilities to facilitate collaborative engineering. The NPSS will provide improved tools to develop custom components and to use capability for zooming to higher fidelity codes, coupling to multidiscipline codes, transmitting secure data, and distributing simulations across different platforms. These powerful capabilities extend NPSS from a zero-dimensional simulation tool to a multi-fidelity, multidiscipline system-level simulation tool for the full development life cycle.

  17. Zdeněk Kopal: Numerical Analyst

    NASA Astrophysics Data System (ADS)

    Křížek, M.

    2015-07-01

    We give a brief overview of Zdeněk Kopal's life, his activities in the Czech Astronomical Society, his collaboration with Vladimír Vand, and his studies at Charles University, Cambridge, Harvard, and MIT. Then we survey Kopal's professional life. He published 26 monographs and 20 conference proceedings. We will concentrate on Kopal's extensive monograph Numerical Analysis (1955, 1961) that is widely accepted to be the first comprehensive textbook on numerical methods. It describes, for instance, methods for polynomial interpolation, numerical differentiation and integration, numerical solution of ordinary differential equations with initial or boundary conditions, and numerical solution of integral and integro-differential equations. Special emphasis will be laid on error analysis. Kopal himself applied numerical methods to celestial mechanics, in particular to the N-body problem. He also used Fourier analysis to investigate light curves of close binaries to discover their properties. This is, in fact, a problem from mathematical analysis.
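As a small example of the quadrature methods such a textbook covers, here is composite Simpson's rule (a generic statement of the method, not Kopal's own notation):

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b] with n subintervals (n even).
    Error is O(h^4); the rule is exact for cubic polynomials."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + i * h) for i in range(1, n, 2))   # odd nodes
    s += 2.0 * sum(f(a + i * h) for i in range(2, n, 2))   # even interior nodes
    return s * h / 3.0
```

For instance, `simpson(math.sin, 0, math.pi, 100)` approximates the exact value 2 to about eight digits, and the rule integrates x^3 on [0, 1] exactly even with a single pair of subintervals.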

  18. Accurate thermoelastic tensor and acoustic velocities of NaCl

    NASA Astrophysics Data System (ADS)

    Marcondes, Michel L.; Shukla, Gaurav; da Silveira, Pedro; Wentzcovitch, Renata M.

    2015-12-01

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  19. Efficient and accurate computation of generalized singular-value decompositions

    NASA Astrophysics Data System (ADS)

    Drmac, Zlatko

    2001-11-01

    We present a new family of algorithms for accurate floating-point computation of the singular value decomposition (SVD) of various forms of products (quotients) of two or three matrices. The main goal of such an algorithm is to compute all singular values to high relative accuracy; that is, we seek a guaranteed number of accurate digits even in the smallest singular values. We also want to achieve computational efficiency while maintaining high accuracy. To illustrate, consider the SVD of the product A = B^T S C. The new algorithm uses certain preconditioning (based on diagonal scalings and the LU and QR factorizations) to replace A with A' = (B')^T S' C', where A and A' have the same singular values and the matrix A' is computed explicitly. Theoretical analysis and numerical evidence show that, in the case of full-rank B, C, S, the accuracy of the new algorithm is unaffected by replacing B, S, C with, respectively, D_1 B, D_2 S D_3, D_4 C, where D_i, i = 1,...,4, are arbitrary diagonal matrices. As an application, the paper proposes new accurate algorithms for computing the (H,K)-SVD and (H_1,K)-SVD of S.

  20. Accurate thermoelastic tensor and acoustic velocities of NaCl

    SciTech Connect

    Marcondes, Michel L.; Shukla, Gaurav; Silveira, Pedro da; Wentzcovitch, Renata M.

    2015-12-15

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.