Science.gov

Sample records for accurate numerical integration

  1. Numerical Integration

    ERIC Educational Resources Information Center

    Sozio, Gerry

    2009-01-01

    Senior secondary students cover numerical integration techniques in their mathematics courses. In particular, students would be familiar with the "midpoint rule," the elementary "trapezoidal rule" and "Simpson's rule." This article derives these techniques by methods which secondary students may not be familiar with and an approach that…
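
    As a quick aside for readers who want the three rules side by side, here is a minimal Python sketch (ours, not the article's; the function names are illustrative), applied to the integral of sin x over [0, 1]:

```python
# Minimal sketch of the three rules named above (illustrative, not from the article).
import math

def midpoint(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def simpson(f, a, b, n):          # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

f = math.sin
exact = 1 - math.cos(1)           # integral of sin x on [0, 1]
for rule in (midpoint, trapezoid, simpson):
    print(rule.__name__, abs(rule(f, 0.0, 1.0, 16) - exact))
```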

  2. On numerically accurate finite element solutions in the fully plastic range

    NASA Technical Reports Server (NTRS)

    Nagtegaal, J. C.; Parks, D. M.; Rice, J. R.

    1974-01-01

    A general criterion for testing a mesh with topologically similar repeat units is given, and the analysis shows that only a few conventional element types and arrangements are, or can be made, suitable for computations in the fully plastic range. Further, a new variational principle, which can easily and simply be incorporated into an existing finite element program, is presented. This allows accurate computations to be made even for element designs that would not normally be suitable. Numerical results are given for three plane strain problems, namely pure bending of a beam, a thick-walled tube under pressure, and a deep double-edge-cracked tensile specimen. The effects of various element designs and of the new variational procedure are illustrated. Elastic-plastic computations at finite strain are discussed.

  3. Accurate Evaluation of Quantum Integrals

    NASA Technical Reports Server (NTRS)

    Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)

    1995-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrodinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues, the error growth in repeated Richardson's extrapolation, and show that the expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
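
    The extrapolation step itself is simple to illustrate. Below is a hedged sketch (not the paper's code) in which Richardson's extrapolation combines a second-order-accurate central difference evaluated with step sizes h and h/2 to cancel the leading error term:

```python
# Hedged sketch of the Richardson extrapolation idea (not the paper's code).
import math

def central_diff(f, x, h):        # second-order-accurate approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(A, h, p=2):        # combine A(h) and A(h/2) to cancel the O(h^p) term
    return (2**p * A(h / 2) - A(h)) / (2**p - 1)

f, x, h = math.exp, 0.3, 0.1
exact = math.exp(0.3)
print(abs(central_diff(f, x, h) - exact))                            # O(h^2) error
print(abs(richardson(lambda s: central_diff(f, x, s), h) - exact))   # O(h^4) error
```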

  4. Higher order accurate partial implicitization: An unconditionally stable fourth-order-accurate explicit numerical technique

    NASA Technical Reports Server (NTRS)

    Graves, R. A., Jr.

    1975-01-01

    The previously obtained second-order-accurate partial implicitization numerical technique used in the solution of fluid dynamic problems was modified with little complication to achieve fourth-order accuracy. The von Neumann stability analysis demonstrated the unconditional linear stability of the technique. The order of the truncation error was deduced from the Taylor series expansions of the linearized difference equations and was verified by numerical solutions to Burgers' equation. For comparison, results were also obtained for Burgers' equation using a second-order-accurate partial-implicitization scheme, as well as the fourth-order scheme of Kreiss.

  5. Time-Accurate Numerical Simulations of Synthetic Jet in Quiescent Air

    NASA Technical Reports Server (NTRS)

    Rupesh, K-A. B.; Ravi, B. R.; Mittal, R.; Raju, R.; Gallas, Q.; Cattafesta, L.

    2007-01-01

    The unsteady evolution of a three-dimensional synthetic jet into quiescent air is studied by time-accurate numerical simulations using a second-order accurate mixed explicit-implicit fractional step scheme on Cartesian grids. Both two-dimensional and three-dimensional calculations of the synthetic jet are carried out at a Reynolds number (based on the average velocity during the discharge phase of the cycle, V_j, and the jet width, d) of 750 and a Stokes number of 17.02. The results obtained are assessed against PIV and hotwire measurements provided for the NASA LaRC workshop on CFD validation of synthetic jets.

  6. Can numerical simulations accurately predict hydrodynamic instabilities in liquid films?

    NASA Astrophysics Data System (ADS)

    Denner, Fabian; Charogiannis, Alexandros; Pradas, Marc; van Wachem, Berend G. M.; Markides, Christos N.; Kalliadasis, Serafim

    2014-11-01

    Understanding the dynamics of hydrodynamic instabilities in liquid film flows is an active field of research in fluid dynamics and non-linear science in general. Numerical simulations offer a powerful tool to study hydrodynamic instabilities in film flows and can provide deep insights into the underlying physical phenomena. However, the direct comparison of numerical and experimental results is often hampered for several reasons. For instance, in numerical simulations the interface representation is problematic and the governing equations and boundary conditions may be oversimplified, whereas in experiments it is often difficult to extract accurate information on the fluid and its behavior, e.g., determining the fluid properties when the liquid contains particles for PIV measurements. In this contribution we present the latest results of our ongoing, extensive study on hydrodynamic instabilities in liquid film flows, which includes direct numerical simulations, low-dimensional modelling, and experiments. The major focus is on wave regimes, wave height and wave celerity as a function of Reynolds number and forcing frequency of a falling liquid film. Specific attention is paid to the differences between numerical and experimental results and the reasons for these differences. The authors are grateful to the EPSRC for their financial support (Grant EP/K008595/1).

  7. Cuba: Multidimensional numerical integration library

    NASA Astrophysics Data System (ADS)

    Hahn, Thomas

    2016-08-01

    The Cuba library offers four independent routines for multidimensional numerical integration: Vegas, Suave, Divonne, and Cuhre. The four algorithms work by very different methods; all of them can integrate vector integrands and have very similar Fortran, C/C++, and Mathematica interfaces. Their invocation is very similar, making it easy to cross-check results by substituting one method for another. For further safeguarding, the output is supplemented by a chi-square probability which quantifies the reliability of the error estimate.

  8. Accurate spectral numerical schemes for kinetic equations with energy diffusion

    NASA Astrophysics Data System (ADS)

    Wilkening, Jon; Cerfon, Antoine J.; Landreman, Matt

    2015-08-01

    We examine the merits of using a family of polynomials that are orthogonal with respect to a non-classical weight function to discretize the speed variable in continuum kinetic calculations. We consider a model one-dimensional partial differential equation describing energy diffusion in velocity space due to Fokker-Planck collisions. This relatively simple case allows us to compare the results of the projected dynamics with an expensive but highly accurate spectral transform approach. It also allows us to integrate in time exactly, and to focus entirely on the effectiveness of the discretization of the speed variable. We show that for a fixed number of modes or grid points, the non-classical polynomials can be many orders of magnitude more accurate than classical Hermite polynomials or finite-difference solvers for kinetic equations in plasma physics. We provide a detailed analysis of the difference in behavior and accuracy of the two families of polynomials. For the non-classical polynomials, if the initial condition is not smooth at the origin when interpreted as a three-dimensional radial function, the exact solution leaves the polynomial subspace for a time, but returns (up to roundoff accuracy) to the same point evolved to by the projected dynamics in that time. By contrast, using classical polynomials, the exact solution differs significantly from the projected dynamics solution when it returns to the subspace. We also explore the connection between eigenfunctions of the projected evolution operator and (non-normalizable) eigenfunctions of the full evolution operator, as well as the effect of truncating the computational domain.

  9. Numerical integration of diffraction integrals for a circular aperture

    NASA Astrophysics Data System (ADS)

    Cooper, I. J.; Sheppard, C. J. R.; Sharma, M.

    It is possible to obtain an accurate irradiance distribution for the diffracted wave field from an aperture by the numerical evaluation of the two-dimensional diffraction integrals using a product-integration method in which Simpson's 1/3 rule is applied twice. The calculations can be done quickly using a standard PC by utilizing matrix operations on complex numbers with Matlab. The diffracted wave field can be calculated from the plane of the aperture to the far field without introducing many of the standard approximations that are used to give Fresnel or Fraunhofer diffraction. The numerical method is used to compare the diffracted irradiance distribution from a circular aperture as predicted by Kirchhoff, Rayleigh-Sommerfeld 1 and Rayleigh-Sommerfeld 2 diffraction integrals.
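
    The "Simpson's 1/3 rule applied twice" amounts to a tensor product of one-dimensional Simpson weights. A small NumPy sketch of such a product rule on a rectangle (an illustration only, not the authors' Matlab code) follows:

```python
# Illustrative product-Simpson rule on a rectangle (not the authors' Matlab code).
import numpy as np

def simpson_weights(n, a, b):
    """1-D Simpson weights on n+1 equispaced points (n must be even)."""
    h = (b - a) / n
    w = np.ones(n + 1)
    w[1:-1:2] = 4.0
    w[2:-1:2] = 2.0
    return w * h / 3.0

def simpson2d(f, ax, bx, ay, by, nx=64, ny=64):
    x = np.linspace(ax, bx, nx + 1)
    y = np.linspace(ay, by, ny + 1)
    wx = simpson_weights(nx, ax, bx)
    wy = simpson_weights(ny, ay, by)
    X, Y = np.meshgrid(x, y, indexing="ij")
    return wx @ f(X, Y) @ wy      # product rule: sum_ij wx_i * wy_j * f(x_i, y_j)

# Complex-valued test integrand over the unit square; exact value is -4/pi^2.
val = simpson2d(lambda x, y: np.exp(1j * np.pi * (x + y)), 0.0, 1.0, 0.0, 1.0)
print(val, abs(val - (-4 / np.pi**2)))
```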

  10. Efficient numerical evaluation of Feynman integrals

    NASA Astrophysics Data System (ADS)

    Li, Zhao; Wang, Jian; Yan, Qi-Shu; Zhao, Xiaoran

    2016-03-01

    Feynman loop integrals are a key ingredient for the calculation of higher order radiation effects, and are crucial for reliable and accurate theoretical predictions. We improve the efficiency of numerical integration in sector decomposition by implementing a quasi-Monte Carlo method associated with the CUDA/GPU technique. For demonstration we present the results of several Feynman integrals up to two loops in both Euclidean and physical kinematic regions in comparison with those obtained from FIESTA3. It is shown that both planar and non-planar two-loop master integrals in the physical kinematic region can be evaluated accurately in less than half a minute, which makes the direct numerical approach viable for precise investigation of higher order effects in multi-loop processes, e.g. the next-to-leading order QCD effect in Higgs pair production via gluon fusion with a finite top quark mass. Supported by the Natural Science Foundation of China (11305179 11475180), Youth Innovation Promotion Association, CAS, IHEP Innovation (Y4545170Y2), State Key Lab for Electronics and Particle Detectors, Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y4KF061CJ1), Cluster of Excellence Precision Physics, Fundamental Interactions and Structure of Matter (PRISMA-EXC 1098)
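
    As a hedged, much-simplified illustration of why quasi-Monte Carlo points help (this is not the paper's GPU/sector-decomposition code), one can compare plain Monte Carlo with a scrambled Sobol sequence from SciPy on a smooth test integrand:

```python
# Toy comparison of quasi-Monte Carlo vs. plain Monte Carlo (not the paper's code).
import numpy as np
from scipy.stats import qmc

def integrand(x):                       # smooth test integrand on the unit cube
    return np.prod(np.sin(np.pi * x) * np.pi / 2.0, axis=1)   # exact integral = 1

dim, n = 4, 2**14
rng = np.random.default_rng(0)
mc = integrand(rng.random((n, dim))).mean()

sobol = qmc.Sobol(d=dim, scramble=True, seed=0)
qmc_est = integrand(sobol.random(n)).mean()

print("plain MC error :", abs(mc - 1.0))
print("quasi-MC error :", abs(qmc_est - 1.0))
```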

  11. Accurate simulation of transient landscape evolution by eliminating numerical diffusion: the TTLEM 1.0 model

    NASA Astrophysics Data System (ADS)

    Campforts, Benjamin; Schwanghart, Wolfgang; Govers, Gerard

    2017-01-01

    Landscape evolution models (LEMs) allow the study of earth surface responses to changing climatic and tectonic forcings. While much effort has been devoted to the development of LEMs that simulate a wide range of processes, the numerical accuracy of these models has received less attention. Most LEMs use first-order accurate numerical methods that suffer from substantial numerical diffusion. Numerical diffusion particularly affects the solution of the advection equation and thus the simulation of retreating landforms such as cliffs and river knickpoints. This has potential consequences for the integrated response of the simulated landscape. Here we test a higher-order flux-limiting finite volume method that is total variation diminishing (TVD-FVM) to solve the partial differential equations of river incision and tectonic displacement. We show that using the TVD-FVM to simulate river incision significantly influences the evolution of simulated landscapes and the spatial and temporal variability of catchment-wide erosion rates. Furthermore, a two-dimensional TVD-FVM accurately simulates the evolution of landscapes affected by lateral tectonic displacement, a process whose simulation was hitherto largely limited to LEMs with flexible spatial discretization. We implement the scheme in TTLEM (TopoToolbox Landscape Evolution Model), a spatially explicit, raster-based LEM for the study of fluvially eroding landscapes in TopoToolbox 2.
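
    The practical difference between a first-order scheme and a flux-limited TVD scheme is easy to demonstrate on the underlying advection problem. The following sketch (ours, not the TTLEM implementation) advects a sharp step with and without a minmod limiter:

```python
# Hedged sketch of the key idea (not the TTLEM code): linear advection of a sharp
# step with first-order upwind vs. a minmod flux-limited (TVD) scheme.
import numpy as np

def advect(u0, c, nsteps, limited=True):
    """Advance u_t + a u_x = 0 (a > 0, periodic grid); c = a*dt/dx is the CFL number."""
    u = u0.copy()
    for _ in range(nsteps):
        du = np.roll(u, -1) - u                    # u_{i+1} - u_i
        dum = u - np.roll(u, 1)                    # u_i - u_{i-1}
        safe = np.where(np.abs(du) > 1e-12, du, 1.0)
        r = np.where(np.abs(du) > 1e-12, dum / safe, 0.0)
        phi = np.maximum(0.0, np.minimum(1.0, r)) if limited else 0.0   # minmod limiter
        flux = u + 0.5 * (1.0 - c) * phi * du      # numerical flux / a at i+1/2
        u = u - c * (flux - np.roll(flux, 1))      # conservative update
    return u

n, c = 200, 0.5
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)     # knickpoint-like step profile
print("upwind max :", advect(u0, c, 200, limited=False).max())   # heavily smeared
print("TVD max    :", advect(u0, c, 200, limited=True).max())    # front stays sharp
```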

  12. Fast and Accurate Learning When Making Discrete Numerical Estimates

    PubMed Central

    Sanborn, Adam N.; Beierholm, Ulrik R.

    2016-01-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155

  13. Accurate numerical simulation of short fiber optical parametric amplifiers.

    PubMed

    Marhic, M E; Rieznik, A A; Kalogerakis, G; Braimiotis, C; Fragnito, H L; Kazovsky, L G

    2008-03-17

    We improve the accuracy of numerical simulations for short fiber optical parametric amplifiers (OPAs). Instead of using the usual coarse-step method, we adopt a model for birefringence and dispersion which uses fine-step variations of the parameters. We also improve the split-step Fourier method by exactly treating the nonlinear ellipse rotation terms. We find that results obtained this way for two-pump OPAs can be significantly different from those obtained by using the usual coarse-step fiber model, and/or neglecting ellipse rotation terms.
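
    For readers unfamiliar with the split-step Fourier method itself, here is a hedged scalar sketch (Strang splitting for the basic nonlinear Schrodinger equation, not the authors' vector two-pump OPA model with fine-step birefringence):

```python
# Minimal Strang split-step Fourier scheme for i u_t + 0.5 u_xx + |u|^2 u = 0
# (a generic scalar NLS sketch, not the authors' vector OPA model).
import numpy as np

n, L = 256, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
dt, nsteps = 0.01, 1000

u = 1.0 / np.cosh(x)                     # fundamental soliton initial condition
half_linear = np.exp(-0.5j * k**2 * dt / 2)

for _ in range(nsteps):
    u = np.fft.ifft(half_linear * np.fft.fft(u))   # half step of dispersion
    u = u * np.exp(1j * np.abs(u)**2 * dt)         # full step of nonlinearity
    u = np.fft.ifft(half_linear * np.fft.fft(u))   # half step of dispersion

print(np.max(np.abs(u)))                 # stays ~1: the soliton propagates undistorted
```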

  14. Accurate numerical solutions for elastic-plastic models. [LMFBR

    SciTech Connect

    Schreyer, H. L.; Kulak, R. F.; Kramer, J. M.

    1980-03-01

    The accuracy of two integration algorithms is studied for the common engineering condition of a von Mises, isotropic hardening model under plane stress. Errors in stress predictions for given total strain increments are expressed with contour plots of two parameters: an angle in the pi plane and the difference between the exact and computed yield-surface radii. The two methods are the tangent-predictor/radial-return approach and the elastic-predictor/radial-corrector algorithm originally developed by Mendelson. The accuracy of a combined tangent-predictor/radial-corrector algorithm is also investigated.

  15. Second-Order Accurate Projective Integrators for Multiscale Problems

    SciTech Connect

    Lee, S L; Gear, C W

    2005-05-27

    We introduce new projective versions of second-order accurate Runge-Kutta and Adams-Bashforth methods, and demonstrate their use as outer integrators in solving stiff differential systems. An important outcome is that the new outer integrators, when combined with an inner telescopic projective integrator, can result in fully explicit methods with adaptive outer step size selection and solution accuracy comparable to those obtained by implicit integrators. If the stiff differential equations are not directly available, our formulations and stability analysis are general enough to allow the combined outer-inner projective integrators to be applied to black-box legacy codes or perform a coarse-grained time integration of microscopic systems to evolve macroscopic behavior, for example.

  16. An accurate solution of elastodynamic problems by numerical local Green's functions

    NASA Astrophysics Data System (ADS)

    Loureiro, F. S.; Silva, J. E. A.; Mansur, W. J.

    2015-09-01

    Green's function based methodologies for elastodynamics in both time and frequency domains, which can be either numerical or analytical, appear in many branches of physics and engineering. Thus, the development of exact expressions for Green's functions is of great importance. Unfortunately, such expressions are known only for relatively few kinds of geometry, medium and boundary conditions. Owing to the difficulty of finding exact Green's functions, especially in the time domain, the present paper presents a solution of the transient elastodynamic equations by a time-stepping technique based on the Explicit Green's Approach method, written in terms of the Green's and step response functions, both computed numerically by the finite element method. The major feature is that these functions are computed separately by the central difference time integration scheme and locally, owing to the principle of causality. More precisely, Green's functions are computed only at t = Δt, adopting two time substeps, while step response functions are computed directly without substeps. The proposed time-stepping method proves to be quite accurate, with distinct numerical properties not present in the standard central difference scheme, as addressed in the numerical example.

  17. Numerical Integration: One Step at a Time

    ERIC Educational Resources Information Center

    Yang, Yajun; Gordon, Sheldon P.

    2016-01-01

    This article looks at the effects that adding a single extra subdivision has on the level of accuracy of some common numerical integration routines. Instead of automatically doubling the number of subdivisions for a numerical integration rule, we investigate what happens with a systematic method of judiciously selecting one extra subdivision for…

  18. Numerical integration using Wang Landau sampling

    NASA Astrophysics Data System (ADS)

    Li, Y. W.; Wüst, T.; Landau, D. P.; Lin, H. Q.

    2007-09-01

    We report a new application of Wang-Landau sampling to numerical integration that is straightforward to implement. It is applicable to a wide variety of integrals without restrictions and is readily generalized to higher-dimensional problems. The feasibility of the method results from a reinterpretation of the density of states in statistical physics as an appropriate measure for numerical integration. The properties of this algorithm as a new kind of Monte Carlo integration scheme are investigated with some simple integrals, and a potential application of the method is illustrated by the evaluation of integrals arising in perturbation theory of quantum many-body systems.

  19. An Integrative Theory of Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert; Lortie-Forgues, Hugues

    2014-01-01

    Understanding of numerical development is growing rapidly, but the volume and diversity of findings can make it difficult to perceive any coherence in the process. The integrative theory of numerical development posits that a coherent theme is present, however--progressive broadening of the set of numbers whose magnitudes can be accurately…

  20. Orientation of the earth by numerical integration

    NASA Technical Reports Server (NTRS)

    Fajemirokun, F. A.; Hotter, F. D.; Mueller, I. I.

    1976-01-01

    A fundamental problem is the determination of the orientation of the earth in the celestial coordinate system. Classical reductions for precession and nutation can be expected to be consistent with present-day observations; however, corrections to the classical theory are difficult to model because of the large number of coefficients involved. Consequently, a portion of the research has been devoted to numerically integrating the Eulerian equations of motion for a rigid earth and considering the six initial conditions of the integration as unknowns. Comparison of the three adjusted Eulerian angles from the numerical integration over 1000 days indicates agreement with classical theory to within 0.003 seconds of arc.

  1. Orbital Advection by Interpolation: A Fast and Accurate Numerical Scheme for Super-Fast MHD Flows

    SciTech Connect

    Johnson, B M; Guan, X; Gammie, F

    2008-04-11

    In numerical models of thin astrophysical disks that use an Eulerian scheme, gas orbits supersonically through a fixed grid. As a result the timestep is sharply limited by the Courant condition. Also, because the mean flow speed with respect to the grid varies with position, the truncation error varies systematically with position. For hydrodynamic (unmagnetized) disks an algorithm called FARGO has been developed that advects the gas along its mean orbit using a separate interpolation substep. This relaxes the constraint imposed by the Courant condition, which now depends only on the peculiar velocity of the gas, and results in a truncation error that is more nearly independent of position. This paper describes a FARGO-like algorithm suitable for evolving magnetized disks. Our method is second order accurate on a smooth flow and preserves ∇ · B = 0 to machine precision. The main restriction is that B must be discretized on a staggered mesh. We give a detailed description of an implementation of the code and demonstrate that it produces the expected results on linear and nonlinear problems. We also point out how the scheme might be generalized to make the integration of other supersonic/super-fast flows more efficient. Although our scheme reduces the variation of truncation error with position, it does not eliminate it. We show that the residual position dependence leads to characteristic radial variations in the density over long integrations.

  2. Fibonacci numerical integration on a sphere

    NASA Astrophysics Data System (ADS)

    Hannay, J. H.; Nye, J. F.

    2004-12-01

    For elementary numerical integration on a sphere, there is a distinct advantage in using an oblique array of integration sampling points based on a chosen pair of successive Fibonacci numbers. The pattern has a familiar appearance of intersecting spirals, avoiding the local anisotropy of a conventional latitude-longitude array. Besides the oblique Fibonacci array, the prescription we give is also based on a non-uniform scaling used for one-dimensional numerical integration, and indeed achieves the same order of accuracy as for one dimension: error ~ N^-6 for N points. This benefit of Fibonacci is not shared by domains of integration with boundaries (e.g., a square, for which it was originally proposed); with non-uniform scaling the error goes as N^-3, with or without Fibonacci. For experimental measurements over a sphere our prescription is realized by a non-uniform Fibonacci array of weighted sampling points.
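
    A hedged sketch of the widely used golden-ratio variant of a Fibonacci sphere array is given below; it is close in spirit to, but not identical with, the authors' prescription based on a pair of successive Fibonacci numbers and non-uniform scaling:

```python
# Hedged sketch: the common golden-ratio Fibonacci lattice on the unit sphere
# (close in spirit to, but not identical with, the authors' prescription).
import numpy as np

def fibonacci_sphere_average(f, n):
    """Average f(x, y, z) over n spiral points covering the unit sphere."""
    golden = (1 + np.sqrt(5)) / 2
    i = np.arange(n)
    z = 1 - (2 * i + 1) / n                 # uniform in z gives equal-area bands
    phi = 2 * np.pi * i / golden            # golden-angle step in longitude
    rho = np.sqrt(1 - z**2)
    return f(rho * np.cos(phi), rho * np.sin(phi), z).mean()

# The average of z^2 over the sphere is exactly 1/3.
est = fibonacci_sphere_average(lambda x, y, z: z**2, 1000)
print(est, abs(est - 1.0 / 3.0))
```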

  3. Automatic numerical integration methods for Feynman integrals through 3-loop

    NASA Astrophysics Data System (ADS)

    de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Olagbemi, O.

    2015-05-01

    We give numerical integration results for Feynman loop diagrams through 3-loop such as those covered by Laporta [1]. The methods are based on automatic adaptive integration, using iterated integration and extrapolation with programs from the QUADPACK package, or multivariate techniques from the ParInt package. The DQAGS algorithm from QUADPACK accommodates boundary singularities of fairly general types. ParInt is a package for multivariate integration layered over MPI (Message Passing Interface), which runs on clusters and incorporates advanced parallel/distributed techniques such as load balancing among processes that may be distributed over a network of nodes. Results are included for 3-loop self-energy diagrams without IR (infra-red) or UV (ultra-violet) singularities. A procedure based on iterated integration and extrapolation yields a novel method of numerical regularization for integrals with UV terms, and is applied to a set of 2-loop self-energy diagrams with UV singularities.
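
    For orientation, SciPy exposes the same QUADPACK machinery, and nesting calls gives a simple form of iterated integration; the sketch below is an illustration only and has nothing to do with the authors' loop integrands:

```python
# Hedged illustration: scipy.integrate.quad wraps QUADPACK's adaptive QAGS routine,
# which copes with integrable boundary singularities; nesting calls gives iterated
# 2-D integration (not the authors' Feynman-parameter integrands).
import numpy as np
from scipy import integrate

# Endpoint singularity at x = 0; the exact value is 16.
val, err = integrate.quad(lambda x: np.log(x)**2 / np.sqrt(x), 0.0, 1.0)
print(val, err)

# Iterated (nested) integration; the exact value is 4*(2*sqrt(2) - 2)/3.
inner = lambda y: integrate.quad(lambda x: 1.0 / np.sqrt(x + y), 0.0, 1.0)[0]
print(integrate.quad(inner, 0.0, 1.0))
```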

  4. Highly Parallel, High-Precision Numerical Integration

    SciTech Connect

    Bailey, David H.; Borwein, Jonathan M.

    2005-04-22

    This paper describes a scheme for rapidly computing numerical values of definite integrals to very high accuracy, ranging from ordinary machine precision to hundreds or thousands of digits, even for functions with singularities or infinite derivatives at endpoints. Such a scheme is of interest not only in computational physics and computational chemistry, but also in experimental mathematics, where high-precision numerical values of definite integrals can be used to numerically discover new identities. This paper discusses techniques for a parallel implementation of this scheme, then presents performance results for 1-D and 2-D test suites. Results are also given for a certain problem from mathematical physics, which features a difficult singularity, confirming a conjecture to 20,000 digit accuracy. The performance rate for this latter calculation on 1024 CPUs is 690 Gflop/s. We believe that this and one other 20,000-digit integral evaluation that we report are the highest-precision non-trivial numerical integrations performed to date.
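
    The same kind of arbitrary-precision quadrature, robust to endpoint singularities, is available in the mpmath Python package (its quad routine uses tanh-sinh rules by default); a small example, unrelated to the paper's parallel implementation:

```python
# Hedged illustration: arbitrary-precision quadrature with mpmath, whose quad()
# routine uses tanh-sinh rules by default and tolerates endpoint singularities.
from mpmath import mp, quad, log, sqrt

mp.dps = 50                                   # work with 50 significant digits
val = quad(lambda x: log(x) / sqrt(x), [0, 1])
print(val)                                    # exact value is -4
print(val + 4)                                # residual, near the working precision
```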

  5. On an efficient and accurate method to integrate restricted three-body orbits

    NASA Technical Reports Server (NTRS)

    Murison, Marc A.

    1989-01-01

    This work is a quantitative analysis of the advantages of the Bulirsch-Stoer (1966) method, demonstrating that this method is certainly worth considering when working with small N dynamical systems. The results, qualitatively suspected by many users, are quantitatively confirmed as follows: (1) the Bulirsch-Stoer extrapolation method is very fast and moderately accurate; (2) regularization of the equations of motion stabilizes the error behavior of the method and is, of course, essential during close approaches; and (3) when applicable, a manifold-correction algorithm reduces numerical errors to the limits of machine accuracy. In addition, for the specific case of the restricted three-body problem, even a small eccentricity for the orbit of the primaries drastically affects the accuracy of integrations, whether regularized or not; the circular restricted problem integrates much more accurately.

  6. Numerical integration routines for near-earth operations

    NASA Technical Reports Server (NTRS)

    Powers, W. F.

    1973-01-01

    Two general purpose numerical integration schemes were built into the NASA-JSC computer system. The state-of-the-art of numerical integration, the particular integrators built into the JSC computer system, and the use of the new integration packages are described. Background information about numerical integration and the variable-order, variable-stepsize Adams numerical integration technique is discussed. Results concerning the PEACE parameter optimization program are given along with recommendations and conclusions.

  7. Numerical integration for ab initio many-electron self energy calculations within the GW approximation

    SciTech Connect

    Liu, Fang; Lin, Lin; Vigil-Fowler, Derek; Lischner, Johannes; Kemper, Alexander F.; Sharifzadeh, Sahar; Jornada, Felipe H. da; Deslippe, Jack; Yang, Chao; and others

    2015-04-01

    We present a numerical integration scheme for evaluating the convolution of a Green's function with a screened Coulomb potential on the real axis in the GW approximation of the self energy. Our scheme takes the zero broadening limit in Green's function first, replaces the numerator of the integrand with a piecewise polynomial approximation, and performs principal value integration on subintervals analytically. We give the error bound of our numerical integration scheme and show by numerical examples that it is more reliable and accurate than the standard quadrature rules such as the composite trapezoidal rule. We also discuss the benefit of using different self energy expressions to perform the numerical convolution at different frequencies.

  8. Numerical Integral of Resistance Coefficients in Diffusion

    NASA Astrophysics Data System (ADS)

    Zhang, Q. S.

    2017-01-01

    The resistance coefficients in the screened Coulomb potential of stellar plasma are evaluated to high accuracy. I have analyzed the possible singularities in the integral of scattering angle. There are possible singularities in the case of an attractive potential. This may result in a problem for the numerical integral. In order to avoid the problem, I have used a proper scheme, e.g., splitting into many subintervals where the width of each subinterval is determined by the variation of the integrand, to calculate the scattering angle. The collision integrals are calculated by using Romberg's method, therefore the accuracy is high (i.e., ~10^-12). The results of collision integrals and their derivatives for -7 ≤ ψ ≤ 5 are listed. By using Hermite polynomial interpolation from those data, the collision integrals can be obtained with an accuracy of 10^-10. For very weakly coupled plasma (ψ ≥ 4.5), analytical fittings for collision integrals are available with an accuracy of 10^-11. I have compared the final results of resistance coefficients with other works and found that, for a repulsive potential, the results are basically the same as others'; for an attractive potential, the results in cases of intermediate and strong coupling show significant differences. The resulting resistance coefficients are tested in the solar model. Compared with the widely used models of Cox et al. and Thoul et al., the resistance coefficients in the screened Coulomb potential lead to a slightly weaker effect in the solar model, which is contrary to the expectation of attempts to solve the solar abundance problem.
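
    Romberg's method is simply the trapezoidal rule accelerated by repeated Richardson extrapolation; a compact, hedged sketch (not the paper's collision-integral code) is:

```python
# Compact Romberg integration (trapezoid rule + Richardson extrapolation);
# an illustration, not the paper's collision-integral code.
import math

def romberg(f, a, b, levels=12):
    R = [[0.5 * (b - a) * (f(a) + f(b))]]            # R[0][0]: plain trapezoid
    for k in range(1, levels):
        n = 2**k
        h = (b - a) / n
        trap = 0.5 * R[k - 1][0] + h * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
        row = [trap]
        for j in range(1, k + 1):                    # Richardson extrapolation across the row
            row.append(row[j - 1] + (row[j - 1] - R[k - 1][j - 1]) / (4**j - 1))
        R.append(row)
    return R[-1][-1]

# Error is close to machine precision for a smooth integrand.
print(abs(romberg(math.exp, 0.0, 1.0) - (math.e - 1)))
```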

  9. Recommendations for Achieving Accurate Numerical Simulation of Tip Clearance Flows in Transonic Compressor Rotors

    NASA Technical Reports Server (NTRS)

    VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.

    2000-01-01

    The tip clearance flows of transonic compressor rotors are important because they have a significant impact on rotor and stage performance. While numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data because these kinds of measurements are rare in the detail necessary to be useful in high-speed machines. In this paper we compare measured tip clearance flow details (e.g. trajectory and radial extent) with corresponding data obtained from a numerical simulation. Recommendations for achieving accurate numerical simulation of tip clearance flows are presented based on this comparison. Laser Doppler Velocimeter (LDV) measurements acquired in a transonic compressor rotor, NASA Rotor 35, are used. The tip clearance flow field of this transonic rotor was simulated using a Navier-Stokes turbomachinery solver that incorporates an advanced k-epsilon turbulence model derived for flows that are not in local equilibrium. Comparison between measured and simulated results indicates that simulation accuracy is primarily dependent upon the ability of the numerical code to resolve important details of a wall-bounded shear layer formed by the relative motion between the over-tip leakage flow and the shroud wall. A simple method is presented for determining the strength of this shear layer.

  10. Final Report for "Accurate Numerical Models of the Secondary Electron Yield from Grazing-incidence Collisions".

    SciTech Connect

    Seth A Veitzer

    2008-10-21

    Effects of stray electrons are a main factor limiting performance of many accelerators. Because heavy-ion fusion (HIF) accelerators will operate in regimes of higher current and with walls much closer to the beam than accelerators operating today, stray electrons might have a large, detrimental effect on the performance of an HIF accelerator. A primary source of stray electrons is electrons generated when halo ions strike the beam pipe walls. There is some research on these types of secondary electrons for the HIF community to draw upon, but this work is missing one crucial ingredient: the effect of grazing incidence. The overall goal of this project was to develop the numerical tools necessary to accurately model the effect of grazing incidence on the behavior of halo ions in a HIF accelerator, and further, to provide accurate models of heavy ion stopping powers with applications to ICF, WDM, and HEDP experiments.

  11. Towards numerically accurate many-body perturbation theory: Short-range correlation effects

    SciTech Connect

    Gulans, Andris

    2014-10-28

    The example of the uniform electron gas is used for showing that the short-range electron correlation is difficult to handle numerically, while it noticeably contributes to the self-energy. Nonetheless, in condensed-matter applications studied with advanced methods, such as the GW and random-phase approximations, it is common to neglect contributions due to high-momentum (large q) transfers. Then, the short-range correlation is poorly described, which leads to inaccurate correlation energies and quasiparticle spectra. To circumvent this problem, an accurate extrapolation scheme is proposed. It is based on an analytical derivation for the uniform electron gas presented in this paper, and it provides an explanation why accurate GW quasiparticle spectra are easy to obtain for some compounds and very difficult for others.

  12. Multigrid time-accurate integration of Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.

    1993-01-01

    Efficient acceleration techniques typical of explicit steady-state solvers are extended to time-accurate calculations. Stability restrictions are greatly reduced by means of a fully implicit time discretization. A four-stage Runge-Kutta scheme with local time stepping, residual smoothing, and multigridding is used instead of traditional time-expensive factorizations. Some applications to natural and forced unsteady viscous flows show the capability of the procedure.

  13. Efficient and accurate numerical methods for the Klein-Gordon-Schroedinger equations

    SciTech Connect

    Bao, Weizhu (E-mail: bao@math.nus.edu.sg); Yang, Li (E-mail: yangli@nus.edu.sg)

    2007-08-10

    In this paper, we present efficient, unconditionally stable and accurate numerical methods for approximations of the Klein-Gordon-Schroedinger (KGS) equations with/without damping terms. The key features of our methods are based on: (i) the application of a time-splitting spectral discretization for a Schroedinger-type equation in KGS; (ii) the utilization of Fourier pseudospectral discretization for spatial derivatives in the Klein-Gordon equation in KGS; (iii) the adoption of solving the ordinary differential equations (ODEs) in phase space analytically under appropriately chosen transmission conditions between different time intervals, or applying Crank-Nicolson/leap-frog for linear/nonlinear terms for time derivatives. The numerical methods are either explicit or implicit but can be solved explicitly, are unconditionally stable, and are of spectral accuracy in space and second-order accuracy in time. Moreover, they are time reversible and time transverse invariant when there is no damping term in KGS, conserve (or keep the same decay rate of) the wave energy as that in KGS without (or with a linear) damping term, keep the same dynamics of the mean value of the meson field, and give exact results for the plane-wave solution. Extensive numerical tests are presented to confirm the above properties of our numerical methods for KGS. Finally, the methods are applied to study solitary-wave collisions in one dimension (1D), as well as the dynamics of a 2D problem in KGS.

  14. Accurate Energy Transaction Allocation using Path Integration and Interpolation

    NASA Astrophysics Data System (ADS)

    Bhide, Mandar Mohan

    This thesis investigates many of the popular cost allocation methods which are based on actual usage of the transmission network. The Energy Transaction Allocation (ETA) method originally proposed by A. Fradi, S. Brigonne and B. Wollenberg, which gives the unique advantage of accurately allocating transmission network usage, is discussed subsequently. A modified calculation of ETA based on a simple interpolation technique is then proposed. The proposed methodology not only increases the accuracy of the calculation but also decreases the number of calculations to less than half of that required by the original ETA.

  15. Accurate Anharmonic IR Spectra from Integrated CC/DFT Approach

    NASA Astrophysics Data System (ADS)

    Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien; Carnimeo, Ivan; Puzzarini, Cristina

    2014-06-01

    The recent implementation of the computation of infrared (IR) intensities beyond the double harmonic approximation [1] paved the route to routine calculations of infrared spectra for a wide set of molecular systems. Contrary to common beliefs, second-order perturbation theory is able to deliver results of high accuracy provided that anharmonic resonances are properly managed [1,2]. It has already been shown for several small closed- and open-shell molecular systems that the differences between coupled cluster (CC) and DFT anharmonic wavenumbers are mainly due to the harmonic terms, paving the route to introducing effective yet accurate hybrid CC/DFT schemes [2]. In this work we show that hybrid CC/DFT models can also be applied to the IR intensities, leading to the simulation of highly accurate fully anharmonic IR spectra for medium-size molecules, including ones of atmospheric interest, showing in all cases good agreement with experiment even in the spectral ranges where non-fundamental transitions are predominant [3]. [1] J. Bloino and V. Barone, J. Chem. Phys. 136, 124108 (2012) [2] V. Barone, M. Biczysko, J. Bloino, Phys. Chem. Chem. Phys., 16, 1759-1787 (2014) [3] I. Carnimeo, C. Puzzarini, N. Tasinato, P. Stoppa, A. P. Charmet, M. Biczysko, C. Cappelli and V. Barone, J. Chem. Phys., 139, 074310 (2013)

  16. Numerical integration of asymptotic solutions of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Thurston, Gaylen A.

    1989-01-01

    Classical asymptotic analysis of ordinary differential equations derives approximate solutions that are numerically stable. However, the analysis also leads to tedious expansions in powers of the relevant parameter for a particular problem. The expansions are replaced with integrals that can be evaluated by numerical integration. The resulting numerical solutions retain the linear independence that is the main advantage of asymptotic solutions. Examples, including the Falkner-Skan equation from laminar boundary layer theory, illustrate the method of asymptotic analysis with numerical integration.

  17. A novel numerical technique to obtain an accurate solution to the Thomas-Fermi equation

    NASA Astrophysics Data System (ADS)

    Parand, Kourosh; Yousefi, Hossein; Delkhosh, Mehdi; Ghaderi, Amin

    2016-07-01

    In this paper, a new algorithm based on the fractional order of rational Euler functions (FRE) is introduced to study the Thomas-Fermi (TF) model, which is a nonlinear singular ordinary differential equation on a semi-infinite interval. Using the quasilinearization method (QLM), this problem is converted into a sequence of linear ordinary differential equations whose solutions are then obtained. For the first time, the rational Euler (RE) and the FRE have been constructed based on Euler polynomials. In addition, the equation is solved on a semi-infinite domain without truncating it to a finite domain, by taking the FRE as basis functions for the collocation method. This method reduces the solution of the problem to the solution of a system of algebraic equations. We demonstrate that the newly proposed algorithm is efficient for obtaining the values of y'(0), y(x) and y'(x). Comparison with some numerical and analytical solutions shows that the present solution is highly accurate.

  18. Numerical system utilising a Monte Carlo calculation method for accurate dose assessment in radiation accidents.

    PubMed

    Takahashi, F; Endo, A

    2007-01-01

    A system utilising radiation transport codes has been developed to derive accurate dose distributions in a human body for radiological accidents. A suitable model is quite essential for a numerical analysis. Therefore, two tools were developed to set up a 'problem-dependent' input file, defining a radiation source and an exposed person, to simulate the radiation transport in an accident with the Monte Carlo calculation codes MCNP and MCNPX. For both tools, the necessary resources are defined through a dialogue method on a commonly used personal computer. The tools prepare human body and source models described in the input file format of the employed Monte Carlo codes. The tools were validated for dose assessment in comparison with a past criticality accident and a hypothesized exposure.

  19. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    PubMed

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spin-weight -2 spherical-harmonic Y_ℓm waveform modes resolved by the NR code up to ℓ = 8. We compare our surrogate model to effective one body waveforms from 50 M_⊙ to 300 M_⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).

  20. PolyPole-1: An accurate numerical algorithm for intra-granular fission gas release

    NASA Astrophysics Data System (ADS)

    Pizzocri, D.; Rabiti, C.; Luzzi, L.; Barani, T.; Van Uffelen, P.; Pastore, G.

    2016-09-01

    The transport of fission gas from within the fuel grains to the grain boundaries (intra-granular fission gas release) is a fundamental controlling mechanism of fission gas release and gaseous swelling in nuclear fuel. Hence, accurate numerical solution of the corresponding mathematical problem needs to be included in fission gas behaviour models used in fuel performance codes. Under the assumption of equilibrium between trapping and resolution, the process can be described mathematically by a single diffusion equation for the gas atom concentration in a grain. In this paper, we propose a new numerical algorithm (PolyPole-1) to efficiently solve the fission gas diffusion equation in time-varying conditions. The PolyPole-1 algorithm is based on the analytic modal solution of the diffusion equation for constant conditions, combined with polynomial corrective terms that embody the information on the deviation from constant conditions. The new algorithm is verified by comparing the results to a finite difference solution over a large number of randomly generated operation histories. Furthermore, comparison to state-of-the-art algorithms used in fuel performance codes demonstrates that the accuracy of PolyPole-1 is superior to other algorithms, with similar computational effort. Finally, the concept of PolyPole-1 may be extended to the solution of the general problem of intra-granular fission gas diffusion during non-equilibrium trapping and resolution, which will be the subject of future work.

  1. Keeping the edge: an accurate numerical method to solve the stream power law

    NASA Astrophysics Data System (ADS)

    Campforts, B.; Govers, G.

    2015-12-01

    Bedrock rivers set the base level of surrounding hill slopes and mediate the dynamic interplay between mountain building and denudation. The propensity of rivers to preserve pulses of increased tectonic uplift also allows long-term uplift histories to be reconstructed from longitudinal river profiles. An accurate reconstruction of river profile development at different timescales is therefore essential. Long-term river development is typically modeled by means of the stream power law. Under specific conditions this equation can be solved analytically, but numerical Finite Difference Methods (FDMs) are most frequently used. Nonetheless, FDMs suffer from numerical smearing, especially at knickpoint zones which are key to understanding transient landscapes. Here, we solve the stream power law by means of a Finite Volume Method (FVM) which is Total Variation Diminishing (TVD). TVD methods are designed to capture sharp discontinuities, making them very suitable for modelling river incision. In contrast to FDMs, the TVD_FVM is well capable of preserving knickpoints, as illustrated for the fast-propagating Niagara Falls. Moreover, we show that the TVD_FVM performs much better when reconstructing uplift at timescales exceeding 100 Myr, using Eastern Australia as an example. Finally, uncertainty associated with parameter calibration is dramatically reduced when the TVD_FVM is applied. Therefore, the use of a TVD_FVM to understand long term landscape evolution is an important addition to the toolbox at the disposal of geomorphologists.

  2. Numerical Computation of a Continuous-thrust State Transition Matrix Incorporating Accurate Hardware and Ephemeris Models

    NASA Technical Reports Server (NTRS)

    Ellison, Donald; Conway, Bruce; Englander, Jacob

    2015-01-01

    A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed form expressions for these derivatives may be provided. If a high fidelity dynamics model is used, one that might include perturbing forces such as the gravitational effect from multiple third bodies and solar radiation pressure, then these STMs must be computed numerically. We present a method for doing so that incorporates a power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time of flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.

  3. Towards an accurate understanding of UHMWPE visco-dynamic behaviour for numerical modelling of implants.

    PubMed

    Quinci, Federico; Dressler, Matthew; Strickland, Anthony M; Limbert, Georges

    2014-04-01

    Considerable progress has been made in understanding implant wear and developing numerical models to predict wear for new orthopaedic devices. However, any model of wear could be improved through a more accurate representation of the biomaterial mechanics, including time-varying dynamic and inelastic behaviour such as viscosity and plastic deformation. In particular, most computational models of wear of UHMWPE implement a time-invariant version of Archard's law that links the volume of worn material to the contact pressure between the metal implant and the polymeric tibial insert. During in-vivo conditions, however, the contact area is a time-varying quantity and is therefore dependent upon the dynamic deformation response of the material. From this observation one can conclude that creep deformations of UHMWPE may be very important to consider when conducting computational wear analyses, in stark contrast to what can be found in the literature. In this study, different numerical modelling techniques are compared with experimental creep testing on a unicondylar knee replacement system in a physiologically representative context. Linear elastic, plastic and time-varying visco-dynamic models are benchmarked using literature data to predict contact deformations, pressures and areas. The aim of this study is to elucidate the contributions of viscoelastic and plastic effects on these surface quantities. It is concluded that creep deformations have a significant effect on the contact pressure measured (experiment) and calculated (computational models) at the surface of the UHMWPE unicondylar insert. The use of a purely elastoplastic constitutive model for UHMWPE leads to compressive deformations of the insert which are much smaller than those predicted by a creep-capturing viscoelastic model (and those measured experimentally). This again shows the importance of including creep behaviour in a constitutive model in order to predict the right level of surface deformation.

  4. Efficient numerical integration of neutrino oscillations in matter

    NASA Astrophysics Data System (ADS)

    Casas, F.; D'Olivo, J. C.; Oteo, J. A.

    2016-12-01

    A special purpose solver, based on the Magnus expansion and well suited for the integration of the linear three-neutrino oscillation equations in matter, is proposed. The computations are sped up by up to two orders of magnitude with respect to a general numerical integrator, a fact that could smooth the way for massive numerical integration concomitant with experimental data analyses. Detailed illustrations of the numerical procedure and computer time costs are provided.
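
    The simplest Magnus-type step is the exponential midpoint rule, which already preserves unitarity exactly; the sketch below is a generic toy for a Schrodinger-like system and not the authors' dedicated three-neutrino solver:

```python
# Hedged sketch: the lowest-order Magnus step (exponential midpoint rule) for
# i dpsi/dt = H(t) psi; a generic toy, not the authors' three-neutrino solver.
import numpy as np
from scipy.linalg import expm

def magnus_midpoint(H, psi0, t0, t1, nsteps):
    psi, t = psi0.astype(complex), t0
    dt = (t1 - t0) / nsteps
    for _ in range(nsteps):
        Hm = H(t + 0.5 * dt)                 # Hamiltonian evaluated at the midpoint
        psi = expm(-1j * dt * Hm) @ psi      # unitary step: the norm is preserved exactly
        t += dt
    return psi

# Toy 2x2 Hamiltonian with a slowly varying off-diagonal coupling (hypothetical numbers).
H = lambda t: np.array([[1.0, 0.1 * np.sin(t)],
                        [0.1 * np.sin(t), -1.0]])
psi = magnus_midpoint(H, np.array([1.0, 0.0]), 0.0, 20.0, 400)
print(np.abs(psi)**2, np.sum(np.abs(psi)**2))   # probabilities sum to 1
```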

  5. An accurate, robust, and easy-to-implement method for integration over arbitrary polyhedra: Application to embedded interface methods

    NASA Astrophysics Data System (ADS)

    Sudhakar, Y.; Moitinho de Almeida, J. P.; Wall, Wolfgang A.

    2014-09-01

    We present an accurate method for the numerical integration of polynomials over arbitrary polyhedra. Using the divergence theorem, the method transforms the domain integral into integrals evaluated over the facets of the polyhedra. The necessity of performing symbolic computation during such a transformation is eliminated by using a one-dimensional Gauss quadrature rule. The facet integrals are computed with the help of quadratures available for triangles and quadrilaterals. Numerical examples, in which the proposed method is used to integrate the weak form of the Navier-Stokes equations in an embedded interface method (EIM), are presented. The results show that our method is as accurate and generalized as the most widely used volume decomposition based methods. Moreover, since the method involves neither volume decomposition nor symbolic computation, it is much easier to implement. Also, the present method is more efficient than other available integration methods based on the divergence theorem. Efficiency of the method is also compared with the volume decomposition based methods and moment fitting methods. To our knowledge, this is the first article that compares both accuracy and computational efficiency of methods relying on volume decomposition and those based on the divergence theorem.
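
    The two-dimensional analogue of the idea is easy to sketch: by Green's theorem, the integral of a monomial over a polygon reduces to edge integrals, each evaluated with a one-dimensional Gauss rule (our illustration, not the authors' polyhedral implementation):

```python
# 2-D analogue of the divergence-theorem approach (our illustration): by Green's theorem,
#   integral over a polygon of x^p y^q dA = contour integral of x^(p+1) y^q / (p+1) dy,
# with each edge integral evaluated by a 1-D Gauss-Legendre rule.
import numpy as np

def polygon_monomial_integral(verts, p, q, ngauss=8):
    nodes, weights = np.polynomial.legendre.leggauss(ngauss)   # rule on [-1, 1]
    total = 0.0
    for (x0, y0), (x1, y1) in zip(verts, np.roll(verts, -1, axis=0)):
        # parameterize the edge by s in [-1, 1]
        xs = 0.5 * (x0 + x1) + 0.5 * (x1 - x0) * nodes
        ys = 0.5 * (y0 + y1) + 0.5 * (y1 - y0) * nodes
        dy = 0.5 * (y1 - y0)                                    # dy/ds, constant on the edge
        total += np.sum(weights * xs**(p + 1) * ys**q / (p + 1) * dy)
    return total

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # counter-clockwise
print(polygon_monomial_integral(square, 0, 0))   # area = 1
print(polygon_monomial_integral(square, 2, 1))   # integral of x^2 y = 1/6
```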

  6. Advanced numerical techniques for accurate unsteady simulations of a wingtip vortex

    NASA Astrophysics Data System (ADS)

    Ahmad, Shakeel

    A numerical technique is developed to simulate the vortices associated with stationary and flapping wings. The Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations are used over an unstructured grid. The present work assesses the locations of the origins of vortex generation, models those locations and develops a systematic mesh refinement strategy to simulate vortices more accurately using the URANS model. The vortex center plays a key role in the analysis of the simulation data. A novel approach to locating a vortex center is also developed referred to as the Max-Max criterion. Experimental validation of the simulated vortex from a stationary NACA0012 wing is achieved. The tangential velocity along the core of the vortex falls within five percent of the experimental data in the case of the stationary NACA0012 simulation. The wing surface pressure coefficient also matches with the experimental data. The refinement techniques are then focused on unsteady simulations of pitching and dual-mode wing flapping. Tip vortex strength, location, and wing surface pressure are analyzed. Links to vortex behavior and wing motion are inferred. Key words: vortex, tangential velocity, Cp, vortical flow, unsteady vortices, URANS, Max-Max, Vortex center

  7. Error Estimates for Numerical Integration Rules

    ERIC Educational Resources Information Center

    Mercer, Peter R.

    2005-01-01

    The starting point for this discussion of error estimates is the fact that integrals that arise in Fourier series have properties that can be used to get improved bounds. This idea is extended to more general situations.

  8. Numerical quadrature methods for integrals of singular periodic functions and their application to singular and weakly singular integral equations

    NASA Technical Reports Server (NTRS)

    Sidi, A.; Israeli, M.

    1986-01-01

    High accuracy numerical quadrature methods for integrals of singular periodic functions are proposed. These methods are based on the appropriate Euler-Maclaurin expansions of trapezoidal rule approximations and their extrapolations. They are used to obtain accurate quadrature methods for the solution of singular and weakly singular Fredholm integral equations. Such periodic equations are used in the solution of planar elliptic boundary value problems, elasticity, potential theory, conformal mapping, boundary element methods, free surface flows, etc. The use of the quadrature methods is demonstrated with numerical examples.
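
    The starting observation, that the plain trapezoidal rule is spectrally accurate for smooth periodic integrands because all Euler-Maclaurin boundary corrections cancel, is easy to demonstrate; the sketch below is unrelated to the singular quadratures developed in the paper:

```python
# The trapezoidal rule converges spectrally for smooth periodic integrands,
# since every Euler-Maclaurin boundary correction cancels; a simple demonstration.
import numpy as np
from scipy.special import i0

exact = 2 * np.pi * i0(1.0)           # integral of exp(cos t) over one period
for n in (4, 8, 16, 32):
    t = 2 * np.pi * np.arange(n) / n  # periodic trapezoid = equal-weight point sum
    approx = (2 * np.pi / n) * np.sum(np.exp(np.cos(t)))
    print(n, abs(approx - exact))
```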

  9. Quantum Calisthenics: Gaussians, The Path Integral and Guided Numerical Approximations

    SciTech Connect

    Weinstein, Marvin; /SLAC

    2009-02-12

    It is apparent to anyone who thinks about it that, to a large degree, the basic concepts of Newtonian physics are quite intuitive, but quantum mechanics is not. My purpose in this talk is to introduce you to a new, much more intuitive way to understand how quantum mechanics works. I begin with an incredibly easy way to derive the time evolution of a Gaussian wave-packet for the case of free and harmonic motion without any need to know the eigenstates of the Hamiltonian. This discussion is completely analytic and I will later use it to relate the solution for the behavior of the Gaussian packet to the Feynman path-integral and stationary phase approximation. It will be clear that using the information about the evolution of the Gaussian in this way goes far beyond what the stationary phase approximation tells us. Next, I introduce the concept of the bucket brigade approach to dealing with problems that cannot be handled totally analytically. This approach combines the intuition obtained in the initial discussion, as well as the intuition obtained from the path-integral, with simple numerical tools. My goal is to show that, for any specific process, there is a simple Hilbert space interpretation of the stationary phase approximation. I will then argue that, from the point of view of numerical approximations, the trajectory obtained from my generalization of the stationary phase approximation specifies that subspace of the full Hilbert space that is needed to compute the time evolution of the particular state under the full Hamiltonian. The prescription I will give is totally non-perturbative and we will see, by the grace of Maple animations computed for the case of the anharmonic oscillator Hamiltonian, that this approach allows surprisingly accurate computations to be performed with very little work. I think of this approach to the path-integral as defining what I call a guided numerical approximation scheme. After the discussion of the anharmonic oscillator I will

  10. A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction

    NASA Technical Reports Server (NTRS)

    Bockelie, Michael J.; Eiseman, Peter R.

    1990-01-01

    A time accurate, general purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction correction method that is simple to implement and ensures the time accuracy of the grid. Time accurate solutions of the 2-D Euler equations for an unsteady shock vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.

  11. Accurate numerical forward model for optimal retracking of SIRAL2 SAR echoes over open ocean

    NASA Astrophysics Data System (ADS)

    Phalippou, L.; Demeestere, F.

    2011-12-01

    The SAR mode of SIRAL-2 on board CryoSat-2 has been designed to measure primarily sea ice and continental ice (Wingham et al. 2005). In 2005, K. Raney (KR, 2005) pointed out the improvements brought by the SAR altimeter for the open ocean. KR's results were mostly based on 'rule of thumb' considerations of speckle noise reduction due to the higher PRF and to speckle decorrelation after SAR processing. In 2007, Phalippou and Enjolras (PE, 2007) provided the theoretical background for optimal retracking of SAR echoes over the ocean, with a focus on forward modelling of the power waveforms. The accuracies of the geophysical parameters (range, significant wave height, and backscattering coefficient) retrieved from SAR altimeter data were derived, accounting for accurate modelling of the SAR echo shape and speckle noise. The step forward to optimal retracking using a numerical forward model (NFM) was also pointed out. An NFM of the power waveform avoids analytical approximations, which helps minimise geophysics-dependent biases in the retrieval. NFMs have been used for many years, in operational meteorology in particular, for retrieving temperature and humidity profiles from IR and microwave radiometers, as the radiative transfer function is complex (Eyre, 1989). So far this technique has not been used in conventional ocean altimetry, as analytical models (e.g. Brown's model) were found to give sufficient accuracy. However, although an NFM seems desirable even for conventional nadir altimetry, it becomes inevitable if one wishes to process SAR altimeter data, as the transfer function is too complex to be approximated by a simple analytical function. This was clearly demonstrated in PE 2007. The paper describes the background to SAR data retracking over the open ocean. Since PE 2007, improvements have been brought to the forward model, and it is shown that the altimeter on-ground and in-flight characterisation (e.g. antenna pattern, range impulse response, azimuth impulse response

  12. Physical and Numerical Model Studies of Cross-flow Turbines Towards Accurate Parameterization in Array Simulations

    NASA Astrophysics Data System (ADS)

    Wosnik, M.; Bachant, P.

    2014-12-01

    Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today--a common model is the actuator disk concept--are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal axis devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier--Stokes simulations, to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of

  13. Numerical integration of ordinary differential equations of various orders

    NASA Technical Reports Server (NTRS)

    Gear, C. W.

    1969-01-01

    Report describes techniques for the numerical integration of differential equations of various orders. Modified multistep predictor-corrector methods for general initial-value problems are discussed and new methods are introduced.

  14. On the numeric integration of dynamic attitude equations

    NASA Technical Reports Server (NTRS)

    Crouch, P. E.; Yan, Y.; Grossman, Robert

    1992-01-01

    We describe new types of numerical integration algorithms developed by the authors. The main aim of the algorithms is to numerically integrate differential equations which evolve on geometric objects, such as the rotation group. The algorithms provide iterates which lie on the prescribed geometric object, either exactly, or to some prescribed accuracy, independent of the order of the algorithm. This paper describes applications of these algorithms to the evolution of the attitude of a rigid body.
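    The key property described above, iterates that remain on the rotation group, can be illustrated with a minimal Lie-Euler step that advances the attitude matrix through the matrix exponential of a skew-symmetric matrix. This is a generic sketch of the idea, not the authors' higher-order algorithms, and the angular-velocity value is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.linalg import expm

def hat(w):
    """Map an angular-velocity vector to its skew-symmetric matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def lie_euler_step(R, omega, dt):
    """One Lie-Euler step: R_new = R @ expm(dt * hat(omega)).

    Because expm of a skew-symmetric matrix is orthogonal, each iterate
    stays on the rotation group up to round-off, independent of dt."""
    return R @ expm(dt * hat(omega))

# Propagate a constant body angular velocity and check orthogonality.
R = np.eye(3)
omega = np.array([0.1, 0.2, -0.05])   # rad/s, illustrative values
for _ in range(1000):
    R = lie_euler_step(R, omega, 0.01)
print(np.linalg.norm(R.T @ R - np.eye(3)))   # ~1e-15: still a rotation matrix
```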

  15. Approximate and exact numerical integration of the gas dynamic equations

    NASA Technical Reports Server (NTRS)

    Lewis, T. S.; Sirovich, L.

    1979-01-01

    A highly accurate approximation and a rapidly convergent numerical procedure are developed for two dimensional steady supersonic flow over an airfoil. Examples are given for a symmetric airfoil over a range of Mach numbers. Several interesting features are found in the calculation of the tail shock and the flow behind the airfoil.

  16. Third-order-accurate numerical methods for efficient, large time-step solutions of mixed linear and nonlinear problems

    SciTech Connect

    Cobb, J.W.

    1995-02-01

    There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
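    For concreteness, a classical third-order Runge-Kutta step of the family discussed above can be written as follows; this is Kutta's textbook third-order rule, not one of the five specific schemes derived in the report.

```python
import numpy as np

def rk3_step(f, t, y, h):
    """One step of Kutta's classical third-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + h, y - h * k1 + 2.0 * h * k2)
    return y + (h / 6.0) * (k1 + 4.0 * k2 + k3)

# Check third-order convergence on y' = -y, y(0) = 1, integrated to t = 1.
for h in (0.1, 0.05, 0.025):
    t, y = 0.0, np.array([1.0])
    while t < 1.0 - 1e-12:
        y = rk3_step(lambda t, y: -y, t, y, h)
        t += h
    print(h, abs(y[0] - np.exp(-1.0)))   # error shrinks roughly 8x per halving of h
```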

  17. Canonical algorithms for numerical integration of charged particle motion equations

    NASA Astrophysics Data System (ADS)

    Efimov, I. N.; Morozov, E. A.; Morozova, A. R.

    2017-02-01

    A technique for numerically integrating the equation of charged particle motion in a magnetic field is considered. It is based on the canonical transformations of the phase space in Hamiltonian mechanics. The canonical transformations make the integration process stable against counting error accumulation. The integration algorithms contain a minimum possible amount of arithmetics and can be used to design accelerators and devices of electron and ion optics.
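    The abstract does not spell out the canonical transformations used; the sketch below instead shows a widely used structure-preserving alternative, the Boris rotation push, only to illustrate how an integrator for charged-particle motion in a magnetic field can avoid secular error accumulation. All numerical values are illustrative.

```python
import numpy as np

def boris_push(x, v, B, q_over_m, dt):
    """One Boris step for a charged particle in a static magnetic field.

    Illustrative only: the cited paper builds its algorithms from canonical
    transformations, whereas the Boris rotation shown here is a common
    energy-stable scheme for dx/dt = v, dv/dt = (q/m) v x B."""
    t = 0.5 * q_over_m * dt * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v + np.cross(v, t)
    v_new = v + np.cross(v_prime, s)
    return x + dt * v_new, v_new

# Gyration in a uniform field B = (0, 0, 1): the speed stays constant.
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
for _ in range(10000):
    x, v = boris_push(x, v, np.array([0.0, 0.0, 1.0]), 1.0, 0.05)
print(np.dot(v, v))   # remains 1.0 to round-off, i.e. no energy drift
```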

  18. Numerical solution of optimal control problems using multiple-interval integral Gegenbauer pseudospectral methods

    NASA Astrophysics Data System (ADS)

    Tang, Xiaojun

    2016-04-01

    The main purpose of this work is to provide multiple-interval integral Gegenbauer pseudospectral methods for solving optimal control problems. The latest developed single-interval integral Gauss/(flipped Radau) pseudospectral methods can be viewed as special cases of the proposed methods. We present an exact and efficient approach to compute the mesh pseudospectral integration matrices for the Gegenbauer-Gauss and flipped Gegenbauer-Gauss-Radau points. Numerical results on benchmark optimal control problems confirm the ability of the proposed methods to obtain highly accurate solutions.

  19. AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)

    EPA Science Inventory

    Abstract

    A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...

  20. Accurate path integration in continuous attractor network models of grid cells.

    PubMed

    Burak, Yoram; Fiete, Ila R

    2009-02-01

    Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of approximately 10-100 meters and approximately 1-10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other.

  1. An Accurate Heading Solution using MEMS-based Gyroscope and Magnetometer Integrated System (Preliminary Results)

    NASA Astrophysics Data System (ADS)

    El-Diasty, M.

    2014-11-01

    An accurate heading solution is required for many applications, and it can be achieved by high-grade (high-cost) gyroscopes (gyros), which may not be suitable for such applications. Micro-Electro-Mechanical Systems (MEMS) technology is emerging and has the potential of providing a heading solution using a low-cost MEMS-based gyro. However, a MEMS-gyro-based heading solution drifts significantly over time. The heading can also be estimated using a MEMS-based magnetometer by measuring the horizontal components of the Earth's magnetic field. The MEMS-magnetometer-based heading solution does not drift over time, but it is contaminated by a high level of noise and may be disturbed by the presence of magnetic field sources such as metal objects. This paper proposes an accurate heading estimation procedure based on the integration of MEMS-based gyro and magnetometer measurements, in which gyro angular rates of change are estimated from the magnetometer measurements and then combined with the measured gyro angular rates of change by a robust filter to estimate the heading. The proposed integration solution is implemented using two data sets; one was collected in static mode without magnetic disturbances and the second in kinematic mode with magnetic disturbances. The results showed that the proposed integrated heading solution provides an accurate, smoothed and undisturbed solution when compared with the magnetometer-based and gyro-based heading solutions.
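    A heavily simplified illustration of the gyro/magnetometer fusion idea (not the robust filter proposed in the paper) is a complementary filter: the gyro rate is integrated for short-term smoothness while a small correction toward the magnetometer heading removes the long-term drift. The gain alpha and the synthetic inputs below are assumptions made only for this sketch.

```python
import numpy as np

def wrap_angle(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def complementary_heading(gyro_rate, mag_heading, dt, alpha=0.98, psi0=0.0):
    """Fuse gyro angular rate (rad/s) with magnetometer heading (rad).

    Minimal sketch, not the robust filter of the cited paper: alpha close
    to 1 trusts the integrated gyro over short times, while the
    (1 - alpha) magnetometer term removes long-term drift."""
    psi = psi0
    out = []
    for w, m in zip(gyro_rate, mag_heading):
        predicted = psi + w * dt                   # gyro propagation
        innovation = wrap_angle(m - predicted)     # magnetometer correction
        psi = wrap_angle(predicted + (1.0 - alpha) * innovation)
        out.append(psi)
    return np.array(out)

# Synthetic example: constant 0.1 rad/s turn sampled at 100 Hz, noisy magnetometer.
rng = np.random.default_rng(0)
truth = 0.1 * np.arange(500) * 0.01
psi = complementary_heading(np.full(500, 0.1),
                            truth + 0.05 * rng.standard_normal(500), dt=0.01)
```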

  2. Extremely Fast Numerical Integration of Ocean Surface Wave Dynamics

    DTIC Science & Technology

    2007-09-30

    The Kadomtsev-Petviashvili (KP) equation is a natural two-space-dimension extension of the KdV equation, and its periodic solutions include directional spreading in the wave field. The report describes the role of the nonlinear pre-processor in the new approach for obtaining numerical solutions to nonlinear wave equations, together with the analytical study and extremely fast numerical integration of the extended nonlinear Schroedinger equation for fully three-dimensional wave motion.

  3. Development of accurate waveform models for eccentric compact binaries with numerical relativity simulations

    NASA Astrophysics Data System (ADS)

    Huerta, Eliu; Agarwal, Bhanu; Chua, Alvin; George, Daniel; Haas, Roland; Hinder, Ian; Kumar, Prayush; Moore, Christopher; Pfeiffer, Harald

    2017-01-01

    We recently constructed an inspiral-merger-ringdown (IMR) waveform model to describe the dynamical evolution of compact binaries on eccentric orbits, and used this model to constrain the eccentricity with which the gravitational wave transients currently detected by LIGO could be effectively recovered with banks of quasi-circular templates. We now present the second generation of this model, which is calibrated using a large catalog of eccentric numerical relativity simulations. We discuss the new features of this model, and show that its enhanced accuracy makes it a powerful tool to detect eccentric signals with LIGO.

  4. Numerical Methodology for Coupled Time-Accurate Simulations of Primary and Secondary Flowpaths in Gas Turbines

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.

    2006-01-01

    Detailed information of the flow-fields in the secondary flowpaths and their interaction with the primary flows in gas turbine engines is necessary for successful designs with optimized secondary flow streams. Present work is focused on the development of a simulation methodology for coupled time-accurate solutions of the two flowpaths. The secondary flowstream is treated using SCISEAL, an unstructured adaptive Cartesian grid code developed for secondary flows and seals, while the mainpath flow is solved using TURBO, a density based code with capability of resolving rotor-stator interaction in multi-stage machines. An interface is being tested that links the two codes at the rim seal to allow data exchange between the two codes for parallel, coupled execution. A description of the coupling methodology and the current status of the interface development is presented. Representative steady-state solutions of the secondary flow in the UTRC HP Rig disc cavity are also presented.

  5. Efficient and Accurate Explicit Integration Algorithms with Application to Viscoplastic Models

    NASA Technical Reports Server (NTRS)

    Arya, Vinod K.

    1994-01-01

    Several explicit integration algorithms with self-adaptive time integration strategies are developed and investigated for efficiency and accuracy. These algorithms involve the second-order Runge-Kutta method, the lower-order Runge-Kutta methods of orders one and two, and the exponential integration method. The algorithms are applied to viscoplastic models put forth by Freed and Verrilli and Bodner and Partom for thermal/mechanical loadings (including tensile, relaxation, and cyclic loadings). The large number of computations performed showed that, for comparable accuracy, the efficiency of an integration algorithm depends significantly on the type of application (loading). However, in general, for the aforementioned loadings and viscoplastic models, the exponential integration algorithm with the proposed self-adaptive time integration strategy worked as efficiently and accurately as, or better than, the other integration algorithms. Using this strategy for integrating viscoplastic models may lead to considerable savings in computer time (better efficiency) without adversely affecting the accuracy of the results. This conclusion should encourage the utilization of viscoplastic models in the stress analysis and design of structural components.
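    The paper's self-adaptive strategies are not reproduced here; the following generic sketch shows one common way an explicit integrator can adapt its time step, by comparing one full step with two half steps and resizing the step to meet a tolerance. The tolerance, growth limits, and test problem are illustrative choices.

```python
import numpy as np

def rk2_step(f, t, y, h):
    """Heun's explicit second-order step."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + 0.5 * h * (k1 + k2)

def adaptive_rk2(f, t0, y0, t_end, h0, tol=1e-6):
    """Step-doubling error control: a generic sketch, not the cited strategy."""
    t, y, h = t0, np.asarray(y0, dtype=float), h0
    while t < t_end:
        h = min(h, t_end - t)
        full = rk2_step(f, t, y, h)
        half = rk2_step(f, t + 0.5 * h, rk2_step(f, t, y, 0.5 * h), 0.5 * h)
        err = np.max(np.abs(full - half))          # local error estimate
        if err <= tol:                             # accept, possibly enlarge the step
            t, y = t + h, half
            h *= min(2.0, 0.9 * (tol / max(err, 1e-16)) ** (1.0 / 3.0))
        else:                                      # reject, retry with a smaller step
            h *= max(0.1, 0.9 * (tol / err) ** (1.0 / 3.0))
    return y

# Example: y' = -50 y, integrated to t = 1; the controller shrinks h automatically.
print(adaptive_rk2(lambda t, y: -50.0 * y, 0.0, [1.0], 1.0, h0=0.1))
```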

  6. An accurate and efficient acoustic eigensolver based on a fast multipole BEM and a contour integral method

    NASA Astrophysics Data System (ADS)

    Zheng, Chang-Jun; Gao, Hai-Feng; Du, Lei; Chen, Hai-Bo; Zhang, Chuanzeng

    2016-01-01

    An accurate numerical solver is developed in this paper for eigenproblems governed by the Helmholtz equation and formulated through the boundary element method. A contour integral method is used to convert the nonlinear eigenproblem into an ordinary eigenproblem, so that eigenvalues can be extracted accurately by solving a set of standard boundary element systems of equations. In order to accelerate the solution procedure, the parameters affecting the accuracy and efficiency of the method are studied and two contour paths are compared. Moreover, a wideband fast multipole method is implemented with a block IDR (s) solver to reduce the overall solution cost of the boundary element systems of equations with multiple right-hand sides. The Burton-Miller formulation is employed to identify the fictitious eigenfrequencies of the interior acoustic problems with multiply connected domains. The actual effect of the Burton-Miller formulation on tackling the fictitious eigenfrequency problem is investigated and the optimal choice of the coupling parameter as α = i / k is confirmed through exterior sphere examples. Furthermore, the numerical eigenvalues obtained by the developed method are compared with the results obtained by the finite element method to show the accuracy and efficiency of the developed method.

  7. Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo methods.

  8. Simple and Efficient Numerical Evaluation of Near-Hypersingular Integrals

    NASA Technical Reports Server (NTRS)

    Fink, Patricia W.; Wilton, D. R.; Khayat, Michael A.

    2007-01-01

    Simple and efficient numerical procedures for evaluating the gradient of Newton-type potentials are presented. Convergences of both normal and tangential components of the gradient are examined. The convergence of the vector potential is also examined, and it is shown that the scheme for handling near-hypersingular integrals also is effective for the nearly singular potential terms.

  9. Monograph - The Numerical Integration of Ordinary Differential Equations.

    ERIC Educational Resources Information Center

    Hull, T. E.

    The materials presented in this monograph are intended to be included in a course on ordinary differential equations at the upper division level in a college mathematics program. These materials provide an introduction to the numerical integration of ordinary differential equations, and they can be used to supplement a regular text on this…

  10. Integrated product definition representation for agile numerical control applications

    SciTech Connect

    Simons, W.R. Jr.; Brooks, S.L.; Kirk, W.J. III; Brown, C.W.

    1994-11-01

    Realization of agile manufacturing capabilities for a virtual enterprise requires the integration of technology, management, and work force into a coordinated, interdependent system. This paper is focused on technology enabling tools for agile manufacturing within a virtual enterprise specifically relating to Numerical Control (N/C) manufacturing activities and product definition requirements for these activities.

  11. The use of experimental bending tests to more accurate numerical description of TBC damage process

    NASA Astrophysics Data System (ADS)

    Sadowski, T.; Golewski, P.

    2016-04-01

    Thermal barrier coatings (TBCs) have been extensively used in aircraft engines to protect critical engine parts such as blades and combustion chambers, which are exposed to high temperatures and a corrosive environment. The blades of turbine engines are additionally exposed to high mechanical loads, created by the high rotational speed of the rotor (30 000 rev/min) and causing tensile and bending stresses. Therefore, experimental testing of coated samples is necessary in order to determine the strength properties of TBCs. Beam samples with dimensions 50×10×2 mm were used in these studies. The TBC system consisted of a 150 μm thick bond coat (NiCoCrAlY) and a 300 μm thick top coat (YSZ) made by the APS (air plasma spray) process. Samples were tested in three-point bending with various loads. After the bending tests, the samples were subjected to microscopic observation to determine the number of cracks and their depth. The above-mentioned results were used to build a numerical model and calibrate material data in the Abaqus program. A brittle cracking damage model was applied to the TBC layer, which allows elements to be removed once a failure criterion is reached. Surface-based cohesive behavior was used to model the delamination which may occur at the boundary between the bond coat and the top coat.

  12. Fast and accurate numerical method for predicting gas chromatography retention time.

    PubMed

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-08-07

    Predictive modeling of gas chromatography compound retention depends on the retention factor (ki) and on the flow of the mobile phase. Thus, different approaches for determining an analyte's ki in column chromatography have been developed. The main one is based on the thermodynamic properties of the component and on the characteristics of the stationary phase. These models can be used to estimate the parameters and to optimize the temperature programming, in gas chromatography, for the separation of compounds. Different authors have proposed the use of numerical methods for solving these models, but those methods demand greater computational time. Hence, a new method for solving the predictive modeling of analyte retention time is presented. This algorithm is an alternative to traditional methods because it recasts the computation as root-finding problems within defined intervals. The proposed approach allows the retention time tr to be calculated with an accuracy determined by the user and with significant reductions in computational time; it can also be used to evaluate the performance of other prediction methods.
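    A hedged sketch of the root-finding reformulation described above: for a temperature-programmed separation, the retention time can be characterised as the value t_r at which an elution integral reaches one, which a bracketing root finder solves directly. The retention-factor model k_of_T, the temperature program T_of_t, and the hold-up time below are placeholder assumptions, not the paper's calibrated model.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Hypothetical retention-factor model and temperature program (illustration only).
def k_of_T(T):
    return np.exp(2000.0 / T - 5.0)     # placeholder thermodynamic form

def T_of_t(t):
    return 320.0 + 5.0 * t              # linear temperature ramp, K

t_M = 1.0                               # hold-up time, min (assumed constant)

def residual(tr):
    """Elution condition: the integral equals 1 exactly at t = t_r."""
    val, _ = quad(lambda t: 1.0 / (t_M * (1.0 + k_of_T(T_of_t(t)))), 0.0, tr)
    return val - 1.0

t_r = brentq(residual, 1e-6, 200.0)     # root bracketed on a defined interval
print(t_r)
```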

  13. Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration

    SciTech Connect

    Masalma, Yahya; Jiao, Yu

    2010-10-01

    We implemented a scalable parallel quasi-Monte Carlo numerical high-dimensional integration for tera-scale data points. The implemented algorithm uses Sobol's quasi-random sequences to generate samples. Sobol's sequence was used to avoid clustering effects in the generated samples and to produce low-discrepancy samples which cover the entire integration domain. The performance of the algorithm was tested, and the obtained results demonstrate the scalability and accuracy of the implemented algorithms. The implemented algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using a hybrid MPI and OpenMP programming model to improve the performance of the algorithms; if the mixed model is used, attention should be paid to scalability and accuracy.
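    A small serial sketch of the quasi-Monte Carlo idea (the cited implementation distributes the work with MPI/OpenMP over tera-scale sample sets): Sobol points fill the unit hypercube with low discrepancy, and the integral is estimated by the sample mean of the integrand. The dimension and sample size below are illustrative.

```python
import numpy as np
from scipy.stats import qmc

def qmc_integrate(f, dim, n_pow2, seed=0):
    """Quasi-Monte Carlo estimate of the integral of f over the unit hypercube.

    Serial sketch only; Sobol points are low-discrepancy, so they cover the
    domain more evenly than pseudo-random samples and reduce clustering."""
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    pts = sampler.random_base2(m=n_pow2)      # 2**n_pow2 points
    return np.mean(f(pts))

# Test: the integral of prod_i x_i over [0,1]^10 equals 2**-10.
est = qmc_integrate(lambda x: np.prod(x, axis=1), dim=10, n_pow2=14)
print(est, 2.0 ** -10)
```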

  14. Integration of numerical analysis tools for automated numerical optimization of a transportation package design

    SciTech Connect

    Witkowski, W.R.; Eldred, M.S.; Harding, D.C.

    1994-09-01

    The use of state-of-the-art numerical analysis tools to determine the optimal design of a radioactive material (RAM) transportation container is investigated. The design of a RAM package's components involves a complex coupling of structural, thermal, and radioactive shielding analyses. The final design must adhere to very strict design constraints. The current technique used by cask designers is uncoupled and involves designing each component separately with respect to its driving constraint. With the use of numerical optimization schemes, the complex couplings can be considered directly, and the performance of the integrated package can be maximized with respect to the analysis conditions. This can lead to more efficient package designs. Thermal and structural accident conditions are analyzed in the shape optimization of a simplified cask design. In this paper, details of the integration of numerical analysis tools, development of a process model, nonsmoothness difficulties with the optimization of the cask, and preliminary results are discussed.

  15. Ensemble-type numerical uncertainty information from single model integrations

    SciTech Connect

    Rauser, Florian; Marotzke, Jochem; Korn, Peter

    2015-07-01

    We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size as those of a stochastic physics ensemble.

  16. Correcting numerical integration errors caused by small aliasing errors

    SciTech Connect

    Smallwood, D.O.

    1997-11-01

    Small sampling errors can have a large effect on numerically integrated waveforms. An example is the integration of acceleration to compute velocity and displacement waveforms. These large integration errors complicate checking the suitability of the acceleration waveform for reproduction on shakers. For waveforms typically used for shaker reproduction, the errors become significant when the frequency content of the waveform spans a large frequency range. It is shown that these errors are essentially independent of the numerical integration method used, and are caused by small aliasing errors from the frequency components near the Nyquist frequency. A method to repair the integrated waveforms is presented. The method involves using a model of the acceleration error, and fitting this model to the acceleration, velocity, and displacement waveforms to force the waveforms to fit the assumed initial and final values. The correction is then subtracted from the acceleration before integration. The method is effective where the errors are isolated to a small section of the time history. It is shown that the common method to repair these errors using a high pass filter is sometimes ineffective for this class of problem.
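    As a simplified illustration of how integrated waveforms can be repaired (not the aliasing-error model fitted in the report), the sketch below integrates acceleration twice and then subtracts least-squares polynomial trends from the velocity and displacement to suppress the spurious drift.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def integrate_with_drift_removal(accel, dt):
    """Integrate acceleration to velocity and displacement, then subtract
    least-squares polynomial trends to suppress the spurious drift.

    Simplified baseline-correction sketch; the cited report instead fits an
    explicit model of the aliasing-induced acceleration error and subtracts
    it from the acceleration before integration."""
    t = np.arange(accel.size) * dt
    vel = cumulative_trapezoid(accel, dx=dt, initial=0.0)
    disp = cumulative_trapezoid(vel, dx=dt, initial=0.0)
    vel -= np.polyval(np.polyfit(t, vel, 1), t)     # remove linear drift in velocity
    disp -= np.polyval(np.polyfit(t, disp, 2), t)   # remove quadratic drift in displacement
    return vel, disp
```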

  17. Novel electromagnetic surface integral equations for highly accurate computations of dielectric bodies with arbitrarily low contrasts

    SciTech Connect

    Erguel, Ozguer; Guerel, Levent

    2008-12-01

    We present a novel stabilization procedure for accurate surface formulations of electromagnetic scattering problems involving three-dimensional dielectric objects with arbitrarily low contrasts. Conventional surface integral equations provide inaccurate results for the scattered fields when the contrast of the object is low, i.e., when the electromagnetic material parameters of the scatterer and the host medium are close to each other. We propose a stabilization procedure involving the extraction of nonradiating currents and rearrangement of the right-hand side of the equations using fictitious incident fields. Then, only the radiating currents are solved to calculate the scattered fields accurately. This technique can easily be applied to the existing implementations of conventional formulations, it requires negligible extra computational cost, and it is also appropriate for the solution of large problems with the multilevel fast multipole algorithm. We show that the stabilization leads to robust formulations that are valid even for the solutions of extremely low-contrast objects.

  18. Stability of numerical integration techniques for transient rotor dynamics

    NASA Technical Reports Server (NTRS)

    Kascak, A. F.

    1977-01-01

    A finite element model of a rotor bearing system was analyzed to determine the stability limits of the forward, backward, and centered Euler; Runge-Kutta; Milne; and Adams numerical integration techniques. The analysis concludes that the highest frequency mode determines the maximum time step for a stable solution. Thus, the number of mass elements should be minimized. Increasing the damping can sometimes cause numerical instability. For a uniform shaft, with 10 mass elements, operating at approximately the first critical speed, the maximum time step for the Runge-Kutta, Milne, and Adams methods is that which corresponds to approximately 1 degree of shaft movement. This is independent of rotor dimensions.

  19. Microwave Breast Imaging System Prototype with Integrated Numerical Characterization

    PubMed Central

    Haynes, Mark; Stang, John; Moghaddam, Mahta

    2012-01-01

    The increasing number of experimental microwave breast imaging systems and the need to properly model them have motivated our development of an integrated numerical characterization technique. We use Ansoft HFSS and a formalism we developed previously to numerically characterize an S-parameter- based breast imaging system and link it to an inverse scattering algorithm. We show successful reconstructions of simple test objects using synthetic and experimental data. We demonstrate the sensitivity of image reconstructions to knowledge of the background dielectric properties and show the limits of the current model. PMID:22481906

  20. Numerical solution of nonlinear Hammerstein fuzzy functional integral equations

    NASA Astrophysics Data System (ADS)

    Enkov, Svetoslav; Georgieva, Atanaska; Nikolla, Renato

    2016-12-01

    In this work we investigate the nonlinear Hammerstein fuzzy functional integral equation. Our aim is to provide an efficient iterative method of successive approximations, based on an optimal quadrature formula for classes of fuzzy number-valued functions of Lipschitz type, to approximate the solution. We prove the convergence of the method by Banach's fixed point theorem and investigate the numerical stability of the presented method with respect to the choice of the first iteration. Finally, illustrative numerical experiments demonstrate the accuracy and the convergence of the proposed method.
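    The fuzzy-number arithmetic is omitted here; the following crisp (real-valued) sketch shows only the successive-approximation idea for a Hammerstein equation x(t) = f(t) + ∫_0^1 K(t,s) g(s, x(s)) ds with a simple trapezoidal quadrature. The kernel and nonlinearity are illustrative choices picked so the iteration is a contraction.

```python
import numpy as np

def solve_hammerstein(f, K, g, n_nodes=101, max_iter=50, tol=1e-10):
    """Successive approximations for x(t) = f(t) + int_0^1 K(t,s) g(s, x(s)) ds.

    Crisp (real-valued) sketch with a composite trapezoidal rule; the cited
    work treats fuzzy number-valued functions with an optimal quadrature."""
    s = np.linspace(0.0, 1.0, n_nodes)
    w = np.full(n_nodes, 1.0 / (n_nodes - 1))   # trapezoidal weights
    w[0] *= 0.5
    w[-1] *= 0.5
    x = f(s)                                    # first iterate
    for _ in range(max_iter):
        x_new = f(s) + K(s[:, None], s[None, :]) @ (w * g(s, x))
        if np.max(np.abs(x_new - x)) < tol:     # Banach fixed-point convergence
            break
        x = x_new
    return s, x_new

# Illustrative contractive kernel and nonlinearity.
s, x = solve_hammerstein(lambda t: np.cos(t),
                         lambda t, s: 0.5 * t * s,
                         lambda s, x: np.sin(x))
print(x[:5])
```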

  1. Path Integrals and Exotic Options:. Methods and Numerical Results

    NASA Astrophysics Data System (ADS)

    Bormetti, G.; Montagna, G.; Moreni, N.; Nicrosini, O.

    2005-09-01

    In the framework of Black-Scholes-Merton model of financial derivatives, a path integral approach to option pricing is presented. A general formula to price path dependent options on multidimensional and correlated underlying assets is obtained and implemented by means of various flexible and efficient algorithms. As an example, we detail the case of Asian call options. The numerical results are compared with those obtained with other procedures used in quantitative finance and found to be in good agreement. In particular, when pricing at the money (ATM) and out of the money (OTM) options, path integral exhibits competitive performances.
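    For context, a plain Monte Carlo benchmark of the kind path-integral results for Asian options are usually compared against is sketched below; it is not the path-integral algorithm of the paper, and the market parameters are arbitrary illustrative values.

```python
import numpy as np

def asian_call_mc(S0, K, r, sigma, T, n_steps, n_paths, seed=0):
    """Monte Carlo price of an arithmetic-average Asian call under geometric
    Brownian motion (plain benchmark, not the path-integral algorithm)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    S = S0 * np.exp(log_paths)                      # simulated price paths
    payoff = np.maximum(S.mean(axis=1) - K, 0.0)    # arithmetic-average payoff
    return np.exp(-r * T) * payoff.mean()

print(asian_call_mc(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                    n_steps=252, n_paths=100_000))
```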

  2. New Numerical Integrators Based on Solvability and Splitting

    DTIC Science & Technology

    2007-11-02

    Presentation from the Group Methods and Control Theory Workshop, held 28 June - 1 July 2004, on new numerical integrators based on solvability and splitting. Application areas mentioned include mechanics, NMR spectroscopy, infrared divergences in QED, and control theory, and the Magnus expansion is among the techniques discussed.

  3. Accurate and automatic extrinsic calibration method for blade measurement system integrated by different optical sensors

    NASA Astrophysics Data System (ADS)

    He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu

    2014-11-01

    Fast and precise 3D inspection systems are in great demand in modern manufacturing processes. At present, the available sensors have their own pros and cons, and hardly any single sensor can handle a complex inspection task in an accurate and effective way. The prevailing solution is integrating multiple sensors and taking advantage of their strengths. For obtaining a holistic 3D profile, the data from different sensors should be registered into a coherent coordinate system. However, for complex-shaped objects with thin-wall features, such as blades, the ICP registration method becomes unstable. Therefore, it is very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic parameter calibration method for a blade measurement system integrated from different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated into a multi-axis motion platform, and the sensors can be optimally moved to any desired position on the object's surface. In order to simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation is used to roughly align the artifact datasets obtained by the FPS and CHS without any manual operation or data pre-processing, and then the generalized Gauss-Markov model is used to estimate the optimal transformation parameters. The experiments show the measurement result of a blade, where several sampled patches are merged into one point cloud, verifying the performance of the proposed method.

  4. A robust and accurate numerical method for transcritical turbulent flows at supercritical pressure with an arbitrary equation of state

    SciTech Connect

    Kawai, Soshi; Terashima, Hiroshi; Negishi, Hideyo

    2015-11-01

    This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method based on REFPROP database for an accurate estimation of non-linear behaviors of thermodynamic and fluid transport properties at the transcritical conditions. Based on the look-up table method we propose a numerical method that satisfies high-order spatial accuracy, spurious-oscillation-free property, and capability of capturing the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity to the continuity and momentum equations in a physically-consistent manner in order to capture the steep transcritical thermodynamic variations robustly while maintaining spurious-oscillation-free property in the velocity field. The pressure evolution equation is derived from the full compressible Navier–Stokes equations and solved instead of solving the total energy equation to achieve the spurious pressure oscillation free property with an arbitrary equation of state including the present look-up table method. Flow problems with and without physical diffusion are employed for the numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.

  5. An accurate numerical solution to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in rivers

    NASA Astrophysics Data System (ADS)

    Stecca, Guglielmo; Siviglia, Annunziato; Blom, Astrid

    2016-07-01

    We present an accurate numerical approximation to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in one space dimension. Our solution procedure originates from the fully-unsteady matrix-vector formulation developed in [54]. The principal part of the problem is solved by an explicit Finite Volume upwind method of the path-conservative type, by which all the variables are updated simultaneously in a coupled fashion. The solution to the principal part is embedded into a splitting procedure for the treatment of frictional source terms. The numerical scheme is extended to second-order accuracy and includes a bookkeeping procedure for handling the evolution of size stratification in the substrate. We develop a concept of balancedness for the vertical mass flux between the substrate and active layer under bed degradation, which prevents the occurrence of non-physical oscillations in the grainsize distribution of the substrate. We suitably modify the numerical scheme to respect this principle. We finally verify the accuracy in our solution to the equations, and its ability to reproduce one-dimensional morphodynamics due to streamwise and vertical sorting, using three test cases. In detail, (i) we empirically assess the balancedness of vertical mass fluxes under degradation; (ii) we study the convergence to the analytical linearised solution for the propagation of infinitesimal-amplitude waves [54], which is here employed for the first time to assess a mixed-sediment model; (iii) we reproduce Ribberink's E8-E9 flume experiment [46].

  6. Simple and Efficient Numerical Evaluation of Near-Hypersingular Integrals

    NASA Technical Reports Server (NTRS)

    Fink, Patrick W.; Wilton, Donald R.; Khayat, Michael A.

    2007-01-01

    Recently, significant progress has been made in the handling of singular and nearly-singular potential integrals that commonly arise in the Boundary Element Method (BEM). To facilitate object-oriented programming and handling of higher order basis functions, cancellation techniques are favored over techniques involving singularity subtraction. However, gradients of the Newton-type potentials, which produce hypersingular kernels, are also frequently required in BEM formulations. As is the case with the potentials, treatment of the near-hypersingular integrals has proven more challenging than treating the limiting case in which the observation point approaches the surface. Historically, numerical evaluation of these near-hypersingularities has often involved a two-step procedure: a singularity subtraction to reduce the order of the singularity, followed by a boundary contour integral evaluation of the extracted part. Since this evaluation necessarily links the basis function, the Green's function, and the integration domain (element shape), the approach fits poorly with object-oriented programming concepts. Thus, there is a need for cancellation-type techniques for efficient numerical evaluation of the gradient of the potential. Progress in the development of efficient cancellation-type procedures for the gradient potentials was recently presented. To the extent possible, a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. However, since the gradient kernel involves singularities of different orders, we also require that the transformation leaves remaining terms that are analytic. The terms "normal" and "tangential" are used herein with reference to the source element. Also, since computational formulations often involve the numerical evaluation of both potentials and their gradients, it is highly desirable that a single integration procedure efficiently handles both.

  7. Fast and accurate computation of system matrix for area integral model-based algebraic reconstruction technique

    NASA Astrophysics Data System (ADS)

    Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua

    2014-11-01

    Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate for better reconstruction quality than the line integral model (LIM). However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection area into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Overall, experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computation of the system matrix. The reconstruction speed of our AIM-based ART is also faster than the LIM-based ART that uses the Siddon algorithm and DDM-based ART, for one iteration. The fast reconstruction speed of our method was accomplished without compromising the image quality.

  8. Wang-Landau integration --- The application of Wang-Landau sampling in numerical integration

    NASA Astrophysics Data System (ADS)

    Li, Ying Wai; Wuest, Thomas; Landau, David P.; Lin, Hai-Qing

    2007-03-01

    Wang-Landau sampling was first introduced to simulate the density of states in energy space for various physical systems. This technique can be extended to numerical integrations due to certain similarities in nature of these two problems. It can be further applied to study quantum many-body systems. We report the feasibility of this application by discussing the correspondence between Wang-Landau integration and Wang-Landau sampling for Ising model. Numerical results for 1D and 2D integrations are shown. In particular, the utilization of this algorithm in the periodic lattice Anderson model is discussed as an illustrative example.

  9. Numerical integration of massive two-loop Mellin-Barnes integrals in Minkowskian regions

    NASA Astrophysics Data System (ADS)

    Dubovyk, I.; Gluza, J.; Riemann, T.; Usovitsch, J.

    Mellin-Barnes (MB) techniques applied to integrals emerging in perturbative particle physics calculations are summarized. New versions of the AMBRE packages, which construct planar and nonplanar MB representations, are briefly discussed. The numerical package MBnumerics.m is presented for the first time, which is able to calculate multidimensional MB integrals with high precision in Minkowskian regions. Examples are given for massive vertex integrals which include threshold effects and several scale parameters.

  10. Examination of Numerical Integration Accuracy and Modeling for GRACE-FO and GRACE-II

    NASA Astrophysics Data System (ADS)

    McCullough, C.; Bettadpur, S.

    2012-12-01

    As technological advances throughout the field of satellite geodesy improve the accuracy of satellite measurements, numerical methods and algorithms must be able to keep pace. Currently, the Gravity Recovery and Climate Experiment's (GRACE) dual one-way microwave ranging system can determine changes in inter-satellite range to a precision of a few microns; however, with the advent of laser measurement systems nanometer precision ranging is a realistic possibility. With this increase in measurement accuracy, a reevaluation of the accuracy inherent in the linear multi-step numerical integration methods is necessary. Two areas where this can be a primary concern are the ability of the numerical integration methods to accurately predict the satellite's state in the presence of numerous small accelerations due to operation of the spacecraft attitude control thrusters, and due to small, point-mass anomalies on the surface of the Earth. This study attempts to quantify and minimize these numerical errors in an effort to improve the accuracy of modeling and propagation of these perturbations; helping to provide further insight into the behavior and evolution of the Earth's gravity field from the more capable gravity missions in the future.

  11. Development of highly accurate approximate scheme for computing the charge transfer integral

    NASA Astrophysics Data System (ADS)

    Pershin, Anton; Szalay, Péter G.

    2015-08-01

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing transfer integral along the asymmetric alteration coordinate. Since the "exact" scheme was found computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the "exact" calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.

  12. Development of highly accurate approximate scheme for computing the charge transfer integral.

    PubMed

    Pershin, Anton; Szalay, Péter G

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing transfer integral along the asymmetric alteration coordinate. Since the "exact" scheme was found computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the "exact" calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.

  13. Accurate integral equation theory for the central force model of liquid water and ionic solutions

    NASA Astrophysics Data System (ADS)

    Ichiye, Toshiko; Haymet, A. D. J.

    1988-10-01

    The atom-atom pair correlation functions and thermodynamics of the central force model of water, introduced by Lemberg, Stillinger, and Rahman, have been calculated accurately by an integral equation method which incorporates two new developments. First, a rapid new scheme has been used to solve the Ornstein-Zernike equation. This scheme combines the renormalization methods of Allnatt, and Rossky and Friedman with an extension of the trigonometric basis-set solution of Labik and co-workers. Second, by adding approximate ``bridge'' functions to the hypernetted-chain (HNC) integral equation, we have obtained predictions for liquid water in which the hydrogen bond length and number are in good agreement with ``exact'' computer simulations of the same model force laws. In addition, for dilute ionic solutions, the ion-oxygen and ion-hydrogen coordination numbers display both the physically correct stoichiometry and good agreement with earlier simulations. These results represent a measurable improvement over both a previous HNC solution of the central force model and the ex-RISM integral equation solutions for the TIPS and other rigid molecule models of water.

  14. Development of highly accurate approximate scheme for computing the charge transfer integral

    SciTech Connect

    Pershin, Anton; Szalay, Péter G.

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing transfer integral along the asymmetric alteration coordinate. Since the “exact” scheme was found computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the “exact” calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.

  15. Multivariate numerical integration via fluctuationlessness theorem: Case study

    NASA Astrophysics Data System (ADS)

    Baykara, N. A.; Gürvit, Ercan

    2017-01-01

    In this work we state the Fluctuationlessness theorem, recently conjectured and proven by M. Demiralp, and apply it to the numerical integration of univariate functions by restructuring the Taylor expansion with an explicit remainder term. Following this step, an orthonormal basis set is formed and the necessary formulae for calculating the coefficients of the three-term recursion formula are constructed. Then, for multivariate numerical integration, instead of dealing with a single formula for multiple remainder terms, a new approach already described for bivariate functions is taken into consideration. At every step of a multivariate integration one variable is considered and the others are held constant, which removes much of the complexity of the calculations. The trivariate case is taken into account and its generalization is explained step by step. At the final stage, implementations are carried out for some trivariate functions and the results are tabulated together with the implementation times.
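    The one-variable-at-a-time strategy described above can be illustrated with ordinary nested Gauss-Legendre quadrature over the unit cube; this generic sketch stands in for, and is not, the Fluctuationlessness-based univariate rule of the paper.

```python
import numpy as np

def gauss_legendre_01(n):
    """Gauss-Legendre nodes and weights mapped from [-1, 1] to [0, 1]."""
    x, w = np.polynomial.legendre.leggauss(n)
    return 0.5 * (x + 1.0), 0.5 * w

def nested_trivariate(f, n=16):
    """Integrate f(x, y, z) over the unit cube one variable at a time.

    Generic nested-quadrature sketch; the cited work instead builds its
    univariate rule from the Fluctuationlessness theorem."""
    x, w = gauss_legendre_01(n)
    total = 0.0
    for xi, wi in zip(x, w):             # outer variable held fixed ...
        for yj, wj in zip(x, w):         # ... while the inner ones are integrated
            for zk, wk in zip(x, w):
                total += wi * wj * wk * f(xi, yj, zk)
    return total

# Test: the integral of x*y*z over the unit cube is 1/8.
print(nested_trivariate(lambda x, y, z: x * y * z))
```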

  16. Accurate integration over atomic regions bounded by zero-flux surfaces.

    PubMed

    Polestshuk, Pavel M

    2013-01-30

    An approach for integration over a region bounded by a zero-flux surface is described. This approach, based on a surface triangulation technique, is efficiently realized in the newly developed program TWOE. The elaborated method is tested on several atomic properties, including the source function. TWOE results are compared with those produced by well-known existing programs. Absolute errors in the computed atomic properties are shown to range usually from 10^(-6) to 10^(-5) au. The demonstrative examples show that the present realization converges well with increasing angular grid size and allows highly accurate data to be obtained even in the most difficult cases. It is believed that the developed program can serve as a basis for implementing atomic partitioning of any desired molecular property with high accuracy.

  17. Singularity Preserving Numerical Methods for Boundary Integral Equations

    NASA Technical Reports Server (NTRS)

    Kaneko, Hideaki (Principal Investigator)

    1996-01-01

    In the past twelve months (May 8, 1995 - May 8, 1996), under the cooperative agreement with the Division of Multidisciplinary Optimization at NASA Langley, we have accomplished the following five projects: a note on the finite element method with singular basis functions; numerical quadrature for weakly singular integrals; superconvergence of the degenerate kernel method; superconvergence of the iterated collocation method for Hammerstein equations; and a singularity preserving Galerkin method for Hammerstein equations with logarithmic kernel. This final report consists of five papers describing these projects. Each project is preceded by a brief abstract.

  18. Gauge Drift in Numerical Integrations of the Lagrange Planetary Equations

    NASA Astrophysics Data System (ADS)

    Murison, M. A.; Efroimsky, M.

    2003-08-01

    Efroimsky (2002) and Newman & Efroimsky (2003) recognized that the Lagrange and Delaunay planetary equations of celestial mechanics may be generalized to allow transformations analogous to the familiar gauge transformations in electrodynamics. As usually presented, the Lagrange equations, which are derived by the method of variation of parameters (invented by Euler and Lagrange for this very purpose), assume the Lagrange constraint, whereby a certain combination of parameter time derivatives is arbitrarily equated to zero. This particular constraint ensures an osculating orbit that is unique. The transformation of the description, as given by the (time-varying) osculating elements, into that given by the Cartesian coordinates and velocities is invertible. Relaxing the constraint enables one to substitute instead an arbitrary gauge function. This breaks the uniqueness and invertibility between the orbit instantaneously described by the orbital elements and the position and velocity components (i.e., many different orbits, precessing at different rates, can at a given instant share the same physical position and physical velocity through space). However, the orbit described by the (varying) orbital elements obeying a different gauge is no longer osculating. In numerical calculations that integrate the traditional Lagrange and Delaunay equations, even starting off in a certain (say, Lagrange's) gauge, some fraction of the numerical errors will, nevertheless, diffuse into violation of the chosen constraint. This results in an unintended ``gauge drift''. Geometrically, numerical errors cause the trajectory in phase space to leave the gauge-defined submanifold to which the motion was constrained, so that it is then moving on a different submanifold. The method of Lagrange multipliers can be utilized to return the motion to the original submanifold (e.g., Nacozy 1971, Murison 1989). Alternatively, the accumulated gauge drift may be compensated by a gauge transformation

  19. Multistep integration formulas for the numerical integration of the satellite problem

    NASA Technical Reports Server (NTRS)

    Lundberg, J. B.; Tapley, B. D.

    1981-01-01

    The use of two Class 2 (fixed-mesh, fixed-order, multistep) integration packages of the PECE type for the numerical integration of the second-order, nonlinear, ordinary differential equations of the satellite orbit problem is examined. These two methods are referred to as the general and the second-sum formulations. The derivation of the basic equations which characterize each formulation and the role of the basic equations in the PECE algorithm are discussed. Possible starting procedures are examined which may be used to supply the initial set of values required by the fixed-mesh multistep integrators. The results of the general and second-sum integrators are compared to the results of various fixed-step and variable-step integrators.
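    A hedged sketch of a generic PECE cycle follows: a two-step Adams-Bashforth predictor with an Adams-Moulton corrector on a fixed mesh, applied to the planar Kepler problem as a stand-in for the satellite equations. It illustrates only the predict-evaluate-correct-evaluate pattern, not the report's general or second-sum formulations.

```python
import numpy as np

# PECE (predict-evaluate-correct-evaluate) multistep step: 2-step
# Adams-Bashforth predictor, trapezoidal Adams-Moulton corrector, fixed mesh.
# The planar Kepler problem (mu = 1) stands in for the satellite equations.
def kepler_rhs(y):
    x, vx, yy, vy = y
    r3 = (x * x + yy * yy) ** 1.5
    return np.array([vx, -x / r3, vy, -yy / r3])

def pece(y0, h, nsteps):
    y = [y0, y0 + h * kepler_rhs(y0)]          # crude one-step starter (Euler)
    f = [kepler_rhs(y[0]), kepler_rhs(y[1])]
    for n in range(1, nsteps):
        yp = y[n] + h * (1.5 * f[n] - 0.5 * f[n - 1])      # P: AB2 predictor
        fp = kepler_rhs(yp)                                 # E: evaluate
        yc = y[n] + h * (0.5 * fp + 0.5 * f[n])             # C: AM corrector
        y.append(yc)
        f.append(kepler_rhs(yc))                            # E: final evaluate
    return np.array(y)

# Circular orbit of radius 1: state (x, vx, y, vy) = (1, 0, 0, 1).
traj = pece(np.array([1.0, 0.0, 0.0, 1.0]), h=0.01, nsteps=2000)
print("radius after ~3 orbits:", np.hypot(traj[-1, 0], traj[-1, 2]))
```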

  20. Comparison of four stable numerical methods for Abel's integral equation

    NASA Technical Reports Server (NTRS)

    Murio, Diego A.; Mejia, Carlos E.

    1991-01-01

    The 3-D image reconstruction from cone-beam projections in computerized tomography leads naturally, in the case of radial symmetry, to the study of Abel-type integral equations. If the experimental information is obtained from measured data on a discrete set of points, special methods are needed in order to restore continuity with respect to the data. A new combined Regularized-Adjoint-Conjugate Gradient algorithm, together with two different implementations of the Mollification Method (one based on a data filtering technique and the other on the mollification of the kernel function) and a regularization by truncation method (initially proposed for 2-D ray sample schemes and more recently extended to 3-D cone-beam image reconstruction), are extensively tested and compared for accuracy and numerical stability as functions of the level of noise in the data.

  1. Integrated numerical prediction of atomization process of liquid hydrogen jet

    NASA Astrophysics Data System (ADS)

    Ishimoto, Jun; Ohira, Katsuhide; Okabayashi, Kazuki; Chitose, Keiko

    2008-05-01

    The 3-D structure of the liquid atomization behavior of a liquid hydrogen (LH2) jet flow through a pinhole nozzle is numerically investigated and visualized by a new type of integrated simulation technique. The present computational fluid dynamics (CFD) analysis focuses on the thermodynamic effect on the consecutive breakup of a cryogenic liquid column, the formation of a liquid film, and the generation of droplets in the outlet section of the pinhole nozzle. Utilizing the governing equations for a high-speed turbulent cryogenic jet flow through a pinhole nozzle based on the thermal nonequilibrium LES-VOF model in conjunction with the CSF model, an integrated parallel computation is performed to clarify the detailed atomization process of a high-speed LH2 jet flow through a pinhole nozzle and to acquire data that are difficult to obtain experimentally, such as atomization length, liquid core shape, droplet-size distribution, spray angle, droplet velocity profiles, and the thermal field surrounding the atomizing jet flow. According to the present computation, the cryogenic atomization rate and the LH2 droplet-gas two-phase flow characteristics are found to be controlled by the turbulence perturbation upstream of the pinhole nozzle, hydrodynamic instabilities at the gas-liquid interface, and shear stress between the liquid core and the periphery of the LH2 jet. Furthermore, calculation of the effect of cryogenic atomization on the jet thermal field shows that such atomization extensively enhances the thermal diffusion surrounding the LH2 jet flow.

  2. A fast numerical solution of scattering by a cylinder: Spectral method for the boundary integral equations

    NASA Technical Reports Server (NTRS)

    Hu, Fang Q.

    1994-01-01

    It is known that the exact analytic solutions of wave scattering by a circular cylinder, when they exist, are not in a closed form but in infinite series which converges slowly for high frequency waves. In this paper, we present a fast numerical solution for the scattering problem in which the boundary integral equations, reformulated from the Helmholtz equation, are solved using a Fourier spectral method. It is shown that the special geometry considered here allows the implementation of the spectral method to be simple and very efficient. The present method differs from previous approaches in that the singularities of the integral kernels are removed and dealt with accurately. The proposed method preserves the spectral accuracy and is shown to have an exponential rate of convergence. Aspects of efficient implementation using FFT are discussed. Moreover, the boundary integral equations of combined single- and double-layer representation are used in the present paper. This ensures the uniqueness of the numerical solution for the scattering problem at all frequencies. Although a strongly singular kernel is encountered for the Neumann boundary conditions, we show that the hypersingularity can be handled easily in the spectral method. Numerical examples that demonstrate the validity of the method are also presented.

  3. New Techniques for Simulation of Ion Implantation by Numerical Integration of Boltzmann Transport Equation

    NASA Astrophysics Data System (ADS)

    Wang, Shyh-Wei; Guo, Shuang-Fa

    1998-01-01

    New techniques for more accurate and efficient simulation of ion implantation by a stepwise numerical integration of the Boltzmann transport equation (BTE) have been developed in this work. Instead of using a uniform energy grid, a non-uniform grid is employed to construct the momentum distribution matrix. A more accurate simulation result is obtained for heavy ions implanted into silicon. At the same time, rather than utilizing the conventional Lindhard, Nielsen and Schoitt (LNS) approximation, an exact evaluation of the integrals involving the nuclear differential scattering cross-section (dσ_n = 2πp dp) is proposed. The impact parameter p as a function of ion energy E and scattering angle φ is obtained by solving the magic formula iteratively, and an interpolation technique is devised for use during the simulation process. The simulation using the exact evaluation is about 3.5 times faster than that using the Littmark and Ziegler (LZ) spline-fitted cross-section function for phosphorus implantation into silicon.

  4. Path integrals for Fokker-Planck dynamics with singular diffusion: Accurate factorization for the time evolution operator

    NASA Astrophysics Data System (ADS)

    Drozdov, Alexander N.; Talkner, Peter

    1998-08-01

    Fokker-Planck processes with a singular diffusion matrix are quite frequently met in physics and chemistry. For a long time the resulting noninvertibility of the diffusion matrix has been regarded as a serious obstacle to treating these Fokker-Planck equations by various powerful numerical methods of quantum and statistical mechanics. In this paper, a path-integral method is presented that takes advantage of the singularity of the diffusion matrix and allows one to solve such problems in a simple and economic way. The basic idea is to split the Fokker-Planck equation into one of a linear system and an anharmonic correction and then to employ a symmetric decomposition of the short-time propagator, which is exact up to a high order in the time step. Precisely because of the singularity of the diffusion matrix, the factors of the resulting product formula consist of well-behaved propagators. In this way one obtains a highly accurate propagation scheme, which is simultaneously fast, stable, and computationally simple. Because it allows much larger time steps, it is more efficient than the standard propagation scheme based on the Trotter splitting formula. The proposed method is tested for Brownian motion in different types of potentials. For a harmonic potential we compare to the known analytic results. For a symmetric double well potential we determine the transition rates between the two wells for different friction strengths and compare them with the crossover theories of Mel'nikov and Meshkov and of Pollak, Grabert, and Hänggi. Using a properly defined energy loss of the deterministic particle dynamics, we obtain excellent agreement. The methodology is outlined for a large class of processes defined by generalized Langevin equations and processes driven by colored noise.

  5. Carbon Dioxide Dispersion in the Combustion Integrated Rack Simulated Numerically

    NASA Technical Reports Server (NTRS)

    Wu, Ming-Shin; Ruff, Gary A.

    2004-01-01

    When discharged into an International Space Station (ISS) payload rack, a carbon dioxide (CO2) portable fire extinguisher (PFE) must extinguish a fire by decreasing the oxygen in the rack by 50 percent within 60 sec. The length of time needed for this oxygen reduction throughout the rack and the length of time that the CO2 concentration remains high enough to prevent the fire from reigniting are important when determining the effectiveness of the response and postfire procedures. Furthermore, in the absence of gravity, the local flow velocity can make the difference between a fire that spreads rapidly and one that self-extinguishes after ignition. A numerical simulation of the discharge of CO2 from the PFE into the Combustion Integrated Rack (CIR) in microgravity was performed to obtain the local velocity and CO2 concentration. The complicated flow field around the PFE nozzle exits was modeled by sources of equivalent mass and momentum flux at a location downstream of the nozzle. The time for the concentration of CO2 to reach a level that would extinguish a fire anywhere in the rack was determined using the Fire Dynamics Simulator (FDS), a computational fluid dynamics code developed by the National Institute of Standards and Technology specifically to evaluate fire development and smoke transport. The simulation shows that CO2, as well as any smoke and combustion gases produced by a fire, would be discharged into the ISS cabin through the resource utility panel at the bottom of the rack. These simulations will be validated by comparing the results with velocity and CO2 concentration measurements obtained during the fire suppression system verification tests conducted on the CIR in March 2003. Once these numerical simulations are validated, portions of the ISS labs and living areas will be modeled to determine the local flow conditions before, during, and after a fire event. These simulations can yield specific information about how long it takes for smoke and

  6. Applying integrals of motion to the numerical solution of differential equations

    NASA Technical Reports Server (NTRS)

    Jezewski, D. J.

    1980-01-01

    A method is developed for using the integrals of systems of nonlinear, ordinary differential equations in a numerical integration process to control the local errors in these integrals and reduce the global errors of the solution. The method is general and can be applied to either scalar or vector integrals. A number of example problems, with accompanying numerical results, are used to verify the analysis and support the conjecture of global error reduction.

  7. Applying integrals of motion to the numerical solution of differential equations

    NASA Technical Reports Server (NTRS)

    Jezewski, D. J.

    1979-01-01

    A method is developed for using the integrals of systems of nonlinear, ordinary differential equations in a numerical integration process to control the local errors in these integrals and reduce the global errors of the solution. The method is general and can be applied to either scalar or vector integrals. A number of example problems, with accompanying numerical results, are used to verify the analysis and support the conjecture of global error reduction.
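    A minimal sketch of the underlying idea, under the assumption that a single known integral of motion (here the energy of a unit harmonic oscillator) is used to control error growth by projecting the state back onto its level set after every step; the rescaling projection and the explicit Euler base integrator are illustrative choices, not the reports' algorithm.

```python
import numpy as np

# Use a known integral of motion -- the energy E = (v**2 + x**2) / 2 of a unit
# harmonic oscillator -- to control error growth: after each explicit Euler
# step, project the state back onto the energy level set by rescaling.
def euler_step(x, v, h):
    return x + h * v, v - h * x

def project_to_energy(x, v, e0):
    s = np.sqrt(2.0 * e0 / (x * x + v * v))   # scale factor restoring E = e0
    return s * x, s * v

x, v, h, e0 = 1.0, 0.0, 0.01, 0.5
xp, vp = x, v                                  # projected copy
for _ in range(10000):
    x, v = euler_step(x, v, h)                 # uncorrected
    xp, vp = project_to_energy(*euler_step(xp, vp, h), e0)

print("energy drift, plain Euler:    ", 0.5 * (x * x + v * v) - e0)
print("energy drift, with projection:", 0.5 * (xp * xp + vp * vp) - e0)
```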

  8. Integrating Numerical Computation into the Modeling Instruction Curriculum

    ERIC Educational Resources Information Center

    Caballero, Marcos D.; Burk, John B.; Aiken, John M.; Thoms, Brian D.; Douglas, Scott S.; Scanlon, Erin M.; Schatz, Michael F.

    2014-01-01

    Numerical computation (the use of a computer to solve, simulate, or visualize a physical problem) has fundamentally changed the way scientific research is done. Systems that are too difficult to solve in closed form are probed using computation. Experiments that are impossible to perform in the laboratory are studied numerically. Consequently, in…

  9. Numerical Integration with GeoGebra in High School

    ERIC Educational Resources Information Center

    Herceg, Dorde; Herceg, Dragoslav

    2010-01-01

    The concept of definite integral is almost always introduced as the Riemann integral, which is defined in terms of the Riemann sum, and its geometric interpretation. This definition is hard to understand for high school students. With the aid of mathematical software for visualisation and computation of approximate integrals, the notion of…
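    The Riemann-sum picture the note builds on can be reproduced in a few lines of code; the midpoint sums below are an illustrative example, independent of GeoGebra.

```python
import numpy as np

# Midpoint Riemann sums for the definite integral of sin(x) on [0, pi]
# (exact value 2).  As the number of rectangles grows, the sum approaches
# the integral -- the visual idea the abstract builds on.
def midpoint_sum(f, a, b, n):
    x = np.linspace(a, b, n + 1)
    mid = 0.5 * (x[:-1] + x[1:])
    return np.sum(f(mid)) * (b - a) / n

for n in (4, 16, 64, 256):
    print(n, midpoint_sum(np.sin, 0.0, np.pi, n))
```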

  10. A novel class of highly efficient and accurate time-integrators in nonlinear computational mechanics

    NASA Astrophysics Data System (ADS)

    Wang, Xuechuan; Atluri, Satya N.

    2017-01-01

    A new class of time integrators is presented for strongly nonlinear dynamical systems. These algorithms are far superior to the currently common time integrators in computational efficiency and accuracy. The three algorithms of this class are based on a local variational iteration method applied over a finite interval of time. By using Chebyshev polynomials as trial functions and Dirac delta functions as the test functions over the finite time interval, the three algorithms are developed into three different discrete time integrators through the collocation method. These time integrators are labeled as Chebyshev local iterative collocation methods. Through examples of the forced Duffing oscillator, the Lorenz system, and the multiple coupled Duffing equations (which arise as semi-discrete equations for beams, plates and shells undergoing large deformations), it is shown that the new algorithms are far superior to the fourth-order Runge-Kutta method and MATLAB's ode45 in predicting the chaotic responses of strongly nonlinear dynamical systems.
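    For reference, the baseline integrator the abstract compares against can be sketched as follows: the classical fourth-order Runge-Kutta method applied to a forced Duffing oscillator with illustrative parameter values. This shows the comparison method, not the proposed Chebyshev local iterative collocation scheme.

```python
import numpy as np

# Classical fourth-order Runge-Kutta applied to the forced Duffing oscillator
# x'' + d*x' + a*x + b*x**3 = F*cos(w*t).  Parameter values are illustrative.
def duffing(t, y, d=0.2, a=-1.0, b=1.0, F=0.3, w=1.2):
    x, v = y
    return np.array([v, F * np.cos(w * t) - d * v - a * x - b * x**3])

def rk4(f, y0, t0, t1, n):
    h, t, y = (t1 - t0) / n, t0, np.asarray(y0, float)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
        k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
        k4 = f(t + h, y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

print(rk4(duffing, [1.0, 0.0], 0.0, 100.0, 20000))
```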

  11. A simple and accurate algorithm for path integral molecular dynamics with the Langevin thermostat.

    PubMed

    Liu, Jian; Li, Dezhang; Liu, Xinzijian

    2016-07-14

    We introduce a novel simple algorithm for thermostatting path integral molecular dynamics (PIMD) with the Langevin equation. The staging transformation of path integral beads is employed for demonstration. The optimum friction coefficients for the staging modes in the free particle limit are used for all systems. In comparison to the path integral Langevin equation thermostat, the new algorithm exploits a different order of splitting for the phase space propagator associated to the Langevin equation. While the error analysis is made for both algorithms, they are also employed in the PIMD simulations of three realistic systems (the H2O molecule, liquid para-hydrogen, and liquid water) for comparison. It is shown that the new thermostat increases the time interval of PIMD by a factor of 4-6 or more for achieving the same accuracy. In addition, the supplementary material shows the error analysis made for the algorithms when the normal-mode transformation of path integral beads is used.

  12. A simple and accurate algorithm for path integral molecular dynamics with the Langevin thermostat

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Li, Dezhang; Liu, Xinzijian

    2016-07-01

    We introduce a novel simple algorithm for thermostatting path integral molecular dynamics (PIMD) with the Langevin equation. The staging transformation of path integral beads is employed for demonstration. The optimum friction coefficients for the staging modes in the free particle limit are used for all systems. In comparison to the path integral Langevin equation thermostat, the new algorithm exploits a different order of splitting for the phase space propagator associated to the Langevin equation. While the error analysis is made for both algorithms, they are also employed in the PIMD simulations of three realistic systems (the H2O molecule, liquid para-hydrogen, and liquid water) for comparison. It is shown that the new thermostat increases the time interval of PIMD by a factor of 4-6 or more for achieving the same accuracy. In addition, the supplementary material shows the error analysis made for the algorithms when the normal-mode transformation of path integral beads is used.

  13. An accurate method for evaluating the kernel of the integral equation relating lift to downwash in unsteady potential flow

    NASA Technical Reports Server (NTRS)

    Desmarais, R. N.

    1982-01-01

    The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. The method can be used to generate approximations that attain any desired trade-off between accuracy and computing cost.
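    The core idea, fitting a sum of exponentials whose exponents form a geometric sequence by linear least squares, can be sketched as below. The target function, exponent ratio, and number of terms are illustrative assumptions; the actual kernel function and the automated choice of the exponent multiplier in the paper are not reproduced.

```python
import numpy as np

# Approximate a smooth, algebraically decaying function by a sum of
# exponentials whose exponents form a geometric sequence; the coefficients
# are found by linear least squares.  The target 1/(1+u) on [0, 10] is purely
# illustrative, not the lifting-surface kernel itself.
u = np.linspace(0.0, 10.0, 400)
target = 1.0 / (1.0 + u)

n_terms, ratio, b0 = 8, 2.0, 0.05
exponents = b0 * ratio ** np.arange(n_terms)          # geometric sequence
A = np.exp(-np.outer(u, exponents))                   # design matrix
coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)   # least-squares fit

approx = A @ coeffs
print("max abs error:", np.max(np.abs(approx - target)))
```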

  14. Numerical parameter constraints for accurate PIC-DSMC simulation of breakdown from arc initiation to stable arcs

    NASA Astrophysics Data System (ADS)

    Moore, Christopher; Hopkins, Matthew; Moore, Stan; Boerner, Jeremiah; Cartwright, Keith

    2015-09-01

    Simulation of breakdown is important for understanding and designing a variety of applications, such as mitigating undesirable discharge events. Such simulations need to be accurate from early-time arc initiation through late-time stable arc behavior. Here we examine constraints on the timestep and mesh size required for arc simulations using the particle-in-cell (PIC) method with direct simulation Monte Carlo (DSMC) collisions. Accurate simulation of electron avalanche across a fixed voltage drop and constant neutral density (reduced field of 1000 Td) was found to require a timestep ~ 1/100 of the mean time between collisions and a mesh size ~ 1/25 the mean free path. These constraints are much smaller than the typical PIC-DSMC requirements for timestep and mesh size. Both constraints are related to the fact that charged particles are accelerated by the external field. Thus gradients in the electron energy distribution function can exist at scales smaller than the mean free path, and these must be resolved by the mesh size for accurate collision rates. Additionally, the timestep must be small enough that the particle energy change due to the fields remains small enough to capture gradients in the cross sections versus energy. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
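    A back-of-the-envelope sketch of what those constraints imply is given below; the cross section, neutral density, and electron energy are assumed values chosen only to illustrate the arithmetic.

```python
import numpy as np

# Given an (assumed) electron-neutral cross section, neutral density, and
# characteristic electron energy, estimate the mean free path and mean
# collision time, then the timestep and mesh size implied by the stated
# constraints dt ~ tau/100 and dx ~ lambda/25.  All numbers are illustrative.
sigma = 1.0e-19          # total cross section, m^2 (assumed)
n_gas = 3.3e22           # neutral number density, m^-3 (assumed)
m_e, e = 9.109e-31, 1.602e-19
energy_eV = 10.0         # characteristic electron energy, eV (assumed)

v = np.sqrt(2.0 * energy_eV * e / m_e)       # electron speed, m/s
mfp = 1.0 / (n_gas * sigma)                  # mean free path, m
tau = mfp / v                                # mean time between collisions, s

print(f"mean free path  lambda = {mfp:.3e} m   ->  dx ~ {mfp/25:.3e} m")
print(f"collision time  tau    = {tau:.3e} s   ->  dt ~ {tau/100:.3e} s")
```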

  15. Multidimensional Genome-wide Analyses Show Accurate FVIII Integration by ZFN in Primary Human Cells

    PubMed Central

    Sivalingam, Jaichandran; Kenanov, Dimitar; Han, Hao; Nirmal, Ajit Johnson; Ng, Wai Har; Lee, Sze Sing; Masilamani, Jeyakumar; Phan, Toan Thang; Maurer-Stroh, Sebastian; Kon, Oi Lian

    2016-01-01

    Costly coagulation factor VIII (FVIII) replacement therapy is a barrier to optimal clinical management of hemophilia A. Therapy using FVIII-secreting autologous primary cells is potentially efficacious and more affordable. Zinc finger nucleases (ZFN) mediate transgene integration into the AAVS1 locus but comprehensive evaluation of off-target genome effects is currently lacking. In light of serious adverse effects in clinical trials which employed genome-integrating viral vectors, this study evaluated potential genotoxicity of ZFN-mediated transgenesis using different techniques. We employed deep sequencing of predicted off-target sites, copy number analysis, whole-genome sequencing, and RNA-seq in primary human umbilical cord-lining epithelial cells (CLECs) with AAVS1 ZFN-mediated FVIII transgene integration. We combined molecular features to enhance the accuracy and activity of ZFN-mediated transgenesis. Our data showed a low frequency of ZFN-associated indels, no detectable off-target transgene integrations or chromosomal rearrangements. ZFN-modified CLECs had very few dysregulated transcripts and no evidence of activated oncogenic pathways. We also showed AAVS1 ZFN activity and durable FVIII transgene secretion in primary human dermal fibroblasts, bone marrow- and adipose tissue-derived stromal cells. Our study suggests that, with close attention to the molecular design of genome-modifying constructs, AAVS1 ZFN-mediated FVIII integration in several primary human cell types may be safe and efficacious. PMID:26689265

  16. Sediment Ecosystem Assessment Protocol (SEAP): An Accurate and Integrated Weight-of-Evidence Based System

    DTIC Science & Technology

    2011-01-01


  17. Experimental analysis and numerical modeling of mollusk shells as a three dimensional integrated volume.

    PubMed

    Faghih Shojaei, M; Mohammadi, V; Rajabi, H; Darvizeh, A

    2012-12-01

    In this paper, a new numerical technique is presented to accurately model the geometrical and mechanical features of mollusk shells as a three-dimensional (3D) integrated volume. For this purpose, the Newton method is used to solve the nonlinear equations of the shell surfaces. The points of intersection on the shell surface are identified and the extra interior parts are removed. The meshing process is carried out with respect to the coordinates of each point of intersection. The final 3D generated mesh models perfectly describe the spatial configuration of the mollusk shells. Moreover, the computational model matches the actual interior geometry of the shells as well as their exterior architecture. The direct generation technique is employed to generate a 3D finite element (FE) model in ANSYS 11. X-ray images are taken to show the close similarity of the interior geometry of the models and the actual samples. A scanning electron microscope (SEM) is used to provide information on the microstructure of the shells. In addition, a set of compression tests was performed on gastropod shell specimens to obtain their ultimate compressive strength. A close agreement between experimental data and the relevant numerical results is demonstrated.

  18. Multi-omics integration accurately predicts cellular state in unexplored conditions for Escherichia coli

    PubMed Central

    Kim, Minseung; Rai, Navneet; Zorraquino, Violeta; Tagkopoulos, Ilias

    2016-01-01

    A significant obstacle in training predictive cell models is the lack of integrated data sources. We develop semi-supervised normalization pipelines and perform experimental characterization (growth, transcriptional, proteome) to create Ecomics, a consistent, quality-controlled multi-omics compendium for Escherichia coli with cohesive meta-data information. We then use this resource to train a multi-scale model that integrates four omics layers to predict genome-wide concentrations and growth dynamics. The genetic and environmental ontology reconstructed from the omics data is substantially different and complementary to the genetic and chemical ontologies. The integration of different layers confers an incremental increase in the prediction performance, as does the information about the known gene regulatory and protein-protein interactions. The predictive performance of the model ranges from 0.54 to 0.87 for the various omics layers, which far exceeds various baselines. This work provides an integrative framework of omics-driven predictive modelling that is broadly applicable to guide biological discovery. PMID:27713404

  19. Towards more accurate life cycle risk management through integration of DDP and PRA

    NASA Technical Reports Server (NTRS)

    Cornford, Steven L.; Paulos, Todd; Meshkat, Leila; Feather, Martin

    2003-01-01

    The focus of this paper is on the integration of PRA and DDP. The intent is twofold: to extend risk-based decision making through more of the lifecycle, and to lead to improved risk modeling (hence better informed decision making) wherever it is applied, most especially in the early phases as designs begin to mature.

  20. Can a numerically stable subgrid-scale model for turbulent flow computation be ideally accurate?: a preliminary theoretical study for the Gaussian filtered Navier-Stokes equations.

    PubMed

    Ida, Masato; Taniguchi, Nobuyuki

    2003-09-01

    This paper introduces a candidate for the origin of the numerical instabilities in large eddy simulation repeatedly observed in academic and practical industrial flow computations. Without resorting to any subgrid-scale modeling, but based on a simple assumption regarding the streamwise component of flow velocity, it is shown theoretically that in a channel-flow computation, the application of Gaussian filtering to the incompressible Navier-Stokes equations yields a numerically unstable term, a cross-derivative term, which is similar to one appearing in the Gaussian filtered Vlasov equation derived by Klimas [J. Comput. Phys. 68, 202 (1987)] and also to one derived recently by Kobayashi and Shimomura [Phys. Fluids 15, L29 (2003)] from the tensor-diffusivity subgrid-scale term in a dynamic mixed model. The present result predicts that not only the numerical methods and the subgrid-scale models employed, but also the applied filtering process alone, can be a seed of this numerical instability. An investigation concerning the relationship between the turbulent energy scattering and the unstable term shows that the instability of the term does not necessarily represent the backscatter of kinetic energy, which has been considered a possible origin of numerical instabilities in large eddy simulation. The present findings raise the question whether a numerically stable subgrid-scale model can be ideally accurate.

  1. Electrochemical valveless flow microsystems for ultra fast and accurate analysis of total isoflavones with integrated calibration.

    PubMed

    Blasco, Antonio Javier; Crevillén, Agustín González; de la Fuente, Pedro; González, María Cristina; Escarpa, Alberto

    2007-04-01

    A novel strategy integrating methodological calibration and analysis on board a planar first-generation microfluidic system for the determination of total isoflavones in soy samples is proposed. The analytical strategy is conceptually proposed and successfully demonstrated on the basis of (i) the microchip design (with the possibility of using both reservoirs), (ii) the analytical characteristics of the developed method (statistically zero intercept and excellent robustness between calibration slopes, RSDs < 5%), (iii) the irreversible electrochemical behaviour of isoflavone oxidation (no significant electrode fouling effect was observed between calibration and analysis runs) and (iv) the inherent versatility of the electrochemical end-channel configurations (the possibility of using different pumping and detection media). The repeatability obtained in both standard (calibration) and real soy samples (analysis), with RSD values of less than 1% for the migration times, indicates the stability of the electroosmotic flow (EOF) during both integrated operations. The accuracy (an error of less than 6%) is demonstrated for the first time in these microsystems using a documented secondary standard from the Drug Master File (SW/1211/03) as reference material. Ultra-fast calibration and analysis of total isoflavones in soy samples were integrated successfully, each requiring 60 s, notably enhancing the analytical performance of these microdevices with an important decrease in overall analysis time (less than 120 s) and an increase in accuracy by a factor of 3.

  2. Study of time-accurate integration of the variable-density Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoyi; Pantano, Carlos

    2015-11-01

    We present several theoretical elements that affect the time-consistent integration of the low-Mach-number approximation of the variable-density Navier-Stokes equations. The goal is for velocity, pressure, density, and scalars to achieve a uniform order of accuracy, consistent with the time integrator being used. We show examples of second-order (using Crank-Nicolson and Adams-Bashforth) and third-order (using additive semi-implicit Runge-Kutta) uniform convergence with the proposed conceptual framework. Furthermore, the consistent approach can be extended to other time integrators. In addition, the method is formulated using approximate/incomplete factorization methods for easy incorporation into existing solvers. One of the observed benefits of the proposed approach is improved stability, even for large density differences, in comparison with other existing formulations. A linearized stability analysis is also carried out for some test problems to better understand the behavior of the approach. This work was supported in part by the Department of Energy, National Nuclear Security Administration, under award no. DE-NA0002382 and the California Institute of Technology.
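    As an illustration of the class of schemes mentioned (Crank-Nicolson for the stiff linear part, second-order Adams-Bashforth for the rest), a hedged sketch on a scalar model equation follows; it is not the paper's variable-density Navier-Stokes formulation, and the coefficients are illustrative.

```python
import numpy as np

# Second-order semi-implicit (Crank-Nicolson / Adams-Bashforth) stepping for
# du/dt = L*u + N(u): the stiff linear part L is treated implicitly, the
# nonlinear part N explicitly.  Model coefficients are illustrative.
L = -50.0                                   # stiff linear coefficient
N = lambda u: np.sin(u)                     # mild nonlinearity (illustrative)

def cnab2(u0, h, nsteps):
    u_prev = u0
    # one semi-implicit Euler step so the AB2 part has two history levels
    u = (u0 + h * N(u0)) / (1.0 - h * L)
    for _ in range(nsteps - 1):
        rhs = u + 0.5 * h * L * u + h * (1.5 * N(u) - 0.5 * N(u_prev))
        u_prev, u = u, rhs / (1.0 - 0.5 * h * L)
    return u

print(cnab2(1.0, h=0.01, nsteps=1000))
```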

  3. Enabling fast, stable and accurate peridynamic computations using multi-time-step integration

    SciTech Connect

    Lindsay, P.; Parks, M. L.; Prakash, A.

    2016-04-13

    Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.

  4. Enabling fast, stable and accurate peridynamic computations using multi-time-step integration

    DOE PAGES

    Lindsay, P.; Parks, M. L.; Prakash, A.

    2016-04-13

    Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.

  5. Accurate Detection of Interaural Time Differences by a Population of Slowly Integrating Neurons

    NASA Astrophysics Data System (ADS)

    Vasilkov, Viacheslav A.; Tikidji-Hamburyan, Ruben A.

    2012-03-01

    For localization of a sound source, animals and humans process the microsecond interaural time differences of arriving sound waves. How nervous systems, consisting of elements with time constants of about 1 ms or more, can reach such high precision is still an open question. In this Letter we present a hypothesis and show theoretical and computational evidence that a rather large population of slowly integrating neurons with inhibitory and excitatory inputs (EI neurons) can detect minute temporal disparities in input signals which are significantly smaller than any time constant in the system.

  6. Integrative subcellular proteomic analysis allows accurate prediction of human disease-causing genes

    PubMed Central

    Zhao, Li; Chen, Yiyun; Bajaj, Amol Onkar; Eblimit, Aiden; Xu, Mingchu; Soens, Zachry T.; Wang, Feng; Ge, Zhongqi; Jung, Sung Yun; He, Feng; Li, Yumei; Wensel, Theodore G.; Qin, Jun; Chen, Rui

    2016-01-01

    Proteomic profiling on subcellular fractions provides invaluable information regarding both protein abundance and subcellular localization. When integrated with other data sets, it can greatly enhance our ability to predict gene function genome-wide. In this study, we performed a comprehensive proteomic analysis on the light-sensing compartment of photoreceptors called the outer segment (OS). By comparing with the protein profile obtained from the retina tissue depleted of OS, an enrichment score for each protein is calculated to quantify protein subcellular localization, and 84% accuracy is achieved compared with experimental data. By integrating the protein OS enrichment score, the protein abundance, and the retina transcriptome, the probability of a gene playing an essential function in photoreceptor cells is derived with high specificity and sensitivity. As a result, a list of genes that will likely result in human retinal disease when mutated was identified and validated by previous literature and/or animal model studies. Therefore, this new methodology demonstrates the synergy of combining subcellular fractionation proteomics with other omics data sets and is generally applicable to other tissues and diseases. PMID:26912414

  7. Accurate and efficient integration for molecular dynamics simulations at constant temperature and pressure.

    PubMed

    Lippert, Ross A; Predescu, Cristian; Ierardi, Douglas J; Mackenzie, Kenneth M; Eastwood, Michael P; Dror, Ron O; Shaw, David E

    2013-10-28

    In molecular dynamics simulations, control over temperature and pressure is typically achieved by augmenting the original system with additional dynamical variables to create a thermostat and a barostat, respectively. These variables generally evolve on timescales much longer than those of particle motion, but typical integrator implementations update the additional variables along with the particle positions and momenta at each time step. We present a framework that replaces the traditional integration procedure with separate barostat, thermostat, and Newtonian particle motion updates, allowing thermostat and barostat updates to be applied infrequently. Such infrequent updates provide a particularly substantial performance advantage for simulations parallelized across many computer processors, because thermostat and barostat updates typically require communication among all processors. Infrequent updates can also improve accuracy by alleviating certain sources of error associated with limited-precision arithmetic. In addition, separating the barostat, thermostat, and particle motion update steps reduces certain truncation errors, bringing the time-average pressure closer to its target value. Finally, this framework, which we have implemented on both general-purpose and special-purpose hardware, reduces software complexity and improves software modularity.

  8. An integrative variant analysis pipeline for accurate genotype/haplotype inference in population NGS data.

    PubMed

    Wang, Yi; Lu, James; Yu, Jin; Gibbs, Richard A; Yu, Fuli

    2013-05-01

    Next-generation sequencing is a powerful approach for discovering genetic variation. Sensitive variant calling and haplotype inference from population sequencing data remain challenging. We describe methods for high-quality discovery, genotyping, and phasing of SNPs for low-coverage (approximately 5×) sequencing of populations, implemented in a pipeline called SNPTools. Our pipeline contains several innovations that specifically address challenges caused by low-coverage population sequencing: (1) effective base depth (EBD), a nonparametric statistic that enables more accurate statistical modeling of sequencing data; (2) variance ratio scoring, a variance-based statistic that discovers polymorphic loci with high sensitivity and specificity; and (3) BAM-specific binomial mixture modeling (BBMM), a clustering algorithm that generates robust genotype likelihoods from heterogeneous sequencing data. Last, we develop an imputation engine that refines raw genotype likelihoods to produce high-quality phased genotypes/haplotypes. Designed for large population studies, SNPTools' input/output (I/O) and storage aware design leads to improved computing performance on large sequencing data sets. We apply SNPTools to the International 1000 Genomes Project (1000G) Phase 1 low-coverage data set and obtain genotyping accuracy comparable to that of SNP microarray.

  9. Integration of an intensified charge-coupled device (ICCD) camera for accurate spectroscopic measurements.

    PubMed

    Peláez, Ramón Javier; Mar, Santiago; Aparicio, Juan Antonio; Belmonte, María Teresa

    2012-08-01

    Intensified charge-coupled devices (ICCDs) are used in a great variety of spectroscopic applications, some of them requiring high sensitivity and spectral resolution. The setup, configuration, and characterization of these cameras are fundamental issues for acquiring high-quality spectra. In this work a critical assessment of these detectors is performed, and the specific configuration, the optical alignment, the characterization, and the dark and shot noise are described and analyzed. The spatial response of the detector usually shows a significant lack of homogeneity, and a map of interferences may appear in certain wavelength ranges, which degrades the quality of the recorded spectra. In this work the spectral resolution and the spatial and spectral sensitivity are also studied. The analysis of the dark current reveals the existence of a smooth but clear spatial dependence. As a final conclusion, the spectra registered with the spectrometer equipped with our ICCD camera allow us to explore and accurately measure spectral line shapes emitted by pulsed plasmas in the visible range and particularly in the ultraviolet (UV) range.

  10. Numerical integration of population models satisfying conservation laws: NSFD methods.

    PubMed

    Mickens, Ronald E

    2007-10-01

    Population models arising in ecology, epidemiology and mathematical biology may involve a conservation law, i.e., the total population is constant. In addition to these cases, other situations may occur in which the total population asymptotically approaches a constant value in time. Since it is rarely the case that the equations of motion can be solved analytically to obtain exact solutions, numerical techniques are needed to provide solutions. However, numerical procedures are only valid if they can reproduce fundamental properties of the differential equations modeling the phenomena of interest. We show that for population models involving a dynamical conservation law, the use of nonstandard finite difference (NSFD) methods allows the construction of discretization schemes that are dynamically consistent (DC) with the original differential equations. The paper briefly discusses the NSFD methodology and the concept of DC, and illustrates their application to specific population models.
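    A hedged sketch of a dynamically consistent NSFD discretization is shown below for an SIR-type model whose total population is conserved; the parameter values are illustrative, and the simplest denominator function φ(h) = h is used (NSFD constructions often use a more elaborate choice such as φ(h) = 1 − e^(−h)).

```python
import numpy as np

# Nonstandard finite difference (NSFD) update for an SIR-type model with the
# conservation law S + I + R = N.  The nonlocal placement of terms keeps the
# update positive and conserves the total population exactly at every step.
beta, gamma, N = 0.4, 0.1, 1000.0
phi = lambda h: h                      # denominator function (simplest choice)

def nsfd_step(S, I, R, h):
    p = phi(h)
    S_new = S / (1.0 + p * beta * I / N)
    I_new = (I + p * beta * S_new * I / N) / (1.0 + p * gamma)
    R_new = R + p * gamma * I_new
    return S_new, I_new, R_new

S, I, R = 990.0, 10.0, 0.0
for _ in range(1000):
    S, I, R = nsfd_step(S, I, R, h=0.5)
print("final S, I, R:", S, I, R, "  total:", S + I + R)   # total stays at N
```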

  11. Impact of numerical integration on gas curtain simulations

    SciTech Connect

    Rider, W.; Kamm, J.

    2000-11-01

    In recent years, we have presented a less than glowing experimental comparison of hydrodynamic codes with the gas curtain experiment (e.g., Kamm et al. 1999a). Here, we discuss the manner in which the details of the hydrodynamic integration techniques may conspire to produce poor results. This also includes some progress in improving the results and agreement with experimental results. Because our comparison was conducted on the details of the experimental images (i.e., their detailed structural information), our results do not conflict with previously published results of good agreement with Richtmyer-Meshkov instabilities based on the integral scale of mixing. New experimental and analysis techniques are also discussed.

  12. Numerical simulation of scattering of acoustic waves by inelastic bodies using hypersingular boundary integral equation

    SciTech Connect

    Daeva, S.G.; Setukha, A.V.

    2015-03-10

    A numerical method is proposed for solving the problem of diffraction of acoustic waves by a system of solid and thin objects, based on reducing the problem to a boundary integral equation in which the integral is understood in the sense of the Hadamard finite part. To solve this equation, a numerical scheme based on piecewise-constant approximations and the collocation method is applied. The constructed scheme differs from earlier ones in that approximate analytical expressions for the coefficients of the resulting system of linear equations are obtained by separating out the main part of the kernel of the integral operator. The proposed numerical scheme is tested on the solution of the model problem of diffraction of an acoustic wave by an inelastic sphere.

  13. Advances in numerical solutions to integral equations in liquid state theory

    NASA Astrophysics Data System (ADS)

    Howard, Jesse J.

    Solvent effects play a vital role in the accurate description of the free energy profile for solution-phase chemical and structural processes. The inclusion of solvent effects in any meaningful theoretical model, however, has proven to be a formidable task. Generally, methods involving Poisson-Boltzmann (PB) theory and molecular dynamics (MD) simulations are used, but they either fail to accurately describe the solvent effects or require an exhaustive computational effort to overcome sampling problems. An alternative to these methods is provided by the integral equations (IEs) of liquid state theory, which have become more widely applicable due to recent advancements in the theory of interaction-site fluids and the numerical methods to solve the equations. In this work a new numerical method is developed based on a Newton-type scheme coupled with Picard/MDIIS routines. To extend the range of these numerical methods to large-scale data systems, the size of the Jacobian is reduced using basis functions, and the Newton steps are calculated using a GMRes solver. The method is then applied to calculate solutions to the 3D reference interaction site model (RISM) IEs of statistical mechanics, which are derived from first principles, for a solute model of a pair of parallel graphene plates at various separations in pure water. The 3D IEs are then extended to electrostatic models using an exact treatment of the long-range Coulomb interactions for negatively charged walls and DNA duplexes in aqueous electrolyte solutions to calculate the density profiles and solution thermodynamics. It is found that the 3D-IEs provide a qualitative description of the density distributions of the solvent species when compared to MD results, but at a much reduced computational effort in comparison to MD simulations. The thermodynamics of the solvated systems are also qualitatively reproduced by the IE results. The findings of this work show the IEs to be a valuable tool for the study and prediction of

  14. Numerical Research of Airframe/Engine Integrative Hypersonic Vehicle

    DTIC Science & Technology

    2007-11-02

    paper, an engineering method and a finite volume method based on the center of grid are developed for preliminary research of interested integrative...development of hypersonic technology, advanced experimental, analytical and computational methods are being exploited in the design of hypersonic...configurations to obtain excellent aerodynamic characteristics[5]. Due to the limitation of test capabilities to model all the impossible flight conditions

  15. Simpson's Rule by Rectangles: A Numerical Approach to Integration.

    ERIC Educational Resources Information Center

    Powell, Martin

    1985-01-01

    Shows that Simpson's rule can be obtained as a weighted average of three simple rectangular approximations and can therefore be introduced to students before they meet any calculus. In addition, the accuracy of the rule (which is exact for cubics) can be exploited to introduce the topic of integration. (JN)
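    A short sketch of the idea, assuming the usual weights 1/6, 4/6, 1/6 for the left-endpoint, midpoint, and right-endpoint rectangles; the composite version and the test integrand are illustrative.

```python
import numpy as np

# Simpson's rule on [a, b] written as a weighted average of three rectangle
# approximations -- left-endpoint, midpoint, and right-endpoint -- with
# weights 1/6, 4/6, 1/6.
def simpson_from_rectangles(f, a, b):
    width = b - a
    left, mid, right = width * f(a), width * f(0.5 * (a + b)), width * f(b)
    return (left + 4.0 * mid + right) / 6.0

# Composite version over n subintervals, compared with the exact integral.
def composite(f, a, b, n):
    edges = np.linspace(a, b, n + 1)
    return sum(simpson_from_rectangles(f, edges[i], edges[i + 1]) for i in range(n))

print(composite(np.sin, 0.0, np.pi, 10), "vs exact", 2.0)
```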

  16. Integrated numerical methods for hypersonic aircraft cooling systems analysis

    NASA Technical Reports Server (NTRS)

    Petley, Dennis H.; Jones, Stuart C.; Dziedzic, William M.

    1992-01-01

    Numerical methods have been developed for the analysis of hypersonic aircraft cooling systems. A general-purpose finite difference thermal analysis code is used to determine areas which must be cooled. Complex cooling networks of series and parallel flow can be analyzed using a finite difference computer program. Both internal fluid flow and heat transfer are analyzed, because increased heat flow causes a decrease in the flow of the coolant. The steady-state solution is obtained by a successive point iterative method. The transient analysis uses implicit forward-backward differencing. Several examples of the use of the program in studies of hypersonic aircraft and rockets are provided.

  17. Accurate simulation of two-dimensional optical microcavities with uniquely solvable boundary integral equations and trigonometric Galerkin discretization.

    PubMed

    Boriskina, Svetlana V; Sewell, Phillip; Benson, Trevor M; Nosich, Alexander I

    2004-03-01

    A fast and accurate method is developed to compute the natural frequencies and scattering characteristics of arbitrary-shape two-dimensional dielectric resonators. The problem is formulated in terms of a uniquely solvable set of second-kind boundary integral equations and discretized by the Galerkin method with angular exponents as global test and trial functions. The log-singular term is extracted from one of the kernels, and closed-form expressions are derived for the main parts of all the integral operators. The resulting discrete scheme has a very high convergence rate. The method is used in the simulation of several optical microcavities for modern dense wavelength-division-multiplexed systems.

  18. iCut: an Integrative Cut Algorithm Enables Accurate Segmentation of Touching Cells.

    PubMed

    He, Yong; Gong, Hui; Xiong, Benyi; Xu, Xiaofeng; Li, Anan; Jiang, Tao; Sun, Qingtao; Wang, Simin; Luo, Qingming; Chen, Shangbin

    2015-07-14

    Individual cells play essential roles in the biological processes of the brain. The number of neurons changes during both normal development and disease progression. High-resolution imaging has made it possible to directly count cells. However, the automatic and precise segmentation of touching cells continues to be a major challenge for massive and highly complex datasets. Thus, an integrative cut (iCut) algorithm, which combines information regarding spatial location and intervening and concave contours with the established normalized cut, has been developed. iCut involves two key steps: (1) a weighting matrix is first constructed with the abovementioned information regarding the touching cells and (2) a normalized cut algorithm that uses the weighting matrix is implemented to separate the touching cells into isolated cells. This novel algorithm was evaluated using two types of data: the open SIMCEP benchmark dataset and our micro-optical imaging dataset from a Nissl-stained mouse brain. It has achieved a promising recall/precision of 91.2 ± 2.1%/94.1 ± 1.8% and 86.8 ± 4.1%/87.5 ± 5.7%, respectively, for the two datasets. As quantified using the harmonic mean of recall and precision, the accuracy of iCut is higher than that of some state-of-the-art algorithms. The better performance of this fully automated algorithm can benefit studies of brain cytoarchitecture.

  19. EZ-Rhizo: integrated software for the fast and accurate measurement of root system architecture.

    PubMed

    Armengaud, Patrick; Zambaux, Kevin; Hills, Adrian; Sulpice, Ronan; Pattison, Richard J; Blatt, Michael R; Amtmann, Anna

    2009-03-01

    The root system is essential for the growth and development of plants. In addition to anchoring the plant in the ground, it is the site of uptake of water and minerals from the soil. Plant root systems show an astonishing plasticity in their architecture, which allows for optimal exploitation of diverse soil structures and conditions. The signalling pathways that enable plants to sense and respond to changes in soil conditions, in particular nutrient supply, are a topic of intensive research, and root system architecture (RSA) is an important and obvious phenotypic output. At present, the quantitative description of RSA is labour intensive and time consuming, even using the currently available software, and the lack of a fast RSA measuring tool hampers forward and quantitative genetics studies. Here, we describe EZ-Rhizo: a Windows-integrated and semi-automated computer program designed to detect and quantify multiple RSA parameters from plants growing on a solid support medium. The method is non-invasive, enabling the user to follow RSA development over time. We have successfully applied EZ-Rhizo to evaluate natural variation in RSA across 23 Arabidopsis thaliana accessions, and have identified new RSA determinants as a basis for future quantitative trait locus (QTL) analysis.

  20. iCut: an Integrative Cut Algorithm Enables Accurate Segmentation of Touching Cells

    PubMed Central

    He, Yong; Gong, Hui; Xiong, Benyi; Xu, Xiaofeng; Li, Anan; Jiang, Tao; Sun, Qingtao; Wang, Simin; Luo, Qingming; Chen, Shangbin

    2015-01-01

    Individual cells play essential roles in the biological processes of the brain. The number of neurons changes during both normal development and disease progression. High-resolution imaging has made it possible to directly count cells. However, the automatic and precise segmentation of touching cells continues to be a major challenge for massive and highly complex datasets. Thus, an integrative cut (iCut) algorithm, which combines information regarding spatial location and intervening and concave contours with the established normalized cut, has been developed. iCut involves two key steps: (1) a weighting matrix is first constructed with the abovementioned information regarding the touching cells and (2) a normalized cut algorithm that uses the weighting matrix is implemented to separate the touching cells into isolated cells. This novel algorithm was evaluated using two types of data: the open SIMCEP benchmark dataset and our micro-optical imaging dataset from a Nissl-stained mouse brain. It has achieved a promising recall/precision of 91.2 ± 2.1%/94.1 ± 1.8% and 86.8 ± 4.1%/87.5 ± 5.7%, respectively, for the two datasets. As quantified using the harmonic mean of recall and precision, the accuracy of iCut is higher than that of some state-of-the-art algorithms. The better performance of this fully automated algorithm can benefit studies of brain cytoarchitecture. PMID:26168908

  1. Accurate measurement of optical properties of narrow leaves and conifer needles with a typical integrating sphere and spectroradiometer.

    PubMed

    Noda, Hibiki M; Motohka, Takeshi; Murakami, Kazutaka; Muraoka, Hiroyuki; Nasahara, Kenlo Nishida

    2013-10-01

    Accurate information on the optical properties (reflectance and transmittance spectra) of single leaves is important for an ecophysiological understanding of light use by leaves, radiative transfer models and remote sensing of terrestrial ecosystems. In general, leaf optical properties are measured with an integrating sphere and a spectroradiometer. However, this method is usually difficult to use with grass leaves and conifer needles because they are too narrow to cover the sample port of a typical integrating sphere. Although ways to measure the optical properties of narrow leaves have been suggested, they have problems. We propose a new measurement protocol and calculation algorithms. The protocol does not damage sample leaves and is valid for various types of leaves, including green and senescent. We tested our technique with leaves of Aucuba japonica, an evergreen broadleaved shrub, and compared the spectral data of whole leaves and narrow strips of the leaves. The reflectance and transmittance of the strips matched those of the whole leaves, indicating that our technique can accurately estimate the optical properties of narrow leaves. Tests of conifer needles confirmed the applicability.

  2. Numerical approximation of weakly singular integrals on a triangle

    NASA Astrophysics Data System (ADS)

    Serafini, Giada

    2016-10-01

    In this paper, we propose product cubature rules based on polynomial approximation for evaluating integrals of the form I(F; y) = ∫_T K(x, y) F(x) ω(x) dx, where x = (x_1, x_2), y = (y_1, y_2), K is a “weakly” singular or a “nearly” singular kernel, the domain T is the triangle with vertices (0, 0), (0, 1), (1, 0), F is a given bivariate function defined on T, and ω is a suitable weight function.
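    A hedged sketch of one way to build a product rule on this triangle, using a Duffy-type collapse of the unit square onto T with tensor-product Gauss-Legendre nodes; this illustrates the idea of a product cubature over T, not the specific rules proposed in the paper, and the test integrand is illustrative.

```python
import numpy as np

# Product cubature on the reference triangle T = {(x1, x2): x1, x2 >= 0,
# x1 + x2 <= 1}: map the unit square onto T with the Duffy-type substitution
# x1 = u, x2 = v*(1 - u) (Jacobian 1 - u) and apply a tensor-product
# Gauss-Legendre rule.
def gauss_01(n):
    t, w = np.polynomial.legendre.leggauss(n)
    return 0.5 * (t + 1.0), 0.5 * w            # nodes/weights on [0, 1]

def cubature_triangle(f, n=12):
    u, wu = gauss_01(n)
    v, wv = gauss_01(n)
    total = 0.0
    for ui, wi in zip(u, wu):
        for vj, wj in zip(v, wv):
            x1, x2 = ui, vj * (1.0 - ui)
            total += wi * wj * (1.0 - ui) * f(x1, x2)
    return total

# Check on a polynomial: the integral of x1*x2 over T is 1/24.
print(cubature_triangle(lambda x1, x2: x1 * x2), 1.0 / 24.0)
```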

  3. An Integrated Numerical Hydrodynamic Shallow Flow-Solute Transport Model for Urban Area

    NASA Astrophysics Data System (ADS)

    Alias, N. A.; Mohd Sidek, L.

    2016-03-01

Rapidly changing land profiles in some urban areas of Malaysia have led to increasing flood risk. Extensive development of densely populated areas and urbanization worsen the flood scenario. An early warning system is therefore very important, and a popular approach is to numerically simulate the river and flood flows. There are many two-dimensional (2D) flood models that predict the flood level, but in some circumstances it is still difficult to resolve the river reach in a 2D manner. A systematic early warning system requires a precise prediction of flow depth; hence a reliable one-dimensional (1D) model that provides an accurate description of the flow is essential. The research also aims to resolve some of the raised issues, such as the fate of pollutants in a river reach, by developing an integrated hydrodynamic shallow flow-solute transport model. Presented in this paper are results on flow prediction for Sungai Penchala and the convection-diffusion of solute transport simulated by the developed model.

  4. Fast-Fourier-transform based numerical integration method for the Rayleigh-Sommerfeld diffraction formula

    NASA Astrophysics Data System (ADS)

    Shen, Fabin; Wang, Anbo

    2006-02-01

    The numerical calculation of the Rayleigh-Sommerfeld diffraction integral is investigated. The implementation of a fast-Fourier-transform (FFT) based direct integration (FFT-DI) method is presented, and Simpson's rule is used to improve the calculation accuracy. The sampling interval, the size of the computation window, and their influence on numerical accuracy and on computational complexity are discussed for the FFT-DI and the FFT-based angular spectrum (FFT-AS) methods. The performance of the FFT-DI method is verified by numerical simulation and compared with that of the FFT-AS method.
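
    The key observation behind the FFT-DI method is that the Rayleigh-Sommerfeld integral is a 2D convolution of the source field with the free-space impulse response, so it can be evaluated with FFT-based convolution in O(N log N) operations. The sketch below uses plain rectangle-rule weights rather than the Simpson weighting discussed in the paper, and all grid and wavelength values are illustrative assumptions:

```python
import numpy as np
from scipy.signal import fftconvolve

# Illustrative grid and wave parameters
wl = 0.5e-6                        # wavelength [m]
k = 2.0 * np.pi / wl
z = 5e-3                           # propagation distance [m]
N, dx = 256, 2e-6                  # samples per side and sampling interval [m]
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)

# Source field: circular aperture illuminated by a unit plane wave
U0 = (X**2 + Y**2 <= (50e-6) ** 2).astype(complex)

# First Rayleigh-Sommerfeld impulse response
#   g(x, y) = z / (2*pi) * exp(i*k*r) / r**2 * (1/r - i*k),  r = sqrt(x^2 + y^2 + z^2)
r = np.sqrt(X**2 + Y**2 + z**2)
g = z / (2.0 * np.pi) * np.exp(1j * k * r) / r**2 * (1.0 / r - 1j * k)

# Direct integration as an FFT-based convolution (rectangle-rule weights)
U = fftconvolve(U0, g, mode="same") * dx * dx
print(abs(U[N // 2, N // 2]))      # on-axis field amplitude
```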

  5. On the use of the line integral in the numerical treatment of conservative problems

    NASA Astrophysics Data System (ADS)

    Brugnano, Luigi; Iavernaro, Felice

    2016-06-01

    We sketch out the use of the line integral as a tool to devise numerical methods suitable for conservative and, in particular, Hamiltonian problems. The monograph [3] presents the fundamental theory on line integral methods and this short note aims at exploring some aspects and results emerging from their study.

  6. iPE-MMR: An integrated approach to accurately assign monoisotopic precursor masses to tandem mass spectrometric data

    PubMed Central

    Jung, Hee-Jung; Purvine, Samuel O.; Kim, Hokeun; Petyuk, Vladislav A.; Hyung, Seok-Won; Monroe, Matthew E.; Mun, Dong-Gi; Kim, Kyong-Chul; Park, Jong-Moon; Kim, Su-Jin; Tolic, Nikola; Slysz, Gordon W.; Moore, Ronald J.; Zhao, Rui; Adkins, Joshua N.; Anderson, Gordon A.; Lee, Hookeun; Camp, David G.; Yu, Myeong-Hee; Smith, Richard D.; Lee, Sang-Won

    2010-01-01

Accurate assignment of monoisotopic precursor masses to tandem mass spectrometric (MS/MS) data is a fundamental and critically important step for successful peptide identifications in mass spectrometry based proteomics. Here we describe an integrated approach that combines three previously reported methods of treating MS/MS data for precursor mass refinement. This combined method, “integrated Post-Experiment Monoisotopic Mass Refinement” (iPE-MMR), integrates the following steps: (1) generation of refined MS/MS data by DeconMSn; (2) additional refinement of the resultant MS/MS data by a modified version of PE-MMR; and (3) elimination of systematic errors of precursor masses using DtaRefinery. iPE-MMR is the first method that utilizes all MS information from multiple MS scans of a precursor ion, including multiple charge states within an MS scan, to determine the precursor mass. By combining these methods, iPE-MMR increases sensitivity in peptide identification and provides increased accuracy when applied to complex high-throughput proteomics data. PMID:20863060

  7. Numerical solution of a class of integral equations arising in two-dimensional aerodynamics

    NASA Technical Reports Server (NTRS)

    Fromme, J.; Golberg, M. A.

    1978-01-01

    We consider the numerical solution of a class of integral equations arising in the determination of the compressible flow about a thin airfoil in a ventilated wind tunnel. The integral equations are of the first kind with kernels having a Cauchy singularity. Using appropriately chosen Hilbert spaces, it is shown that the kernel gives rise to a mapping which is the sum of a unitary operator and a compact operator. This allows the problem to be studied in terms of an equivalent integral equation of the second kind. A convergent numerical algorithm for its solution is derived by using Galerkin's method. It is shown that this algorithm is numerically equivalent to Bland's collocation method, which is then used as the method of computation. Extensive numerical calculations are presented establishing the validity of the theory.

  8. A novel, integrated PET-guided MRS technique resulting in more accurate initial diagnosis of high-grade glioma.

    PubMed

    Kim, Ellen S; Satter, Martin; Reed, Marilyn; Fadell, Ronald; Kardan, Arash

    2016-06-01

Glioblastoma multiforme (GBM) is the most common and lethal malignant glioma in adults. Currently, the modality of choice for diagnosing brain tumors is high-resolution magnetic resonance imaging (MRI) with contrast, which provides anatomic detail and localization. Studies have demonstrated, however, that MRI may have limited utility in delineating the full tumor extent precisely. Studies suggest that MR spectroscopy (MRS) can also be used to distinguish high-grade from low-grade gliomas. However, due to operator-dependent variables and the heterogeneous nature of gliomas, the potential for error in diagnostic accuracy with MRS is a concern. Positron emission tomography (PET) imaging with (11)C-methionine (MET) and (18)F-fluorodeoxyglucose (FDG) has been shown to add additional information with respect to tumor grade, extent, and prognosis based on the premise of biochemical changes preceding anatomic changes. Combined PET/MRS is a technique that integrates information from PET in guiding the location for the most accurate metabolic characterization of a lesion via MRS. We describe a case of glioblastoma multiforme in which MRS was initially non-diagnostic for malignancy, but when MRS was repeated with PET guidance, demonstrated an elevated choline/N-acetylaspartate (Cho/NAA) ratio in the right parietal mass consistent with a high-grade malignancy. Stereotactic biopsy, followed by PET image-guided resection, confirmed the diagnosis of grade IV GBM. To our knowledge, this is the first reported case of an integrated PET/MRS technique for the voxel placement of MRS. Our findings suggest that integrated PET/MRS may potentially improve diagnostic accuracy in high-grade gliomas.

  9. Numerical method to solve Cauchy type singular integral equation with error bounds

    NASA Astrophysics Data System (ADS)

    Setia, Amit; Sharma, Vaishali; Liu, Yucheng

    2017-01-01

Cauchy type singular integral equations with index zero naturally occur in the field of aerodynamics. The literature on these equations is well developed, and Chebyshev polynomials are most frequently used to solve them. In this paper, a residual-based Galerkin method using Legendre polynomials as basis functions is proposed to solve the Cauchy singular integral equation of index zero. It converts the Cauchy singular integral equation into a system of equations that can be easily solved. Test examples are given to illustrate the proposed numerical method. Error bounds are derived and implemented in all the test examples.

  10. A comparison of the efficiency of numerical methods for integrating chemical kinetic rate equations

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    1984-01-01

The efficiency of several algorithms used for numerical integration of stiff ordinary differential equations was compared. The methods examined included two general-purpose codes, EPISODE and LSODE, and three codes (CHEMEQ, CREK1D, and GCKP84) developed specifically to integrate chemical kinetic rate equations. The codes were applied to two test problems drawn from combustion kinetics. The comparisons show that LSODE is the fastest code available for the integration of combustion kinetic rate equations. It is shown that an iterative solution of the algebraic energy conservation equation to compute the temperature can be more efficient than evaluating the temperature by integrating its time-derivative.
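
    EPISODE, LSODE, CHEMEQ, CREK1D, and GCKP84 are legacy Fortran codes. As a rough modern analogue of the stiff-versus-nonstiff comparison, the sketch below times SciPy's LSODA solver (a descendant of the LSODE family) against the explicit RK45 solver on Robertson's classic stiff kinetics problem, which is an assumed stand-in rather than one of the report's combustion test cases:

```python
import time
from scipy.integrate import solve_ivp

def robertson(t, y):
    """Robertson's stiff chemical kinetics test problem."""
    y1, y2, y3 = y
    return [-0.04 * y1 + 1.0e4 * y2 * y3,
            0.04 * y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2**2,
            3.0e7 * y2**2]

y0 = [1.0, 0.0, 0.0]
for method in ("LSODA", "RK45"):
    start = time.perf_counter()
    sol = solve_ivp(robertson, (0.0, 10.0), y0, method=method,
                    rtol=1e-6, atol=1e-10)
    elapsed = time.perf_counter() - start
    print(f"{method:5s}  nfev = {sol.nfev:7d}  wall time = {elapsed:.3f} s")
```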

  11. Orbit determination based on meteor observations using numerical integration of equations of motion

    NASA Astrophysics Data System (ADS)

    Dmitriev, Vasily; Lupovka, Valery; Gritsevich, Maria

    2015-11-01

Recently, there has been a worldwide proliferation of instruments and networks dedicated to observing meteors, including airborne and future space-based monitoring systems. There has been a corresponding rapid rise in high quality data accumulating annually. In this paper, we present a method embodied in the open-source software program "Meteor Toolkit", which can effectively and accurately process these data in an automated mode and discover the pre-impact orbit and possibly the origin or parent body of a meteoroid or asteroid. The required input parameters are the topocentric pre-atmospheric velocity vector and the coordinates of the atmospheric entry point of the meteoroid, i.e. the beginning point of the visual path of a meteor, in an Earth-centered, Earth-fixed coordinate system, the International Terrestrial Reference Frame (ITRF). Our method is based on a strict coordinate transformation from the ITRF to an inertial reference frame and on numerical integration of the equations of motion for a perturbed two-body problem. Basic accelerations perturbing a meteoroid's orbit and their influence on the orbital elements are also studied and demonstrated. Our method is then compared with several published studies that utilized variations of a traditional analytical technique, the zenith attraction method, which corrects for the direction of the meteor's trajectory and its apparent velocity due to Earth's gravity. We then demonstrate the proposed technique on new observational data obtained from the Finnish Fireball Network (FFN) as well as on simulated data. In addition, we propose a method of analysis of error propagation, based on the general rule of covariance transformation.

  12. Frequency responses and resolving power of numerical integration of sampled data

    NASA Astrophysics Data System (ADS)

    Yaroslavsky, L. P.; Moreno, A.; Campos, J.

    2005-04-01

Methods of numerical integration of sampled data are compared in terms of their frequency responses and resolving power. Compared, theoretically and by numerical experiments, are the trapezoidal, Simpson, and Simpson-3/8 methods, a method based on cubic spline data interpolation, and a Discrete Fourier Transform (DFT) based method. Boundary effects associated with the DFT-based and spline-based methods are investigated, and an improved Discrete Cosine Transform based method is suggested and shown to be superior to all other methods, both in terms of approximation to the ideal continuous integrator and in the level of the boundary effects.

  13. Numerical integration of the stochastic Landau-Lifshitz-Gilbert equation in generic time-discretization schemes.

    PubMed

    Romá, Federico; Cugliandolo, Leticia F; Lozano, Gustavo S

    2014-08-01

    We introduce a numerical method to integrate the stochastic Landau-Lifshitz-Gilbert equation in spherical coordinates for generic discretization schemes. This method conserves the magnetization modulus and ensures the approach to equilibrium under the expected conditions. We test the algorithm on a benchmark problem: the dynamics of a uniformly magnetized ellipsoid. We investigate the influence of various parameters, and in particular, we analyze the efficiency of the numerical integration, in terms of the number of steps needed to reach a chosen long time with a given accuracy.

  14. Feasibility study of the numerical integration of shell equations using the field method

    NASA Technical Reports Server (NTRS)

    Cohen, G. A.

    1973-01-01

    The field method is developed for arbitrary open branch domains subjected to general linear boundary conditions. Although closed branches are within the scope of the method, they are not treated here. The numerical feasibility of the method has been demonstrated by implementing it in a computer program for the linear static analysis of open branch shells of revolution under asymmetric loads. For such problems the field method eliminates the well-known numerical problem of long subintervals associated with the rapid growth of extraneous solutions. Also, the method appears to execute significantly faster than other numerical integration methods.

  15. Accurate path integral molecular dynamics simulation of ab-initio water at near-zero added cost

    NASA Astrophysics Data System (ADS)

    Elton, Daniel; Fritz, Michelle; Soler, José; Fernandez-Serra, Marivi

It is now established that nuclear quantum motion plays an important role in determining water's structure and dynamics. These effects are important to consider when evaluating DFT functionals and attempting to develop better ones for water. The standard way of treating nuclear quantum effects, path integral molecular dynamics (PIMD), multiplies the number of energy/force calculations by the number of beads, which is typically 32. Here we introduce a method whereby PIMD can be incorporated into a DFT molecular dynamics simulation at virtually zero cost. The method is based on the cluster (many body) expansion of the energy. We first subtract the DFT monomer energies, using a custom DFT-based monomer potential energy surface. The evolution of the PIMD beads is then performed using only the more accurate Partridge-Schwenke monomer energy surface. The DFT calculations are done using the centroid positions. Various bead thermostats can be employed to speed up the sampling of the quantum ensemble. The method bears some resemblance to multiple timestep algorithms and other schemes used to speed up PIMD with classical force fields. We show that our method correctly captures some of the key effects of nuclear quantum motion on both the structure and dynamics of water. We acknowledge support from DOE Award No. DE-FG02-09ER16052 (D.E.) and DOE Early Career Award No. DE-SC0003871 (M.V.F.S.).

  16. Nuclear Quantum Effects in Liquid Water: A Highly Accurate ab initio Path-Integral Molecular Dynamics Study

    NASA Astrophysics Data System (ADS)

    Distasio, Robert A., Jr.; Santra, Biswajit; Ko, Hsin-Yu; Car, Roberto

    2014-03-01

    In this work, we report highly accurate ab initio path-integral molecular dynamics (AI-PIMD) simulations on liquid water at ambient conditions utilizing the recently developed PBE0+vdW(SC) exchange-correlation functional, which accounts for exact exchange and a self-consistent pairwise treatment of van der Waals (vdW) or dispersion interactions, combined with nuclear quantum effects (via the colored-noise generalized Langevin equation). The importance of each of these effects in the theoretical prediction of the structure of liquid water will be demonstrated by a detailed comparative analysis of the predicted and experimental oxygen-oxygen (O-O), oxygen-hydrogen (O-H), and hydrogen-hydrogen (H-H) radial distribution functions as well as other structural properties. In addition, we will discuss the theoretically obtained proton momentum distribution, computed using the recently developed Feynman path formulation, in light of the experimental deep inelastic neutron scattering (DINS) measurements. DOE: DE-SC0008626, DOE: DE-SC0005180.

  17. Technical note: application of α-QSS to the numerical integration of kinetic equations in tropospheric chemistry

    NASA Astrophysics Data System (ADS)

    Liu, F.; Schaller, E.; Mott, D. R.

    2005-08-01

A major task in many applications of atmospheric chemistry transport problems is the numerical integration of stiff systems of Ordinary Differential Equations (ODEs) describing the chemical transformations. A faster solver that is easier to couple to the other physics in the problem is still needed. The integration method α-QSS, corresponding to the solver CHEMEQ2, aims at meeting the demands of a process-split, reacting-flow simulation (Mott 2000; Mott and Oran, 2001). However, this integrator has yet to be applied to the numerical integration of kinetic equations in tropospheric chemistry. A zero-dimensional (box) model is developed to test how well CHEMEQ2 works on the tropospheric chemistry equations. This paper presents the testing results. The reference chemical mechanisms used herein are the Regional Atmospheric Chemistry Mechanism (RACM) (Stockwell et al., 1997) and its secondary lumped successor, the Regional Lumped Atmospheric Chemical Scheme (ReLACS) (Crassier et al., 2000). The box model is forced and initialized by the DRY scenarios of Protocol Ver. 2 developed by EUROTRAC (Poppe et al., 2001). The accuracy of CHEMEQ2 is evaluated by comparing the results to solutions obtained with VODE. This comparison is made in terms of the error tolerance, the relative difference with respect to the VODE scheme, the trade-off between accuracy and efficiency, the global time step for integration, etc. The study based on the comparison concludes that the single-point α-QSS approach is fast and moderately accurate as well as easy to couple to reacting flow simulation models, which makes CHEMEQ2 one of the best candidates for three-dimensional atmospheric Chemistry Transport Modelling (CTM) studies. In addition, the RACM mechanism may be replaced by the ReLACS mechanism for tropospheric chemistry transport modelling. The testing results also imply that the accuracy of the numerical chemistry simulations differs considerably from species to species. Therefore ozone is not a good choice for
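
    For orientation, the α-QSS update underlying CHEMEQ2 writes each species equation as dy/dt = q − p·y (production minus first-order loss) and replaces the explicit Euler step with y_{n+1} = y_n + Δt·(q − p·y_n)/(1 + α·p·Δt), where α(χ) = 1/(1 − e^(−χ)) − 1/χ and χ = p·Δt, so the step is exact when q and p are constant. The following is a simplified predictor-corrector sketch on a hypothetical two-species interconversion, not the RACM or ReLACS mechanisms:

```python
import numpy as np

def alpha(chi):
    """alpha(chi) = 1/(1 - exp(-chi)) - 1/chi, with a series for small chi."""
    chi = np.asarray(chi, dtype=float)
    safe = np.where(chi < 1e-8, 1.0, chi)
    return np.where(chi < 1e-8, 0.5 + chi / 12.0,
                    1.0 / (1.0 - np.exp(-safe)) - 1.0 / safe)

def rates(y, k1=1.0e4, k2=1.0):
    """Toy mechanism A <-> B: production q and first-order loss coefficient p."""
    q = np.array([k2 * y[1], k1 * y[0]])
    p = np.array([k1, k2])
    return q, p

def alpha_qss_step(y, dt):
    q0, p0 = rates(y)
    a0 = alpha(p0 * dt)
    y_pred = y + dt * (q0 - p0 * y) / (1.0 + a0 * p0 * dt)              # predictor
    q1, p1 = rates(y_pred)
    p_bar = 0.5 * (p0 + p1)
    a_bar = alpha(p_bar * dt)
    q_tilde = a_bar * q1 + (1.0 - a_bar) * q0
    return y + dt * (q_tilde - p_bar * y) / (1.0 + a_bar * p_bar * dt)  # corrector

y = np.array([1.0, 0.0])
dt = 1.0e-3               # ten times the fast timescale 1/k1 = 1e-4
for _ in range(50):
    y = alpha_qss_step(y, dt)
print(y, "expected equilibrium:", [1.0 / (1.0 + 1.0e4), 1.0e4 / (1.0 + 1.0e4)])
```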

  18. An efficient exponential time integration method for the numerical solution of the shallow water equations on the sphere

    NASA Astrophysics Data System (ADS)

    Gaudreault, Stéphane; Pudykiewicz, Janusz A.

    2016-10-01

    The exponential propagation methods were applied in the past for accurate integration of the shallow water equations on the sphere. Despite obvious advantages related to the exact solution of the linear part of the system, their use for the solution of practical problems in geophysics has been limited because efficiency of the traditional algorithm for evaluating the exponential of Jacobian matrix is inadequate. In order to circumvent this limitation, we modify the existing scheme by using the Incomplete Orthogonalization Method instead of the Arnoldi iteration. We also propose a simple strategy to determine the initial size of the Krylov space using information from previous time instants. This strategy is ideally suited for the integration of fluid equations where the structure of the system Jacobian does not change rapidly between the subsequent time steps. A series of standard numerical tests performed with the shallow water model on a geodesic icosahedral grid shows that the new scheme achieves efficiency comparable to the semi-implicit methods. This fact, combined with the accuracy and the mass conservation of the exponential propagation scheme, makes the presented method a good candidate for solving many practical problems, including numerical weather prediction.
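
    The basic operation in such exponential propagation schemes is applying the exponential of the (scaled) Jacobian to a state vector. The sketch below is not the Incomplete Orthogonalization Method of the paper; it simply uses SciPy's expm_multiply, which evaluates exp(Δt·A)·u without forming the matrix exponential, on a stiff linear test problem (semi-discrete 1D diffusion) to show that a single exponential step is accurate regardless of the explicit stability limit:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

# Semi-discrete 1-D heat equation u_t = A u on (0, 1) with zero boundary values
n = 200
h = 1.0 / (n + 1)
A = diags([1.0, -2.0, 1.0], offsets=[-1, 0, 1], shape=(n, n)) / h**2

x = np.linspace(h, 1.0 - h, n)
u0 = np.sin(np.pi * x)             # discrete eigenvector; decays like exp(-pi^2 t)

dt = 0.05                          # far beyond the explicit-Euler limit h^2 / 2
u = expm_multiply(dt * A, u0)      # one exponential step of the linear problem
# Agreement with the continuum decay rate up to the O(h^2) discretization error
print(np.max(np.abs(u - np.exp(-np.pi**2 * dt) * u0)))
```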

  19. Adaptive Numerical Integration for Item Response Theory. Research Report. ETS RR-07-06

    ERIC Educational Resources Information Center

    Antal, Tamás; Oranje, Andreas

    2007-01-01

    Well-known numerical integration methods are applied to item response theory (IRT) with special emphasis on the estimation of the latent regression model of NAEP [National Assessment of Educational Progress]. An argument is made that the Gauss-Hermite rule enhanced with Cholesky decomposition and normal approximation of the response likelihood is…
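
    As a concrete illustration of the quadrature in question (independent of the NAEP latent regression machinery), Gauss-Hermite nodes and weights reduce an expectation over a normal latent trait to a short weighted sum. The item response function below is an assumed 2PL example:

```python
import numpy as np

def marginal_p_correct(a, b, n_nodes=21):
    """E[P(correct | theta)] for theta ~ N(0, 1) via Gauss-Hermite quadrature.

    The change of variables theta = sqrt(2) * x lets the physicists'
    Gauss-Hermite rule (weight exp(-x^2)) integrate against the standard normal.
    """
    x, w = np.polynomial.hermite.hermgauss(n_nodes)
    theta = np.sqrt(2.0) * x
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))     # 2PL item response function
    return np.sum(w * p) / np.sqrt(np.pi)

# Example item: discrimination a = 1.2, difficulty b = 0.5
print(marginal_p_correct(1.2, 0.5))
```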

  20. On the stability of numerical integration routines for ordinary differential equations.

    NASA Technical Reports Server (NTRS)

    Glover, K.; Willems, J. C.

    1973-01-01

    Numerical integration methods for the solution of initial value problems for ordinary vector differential equations may be modelled as discrete time feedback systems. The stability criteria discovered in modern control theory are applied to these systems and criteria involving the routine, the step size and the differential equation are derived. Linear multistep, Runge-Kutta, and predictor-corrector methods are all investigated.

  1. Some numerical methods for integrating systems of first-order ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Clark, N. W.

    1969-01-01

Report on numerical methods of integration includes the extrapolation methods of Bulirsch-Stoer and Neville. A comparison is made with the Runge-Kutta and Adams-Moulton methods, and circumstances are discussed under which the extrapolation method may be preferred.

  2. Abstract Applets: A Method for Integrating Numerical Problem Solving into the Undergraduate Physics Curriculum

    SciTech Connect

    Peskin, Michael E

    2003-02-13

    In upper-division undergraduate physics courses, it is desirable to give numerical problem-solving exercises integrated naturally into weekly problem sets. I explain a method for doing this that makes use of the built-in class structure of the Java programming language. I also supply a Java class library that can assist instructors in writing programs of this type.

  3. A novel stress-accurate FE technology for highly non-linear analysis with incompressibility constraint. Application to the numerical simulation of the FSW process

    NASA Astrophysics Data System (ADS)

    Chiumenti, M.; Cervera, M.; Agelet de Saracibar, C.; Dialami, N.

    2013-05-01

In this work a novel finite element technology based on a three-field mixed formulation is presented. The Variational Multi Scale (VMS) method is used to circumvent the LBB stability condition allowing the use of linear piece-wise interpolations for displacement, stress and pressure fields, respectively. The result is an enhanced stress field approximation which enables stress-accurate results in nonlinear computational mechanics. The use of an independent nodal variable for the pressure field allows for an ad hoc treatment of the incompressibility constraint. This is a mandatory requirement due to the isochoric nature of the plastic strain in metal forming processes. The highly non-linear stress field typically encountered in the Friction Stir Welding (FSW) process is used as an example to show the performance of this new FE technology. The numerical simulation of the FSW process is tackled by means of an Arbitrary-Lagrangian-Eulerian (ALE) formulation. The computational domain is split into three different zones: the workpiece (defined by a rigid visco-plastic behaviour in the Eulerian framework), the pin (within the Lagrangian framework) and finally the stir zone (ALE formulation). A fully coupled thermo-mechanical analysis is introduced showing the heat fluxes generated by the plastic dissipation in the stir zone (Sheppard rigid-viscoplastic constitutive model) as well as the frictional dissipation at the contact interface (Norton frictional contact model). Finally, tracers have been implemented to show the material flow around the pin allowing a better understanding of the welding mechanism. Numerical results are compared with experimental evidence.

  4. The Fourier transform method and the SD-bar approach for the analytical and numerical treatment of multicenter overlap-like quantum similarity integrals

    SciTech Connect

    Safouhi, Hassan . E-mail: hassan.safouhi@ualberta.ca; Berlu, Lilian

    2006-07-20

Molecular overlap-like quantum similarity measurements imply the evaluation of overlap integrals of two molecular electronic densities related by the Dirac delta function. When the electronic densities are expanded over atomic orbitals using the usual LCAO-MO approach (linear combination of atomic orbitals), overlap-like quantum similarity integrals can be expressed in terms of four-center overlap integrals. It is shown that by introducing the Fourier transform of the Dirac delta function in the integrals and using the Fourier transform approach combined with the so-called B functions, one can obtain analytic expressions of the integrals under consideration. These analytic expressions involve highly oscillatory semi-infinite spherical Bessel functions, which are the principal source of severe numerical and computational difficulties. In this work, we present a highly efficient algorithm for a fast and accurate numerical evaluation of these multicenter overlap-like quantum similarity integrals over Slater type functions. This algorithm is based on the SD-bar approach due to Safouhi. Recurrence formulae are used for a better control of the degree of accuracy and for a better stability of the algorithm. The numerical result section shows the efficiency of our algorithm, compared with the alternatives using the one-center two-range expansion method, which led to very complicated analytic expressions, the epsilon algorithm and the nonlinear D-bar transformation.

  5. Conservation properties of numerical integration methods for systems of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Rosenbaum, J. S.

    1976-01-01

    If a system of ordinary differential equations represents a property conserving system that can be expressed linearly (e.g., conservation of mass), it is then desirable that the numerical integration method used conserve the same quantity. It is shown that both linear multistep methods and Runge-Kutta methods are 'conservative' and that Newton-type methods used to solve the implicit equations preserve the inherent conservation of the numerical method. It is further shown that a method used by several authors is not conservative.
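
    The conservation property described here is easy to check numerically: any Runge-Kutta method preserves a linear invariant such as total mass exactly (up to roundoff), because every stage update is a linear combination of right-hand-side evaluations. A small sketch with an assumed two-species mass-conserving system:

```python
import numpy as np

def rhs(y, k=2.0):
    """A -> B with rate k; the total y[0] + y[1] is a linear invariant."""
    return np.array([-k * y[0], k * y[0]])

def rk4_step(y, dt):
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2)
    k4 = rhs(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

y = np.array([1.0, 0.0])
for _ in range(1000):
    y = rk4_step(y, 0.01)
# The drift of the total mass stays at roundoff level
print(y, abs(y.sum() - 1.0))
```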

  6. Numerical implementation of the mixed potential integral equation for planar structures with ferrite layers arbitrarily magnetized

    NASA Astrophysics Data System (ADS)

    Mesa, F.; Medina, F.

    2006-12-01

    This work presents a new implementation of the mixed potential integral equation (MPIE) for planar structures that can include ferrite layers arbitrarily magnetized. The implementation of the MPIE here reported is carried out in the space domain. Thus it will combine the well-known numerical advantages of working with potentials as well as the flexibility for analyzing nonrectangular shape conductors with the additional ability of including anisotropic layers of arbitrarily magnetized ferrites. In this way, our approach widens the scope of the space domain MPIE and sets this method as a very efficient and versatile numerical tool to deal with a wide class of planar microwave circuits and antennas.

  7. Melt-rock reaction in the asthenospheric mantle: Perspectives from high-order accurate numerical simulations in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Tirupathi, S.; Schiemenz, A. R.; Liang, Y.; Parmentier, E.; Hesthaven, J.

    2013-12-01

The style and mode of melt migration in the mantle are important to the interpretation of basalts erupted on the surface. Both grain-scale diffuse porous flow and channelized melt migration have been proposed. To better understand the mechanisms and consequences of melt migration in a heterogeneous mantle, we have undertaken a numerical study of reactive dissolution in an upwelling and viscously deformable mantle where solubility of pyroxene increases upwards. Our setup is similar to that described in [1], except we use a larger domain size in 2D and 3D and a new numerical method. To enable efficient simulations in 3D through parallel computing, we developed a high-order accurate numerical method for the magma dynamics problem using discontinuous Galerkin methods and constructed the problem using the numerical library deal.II [2]. Linear stability analyses of the reactive dissolution problem reveal three dynamically distinct regimes [3], and the simulations reported in this study were run in the stable regime and the unstable wave regime where small perturbations in porosity grow periodically. The wave regime is more relevant to melt migration beneath the mid-ocean ridges but computationally more challenging. Extending the 2D simulations in the stable regime in [1] to 3D using various combinations of sustained perturbations in porosity at the base of the upwelling column (which may result from a veined mantle), we show that the geometry and distribution of dunite channels and high-porosity melt channels are highly correlated with inflow perturbation through superposition. Strong nonlinear interactions among compaction, dissolution, and upwelling give rise to porosity waves and high-porosity melt channels in the wave regime. These compaction-dissolution waves have well organized but time-dependent structures in the lower part of the simulation domain. High-porosity melt channels nucleate along nodal lines of the porosity waves, growing downwards. The wavelength scales

  8. A comparison of the efficiency of numerical methods for integrating chemical kinetic rate equations

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    1984-01-01

    A comparison of the efficiency of several algorithms recently developed for the efficient numerical integration of stiff ordinary differential equations is presented. The methods examined include two general-purpose codes EPISODE and LSODE and three codes (CHEMEQ, CREK1D, and GCKP84) developed specifically to integrate chemical kinetic rate equations. The codes are applied to two test problems drawn from combustion kinetics. The comparisons show that LSODE is the fastest code currently available for the integration of combustion kinetic rate equations. An important finding is that an iterative solution of the algebraic energy conservation equation to compute the temperature can be more efficient than evaluating the temperature by integrating its time-derivative.

  9. Comparison of symbolic and numerical integration methods for an assumed-stress hybrid shell element

    NASA Technical Reports Server (NTRS)

    Rengarajan, Govind; Knight, Norman F., Jr.; Aminpour, Mohammad A.

    1993-01-01

Hybrid shell elements have long been regarded with reserve by the commercial finite element developers despite the high degree of reliability and accuracy associated with such formulations. The fundamental reason is the inherent higher computational cost of the hybrid approach as compared to the displacement-based formulations. However, a noteworthy factor in favor of hybrid elements is that numerical integration to generate element matrices can be entirely avoided by the use of symbolic integration. In this paper, the use of the symbolic computational approach is presented for an assumed-stress hybrid shell element with drilling degrees of freedom, and the significant time savings achieved are demonstrated through an example.

  10. Numerical simulation and experimental research of the integrated high-power LED radiator

    NASA Astrophysics Data System (ADS)

    Xiang, J. H.; Zhang, C. L.; Gan, Z. J.; Zhou, C.; Chen, C. G.; Chen, S.

    2017-01-01

The thermal management has become an urgent problem to be solved with the increasing power and the improving integration of the LED (light emitting diode) chip. In order to eliminate the contact resistance of the radiator, this paper presents an integrated high-power LED radiator based on phase-change heat transfer, which realizes a seamless connection between the vapor chamber and the cooling fins. The radiator was optimized by combining numerical simulation and experimental research. The effects of the chamber diameter and the fin parameters on the heat dissipation performance were analyzed. The numerical simulation results were compared with the values measured by experiment. The results showed that the fin thickness, the fin number, the fin height and the chamber diameter were the factors affecting the performance of the radiator, ranked from primary to secondary.

  11. Numerical evaluation of the Rayleigh integral for planar radiators using the FFT

    NASA Technical Reports Server (NTRS)

    Williams, E. G.; Maynard, J. D.

    1982-01-01

Rayleigh's integral formula is evaluated numerically for planar radiators of any shape, with any specified velocity in the source plane using the fast Fourier transform algorithm. The major advantage of this technique is its speed of computation - over 400 times faster than a straightforward two-dimensional numerical integration. The technique is developed for computation of the radiated pressure in the nearfield of the source and can be easily extended to provide, with little computation time, the vector intensity in the nearfield. Computations with the FFT of the nearfield pressure of baffled rectangular plates with clamped and free boundaries are compared with the 'exact' solution to illuminate any errors. The bias errors, introduced by the FFT, are investigated and a technique is developed to significantly reduce them.

  12. Numerical evaluation of two-center integrals over Slater type orbitals

    NASA Astrophysics Data System (ADS)

    Kurt, S. A.; Yükçü, N.

    2016-03-01

Slater-type orbitals (STOs), which are one of the types of exponential-type orbitals (ETOs), are usually used as basis functions in multicenter molecular integrals to better understand the physical and chemical properties of matter. In this work, we develop algorithms for two-center overlap and two-center two-electron hybrid and Coulomb integrals, which are calculated with the help of the translation method for STOs and some auxiliary functions given by V. Magnasco's group. We use the Mathematica programming language to produce algorithms for these calculations. Numerical results for some quantum numbers are presented in the tables. Finally, we compare our numerical results with other known literature results, and other details of the evaluation method are discussed.

  13. Numerical methods for estimating J integral in models with regular rectangular meshes

    NASA Astrophysics Data System (ADS)

    Kozłowiec, B.

    2017-02-01

Cracks and delaminations are common structural degradation mechanisms that have recently been studied using numerous methods and techniques. Among them, numerical methods based on FEM analyses are in widespread commercial use. These methods have focused, among other things, on the energetic approach to linear elastic fracture mechanics (LEFM) theory, encompassing such quantities as the J-integral and the energy release rate G. This approach makes it possible to introduce damage criteria for the analyzed structures without dealing with the details of the physical singularities occurring at the crack tip. In this paper, two numerical methods based on LEFM are used to analyze both isotropic and orthotropic specimens, and the results are compared with well-known analytical solutions as well as (in some cases) VCCT results. These methods are optimized for industrial use with simple, rectangular meshes. The verification is based on two-dimensional mode partitioning.

  14. Some remarks on the numerical computation of integrals on an unbounded interval

    NASA Astrophysics Data System (ADS)

    Capobianco, M.; Criscuolo, G.

    2007-08-01

An account of the error and the convergence theory is given for Gauss-Laguerre and Gauss-Radau-Laguerre quadrature formulae. We also develop truncated models of the original Gauss rules to compute integrals extended over the positive real axis. Numerical examples confirming the theoretical results are given comparing these rules among themselves and with different quadrature formulae proposed by other authors (Evans, Int. J. Comput. Math. 82:721-730, 2005; Gautschi, BIT 31:438-446, 1991).
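
    For reference, the plain (untruncated) rule analyzed here looks as follows in practice: an n-point Gauss-Laguerre formula integrates exp(−x)·f(x) over the positive real axis exactly for polynomials f up to degree 2n − 1 and converges rapidly for smooth f. A minimal check against a known closed form:

```python
import numpy as np

def gauss_laguerre(f, n):
    """Approximate the integral of exp(-x) * f(x) over [0, inf)."""
    x, w = np.polynomial.laguerre.laggauss(n)
    return np.sum(w * f(x))

# Known value: the integral of exp(-x) * sin(x) over [0, inf) equals 1/2
for n in (5, 10, 20):
    approx = gauss_laguerre(np.sin, n)
    print(n, approx, abs(approx - 0.5))
```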

  15. Numerical implementation of the integral-transform solution to Lamb's point-load problem

    NASA Astrophysics Data System (ADS)

    Georgiadis, H. G.; Vamvatsikos, D.; Vardoulakis, I.

    The present work describes a procedure for the numerical evaluation of the classical integral-transform solution of the transient elastodynamic point-load (axisymmetric) Lamb's problem. This solution involves integrals of rapidly oscillatory functions over semi-infinite intervals and inversion of one-sided (time) Laplace transforms. These features introduce difficulties for a numerical treatment and constitute a challenging problem in trying to obtain results for quantities (e.g. displacements) in the interior of the half-space. To deal with the oscillatory integrands, which in addition may take very large values (pseudo-pole behavior) at certain points, we follow the concept of Longman's method but using as accelerator in the summation procedure a modified Epsilon algorithm instead of the standard Euler's transformation. Also, an adaptive procedure using the Gauss 32-point rule is introduced to integrate in the vicinity of the pseudo-pole. The numerical Laplace-transform inversion is based on the robust Fourier-series technique of Dubner/Abate-Crump-Durbin. Extensive results are given for sub-surface displacements, whereas the limit-case results for the surface displacements compare very favorably with previous exact results.

  16. DE 102 - A numerically integrated ephemeris of the moon and planets spanning forty-four centuries

    NASA Astrophysics Data System (ADS)

    Newhall, X. X.; Standish, E. M.; Williams, J. G.

    1983-08-01

    It is pointed out that the 1960's were the turning point for the generation of lunar and planetary ephemerides. All previous measurements of the positions of solar system bodies were optical angular measurements. New technological improvements leading to immense changes in observational accuracy are related to developments concerning radar, Viking landers on Mars, and laser ranges to lunar corner cube retroreflectors. Suitable numerical integration techniques and more comprehensive physical models were developed to match the accuracy of the modern data types. The present investigation is concerned with the first integrated ephemeris, DE 102, which covers the entire span of the historical astronomical observations of usable accuracy which are known. The fit is made to modern data. The integration spans the time period from 1411 BC to 3002 AD.

  17. Physical and numerical sources of computational inefficiency in integration of chemical kinetic rate equations: Etiology, treatment and prognosis

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.; Radhakrishnan, K.

    1986-01-01

    The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODE's) and its detection, choice of appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponential-fitted integration algorithms are inherently more accurate for kinetics modeling than classical, polynomial-interpolant methods for the same computational work. But current codes using the exponential-fitted method lack the sophisticated stepsize-control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.

  18. Grid cell distortion and MODFLOW's integrated finite-difference numerical solution.

    PubMed

    Romero, Dave M; Silver, Steven E

    2006-01-01

    The ground water flow model MODFLOW inherently implements a nongeneralized integrated finite-difference (IFD) numerical scheme. The IFD numerical scheme allows for construction of finite-difference model grids with curvilinear (piecewise linear) rows. The resulting grid comprises model cells in the shape of trapezoids and is distorted in comparison to a traditional MODFLOW finite-difference grid. A version of MODFLOW-88 (herein referred to as MODFLOW IFD) with the code adapted to make the one-dimensional DELR and DELC arrays two dimensional, so that equivalent conductance between distorted grid cells can be calculated, is described. MODFLOW IFD is used to inspect the sensitivity of the numerical head and velocity solutions to the level of distortion in trapezoidal grid cells within a converging radial flow domain. A test problem designed for the analysis implements a grid oriented such that flow is parallel to columns with converging widths. The sensitivity analysis demonstrates MODFLOW IFD's capacity to numerically derive a head solution and resulting intercell volumetric flow when the internal calculation of equivalent conductance accounts for the distortion of the grid cells. The sensitivity of the velocity solution to grid cell distortion indicates criteria for distorted grid design. In the radial flow test problem described, the numerical head solution is not sensitive to grid cell distortion. The accuracy of the velocity solution is sensitive to cell distortion with error <1% if the angle between the nonparallel sides of trapezoidal cells is <12.5 degrees. The error of the velocity solution is related to the degree to which the spatial discretization of a curve is approximated with piecewise linear segments. Curvilinear finite-difference grid construction adds versatility to spatial discretization of the flow domain. MODFLOW-88's inherent IFD numerical scheme and the test problem results imply that more recent versions of MODFLOW 2000, with minor

  19. Numerical solution of random singular integral equation appearing in crack problems

    NASA Technical Reports Server (NTRS)

    Sambandham, M.; Srivatsan, T. S.; Bharucha-Reid, A. T.

    1986-01-01

    The solution of several elasticity problems, and particularly crack problems, can be reduced to the solution of one-dimensional singular integral equations with a Cauchy-type kernel or to a system of uncoupled singular integral equations. Here a method for the numerical solution of random singular integral equations of Cauchy type is presented. The solution technique involves a Chebyshev series approximation, the coefficients of which are the solutions of a system of random linear equations. This method is applied to the problem of periodic array of straight cracks inside an infinite isotropic elastic medium and subjected to a nonuniform pressure distribution along the crack edges. The statistical properties of the random solution are evaluated numerically, and the random solution is used to determine the values of the stress-intensity factors at the crack tips. The error, expressed as the difference between the mean of the random solution and the deterministic solution, is established. Values of stress-intensity factors at the crack tip for different random input functions are presented.

  20. Comparing numerical integration schemes for time-continuous car-following models

    NASA Astrophysics Data System (ADS)

    Treiber, Martin; Kanagaraj, Venkatesan

    2015-02-01

    When simulating trajectories by integrating time-continuous car-following models, standard integration schemes such as the fourth-order Runge-Kutta method (RK4) are rarely used while the simple Euler method is popular among researchers. We compare four explicit methods both analytically and numerically: Euler's method, ballistic update, Heun's method (trapezoidal rule), and the standard RK4. As performance metrics, we plot the global discretization error as a function of the numerical complexity. We tested the methods on several time-continuous car-following models in several multi-vehicle simulation scenarios with and without discontinuities such as stops or a discontinuous behavior of an external leader. We find that the theoretical advantage of RK4 (consistency order 4) only plays a role if both the acceleration function of the model and the trajectory of the leader are sufficiently often differentiable. Otherwise, we obtain lower (and often fractional) consistency orders. Although, to our knowledge, Heun's method has never been used for integrating car-following models, it turns out to be the best scheme for many practical situations. The ballistic update always prevails over Euler's method although both are of first order.
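
    To make the comparison concrete, the sketch below integrates the Intelligent Driver Model (used here as an assumed example of a time-continuous car-following model, with illustrative parameter values) for a single vehicle approaching a standing obstacle, and compares the Euler and ballistic updates against a fine-step reference trajectory:

```python
import numpy as np

# Intelligent Driver Model (IDM) parameters (illustrative values)
V0, T, S0, A, B = 30.0, 1.5, 2.0, 1.0, 1.5   # desired speed, time headway, min gap, accel, decel
X_LEADER = 600.0                              # position of a standing leader [m]

def idm_accel(x, v):
    s = max(X_LEADER - x, 0.01)               # bumper-to-bumper gap
    s_star = S0 + v * T + v * v / (2.0 * np.sqrt(A * B))
    return A * (1.0 - (v / V0) ** 4 - (s_star / s) ** 2)

def simulate(dt, t_end, scheme):
    x, v = 0.0, 20.0
    for _ in range(int(round(t_end / dt))):
        a = idm_accel(x, v)
        if scheme == "euler":
            x += v * dt
        else:                                  # ballistic update
            x += v * dt + 0.5 * a * dt * dt
        v = max(v + a * dt, 0.0)               # vehicles do not drive backwards
    return x

reference = simulate(1e-4, 60.0, "ballistic")  # fine-step reference position
for dt in (0.5, 0.25, 0.1):
    for scheme in ("euler", "ballistic"):
        err = abs(simulate(dt, 60.0, scheme) - reference)
        print(f"dt = {dt:4.2f}  {scheme:9s}  final-position error = {err:.4f} m")
```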

  1. Influence of gait loads on implant integration in rat tibiae: experimental and numerical analysis.

    PubMed

    Piccinini, Marco; Cugnoni, Joel; Botsis, John; Ammann, Patrick; Wiskott, Anselm

    2014-10-17

Implanted rat bones play a key role in studies involving fracture healing, bone diseases, or drug delivery, among other themes. In most of these studies the implants' integration also depends on the animals' daily activity and musculoskeletal loads, which affect the implants' mechanical environment. However, the tissue adaptation to the physiological loads is often filtered through control groups or not inspected. This work aims to investigate experimentally and numerically the effects of the daily activity on the integration of implants inserted in the rat tibia, and to establish a physiological loading condition to analyse the peri-implant bone stresses during gait. Two titanium implants, single and double cortex crossing, are inserted in the rat tibia. The animals are caged under standard conditions and divided into three groups undergoing progressive integration periods. The results highlight a time-dependent increase in bone samples with significant cortical bone loss. The phenomenon is analysed through specimen-specific Finite Element models involving purpose-built musculoskeletal loads. Different boundary conditions replicating the post-surgery bone-implant interaction are adopted. The effects of the gait loads on the implants' integration are quantified and agree with the results of the experiments. The observed cortical bone loss can be considered as a transient state of integration due to bone disuse atrophy, initially triggered by a loss of bone-implant adhesion and subsequently by a cyclic opening of the interface.

  2. Numerical integration of the restricted three-body problem with Lie series

    NASA Astrophysics Data System (ADS)

    Abouelmagd, Elbaz I.; Guirao, Juan L. G.; Mostafa, A.

    2014-12-01

The aim of this work is to present some recurrence formulas for the equations of motion of an infinitesimal body in the planar restricted three-body problem which allow us to numerically integrate this problem via a Lie series approach. To do this, the equations of motion are transformed to an origin at one of the libration points, and the Lie operator and recurrence formulas for the terms of the Lie series are constructed. In addition, we provide an algorithm that allows us to find any number of Lie series terms and which gives successful calculations for the orbit of the infinitesimal body around one of the libration points. Furthermore, all our mathematical relations are performed under the effect of the zonal harmonic parameters of the bigger primary up to J4. Finally, a numerical application of these results is given to the case of the Earth-Moon system.

  3. IAS15: a fast, adaptive, high-order integrator for gravitational dynamics, accurate to machine precision over a billion orbits

    NASA Astrophysics Data System (ADS)

    Rein, Hanno; Spiegel, David S.

    2015-01-01

    We present IAS15, a 15th-order integrator to simulate gravitational dynamics. The integrator is based on a Gauß-Radau quadrature and can handle conservative as well as non-conservative forces. We develop a step-size control that can automatically choose an optimal timestep. The algorithm can handle close encounters and high-eccentricity orbits. The systematic errors are kept well below machine precision, and long-term orbit integrations over 109 orbits show that IAS15 is optimal in the sense that it follows Brouwer's law, i.e. the energy error behaves like a random walk. Our tests show that IAS15 is superior to a mixed-variable symplectic integrator and other popular integrators, including high-order ones, in both speed and accuracy. In fact, IAS15 preserves the symplecticity of Hamiltonian systems better than the commonly used nominally symplectic integrators to which we compared it. We provide an open-source implementation of IAS15. The package comes with several easy-to-extend examples involving resonant planetary systems, Kozai-Lidov cycles, close encounters, radiation pressure, quadrupole moment and generic damping functions that can, among other things, be used to simulate planet-disc interactions. Other non-conservative forces can be added easily.
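
    The open-source implementation referred to is distributed with the REBOUND package. The usage sketch below assumes REBOUND's Python interface (pip install rebound); attribute names such as energy() may differ slightly between package versions:

```python
import rebound

sim = rebound.Simulation()          # G = 1, code units
sim.integrator = "ias15"            # 15th-order adaptive Gauss-Radau integrator
sim.add(m=1.0)                      # central star
sim.add(m=1e-3, a=1.0, e=0.1)       # Jupiter-mass planet
sim.add(m=3e-6, a=1.6, e=0.05)      # Earth-mass planet
sim.move_to_com()

e_initial = sim.energy()            # sim.calculate_energy() in older versions
sim.integrate(1.0e4)                # roughly 1600 inner-planet orbits
e_final = sim.energy()
print("relative energy error:", abs((e_final - e_initial) / e_initial))
```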

  4. Direct numerical solution of the transonic perturbation integral equation for lifting and nonlifting airfoils

    NASA Technical Reports Server (NTRS)

    Nixon, D.

    1978-01-01

    The linear transonic perturbation integral equation previously derived for nonlifting airfoils is formulated for lifting cases. In order to treat shock wave motions, a strained coordinate system is used in which the shock location is invariant. The tangency boundary conditions are either formulated using the thin airfoil approximation or by using the analytic continuation concept. A direct numerical solution to this equation is derived in contrast to the iterative scheme initially used, and results of both lifting and nonlifting examples indicate that the method is satisfactory.

  5. Extremely Fast Numerical Integration of Ocean Surface Wave Dynamics: Building Blocks for a Higher Order Method

    DTIC Science & Technology

    2006-09-30

Extremely fast numerical integration of the partial differential equations of surface water waves is the long-term goal of this work. APPROACH: We first consider the shallow water equation known as the Korteweg-de Vries (KdV) equation, η_t + c_0 η_x + α η η_x + β η_xxx = 0 (1), where c_0 = √(gh) and α = 3c_0/(2h). The KdV equation has the generalized Fourier solution (for periodic and/or quasi-periodic …), which forms the starting point for the applications of the method.
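
    The approach above builds on the generalized Fourier (inverse scattering) solution of the KdV equation; for context, a conventional alternative is a Fourier pseudospectral discretization in space with an adaptive ODE solver in time. In the sketch below the dispersion coefficient β = c0·h²/6 and all other values are assumptions for illustration only:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Periodic domain and KdV coefficients (illustrative values)
L_dom, N = 2.0 * np.pi, 64
x = np.linspace(0.0, L_dom, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L_dom / N)   # spectral wavenumbers
g, h = 9.81, 1.0
c0 = np.sqrt(g * h)
alpha = 3.0 * c0 / (2.0 * h)
beta = c0 * h**2 / 6.0            # assumed dispersion coefficient (not stated in the record)

def kdv_rhs(t, eta):
    eta_hat = np.fft.fft(eta)
    eta_x = np.real(np.fft.ifft(1j * k * eta_hat))
    eta_xxx = np.real(np.fft.ifft((1j * k) ** 3 * eta_hat))
    return -c0 * eta_x - alpha * eta * eta_x - beta * eta_xxx

eta0 = 0.05 * np.cos(x)                            # small-amplitude initial wave
sol = solve_ivp(kdv_rhs, (0.0, 1.0), eta0, method="RK45", rtol=1e-8, atol=1e-10)
print(sol.y[:, -1].max(), sol.y[:, -1].min())
```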

  6. Spiking neural network simulation: numerical integration with the Parker-Sochacki method.

    PubMed

    Stewart, Robert D; Bair, Wyeth

    2009-08-01

    Mathematical neuronal models are normally expressed using differential equations. The Parker-Sochacki method is a new technique for the numerical integration of differential equations applicable to many neuronal models. Using this method, the solution order can be adapted according to the local conditions at each time step, enabling adaptive error control without changing the integration timestep. The method has been limited to polynomial equations, but we present division and power operations that expand its scope. We apply the Parker-Sochacki method to the Izhikevich 'simple' model and a Hodgkin-Huxley type neuron, comparing the results with those obtained using the Runge-Kutta and Bulirsch-Stoer methods. Benchmark simulations demonstrate an improved speed/accuracy trade-off for the method relative to these established techniques.
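
    The essence of the Parker-Sochacki method is that, for a polynomial right-hand side, the Maclaurin coefficients of the solution can be generated order by order through Cauchy products, and the order can be raised adaptively until the newest term falls below tolerance. A minimal sketch for the scalar polynomial ODE y' = y² (exact solution y = y0/(1 − y0·t)), which is a toy example rather than one of the neuronal models above:

```python
def ps_step(y0, dt, tol=1e-14, max_order=40):
    """One Parker-Sochacki step for y' = y**2.

    Maclaurin coefficients a[n] of y about the current time satisfy
    (n + 1) * a[n + 1] = sum_{j=0..n} a[j] * a[n - j]   (a Cauchy product),
    and the order is raised adaptively until the newest term is below tol.
    """
    a = [y0]
    for n in range(max_order):
        cauchy = sum(a[j] * a[n - j] for j in range(n + 1))
        a.append(cauchy / (n + 1))
        if abs(a[-1]) * dt ** (n + 1) < tol:
            break
    # Evaluate the truncated series at t = dt with Horner's rule
    y = 0.0
    for coeff in reversed(a):
        y = y * dt + coeff
    return y

y, dt = 0.5, 0.05
for _ in range(20):                 # integrate from t = 0 to t = 1
    y = ps_step(y, dt)
print(y, "exact:", 0.5 / (1.0 - 0.5 * 1.0))
```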

  7. Numerical Modeling of Pressurization of Cryogenic Propellant Tank for Integrated Vehicle Fluid System

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok K.; LeClair, Andre C.; Hedayat, Ali

    2016-01-01

    This paper presents a numerical model of pressurization of a cryogenic propellant tank for the Integrated Vehicle Fluid (IVF) system using the Generalized Fluid System Simulation Program (GFSSP). The IVF propulsion system, being developed by United Launch Alliance, uses boiloff propellants to drive thrusters for the reaction control system as well as to run internal combustion engines to develop power and drive compressors to pressurize propellant tanks. NASA Marshall Space Flight Center (MSFC) has been running tests to verify the functioning of the IVF system using a flight tank. GFSSP, a finite volume based flow network analysis software developed at MSFC, has been used to develop an integrated model of the tank and the pressurization system. This paper presents an iterative algorithm for converging the interface boundary conditions between different component models of a large system model. The model results have been compared with test data.

  8. The strategy for numerical solving of PIES without explicit calculation of singular integrals in 2D potential problems

    NASA Astrophysics Data System (ADS)

    Szerszeń, Krzysztof; Zieniuk, Eugeniusz

    2016-06-01

The paper presents a strategy for the numerical solving of the parametric integral equation system (PIES) for 2D potential problems without explicit calculation of singular integrals. The values of these integrals are expressed indirectly in terms of easy-to-compute non-singular integrals. The effectiveness of the proposed strategy is investigated with the example of a potential problem modeled by the Laplace equation. The strategy simplifies the structure of the program while maintaining good accuracy of the obtained solutions.

  9. Comparison of numerical techniques for integration of stiff ordinary differential equations arising in combustion chemistry

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    1984-01-01

The efficiency and accuracy of several algorithms recently developed for the efficient numerical integration of stiff ordinary differential equations are compared. The methods examined include two general-purpose codes, EPISODE and LSODE, and three codes (CHEMEQ, CREK1D, and GCKP84) developed specifically to integrate chemical kinetic rate equations. The codes are applied to two test problems drawn from combustion kinetics. The comparisons show that LSODE is the fastest code currently available for the integration of combustion kinetic rate equations. An important finding is that an iterative solution of the algebraic energy conservation equation to compute the temperature does not result in significant errors. In addition, this method is more efficient than evaluating the temperature by integrating its time derivative. Significant reductions in computational work are realized by updating the rate constants (k = A T^N exp(-E/RT)) only when the temperature change exceeds an amount delta T that is problem dependent. An approximate expression for the automatic evaluation of delta T is derived and is shown to result in increased efficiency.

  10. Numerical simulation of a lattice polymer model at its integrable point

    NASA Astrophysics Data System (ADS)

    Bedini, A.; Owczarek, A. L.; Prellberg, T.

    2013-07-01

    We revisit an integrable lattice model of polymer collapse using numerical simulations. This model was first studied by Blöte and Nienhuis (1989 J. Phys. A: Math. Gen. 22 1415) and it describes polymers with some attraction, providing thus a model for the polymer collapse transition. At a particular set of Boltzmann weights the model is integrable and the exponents ν = 12/23 ≈ 0.522 and γ = 53/46 ≈ 1.152 have been computed via identification of the scaling dimensions xt = 1/12 and xh = -5/48. We directly investigate the polymer scaling exponents via Monte Carlo simulations using the pruned-enriched Rosenbluth method algorithm. By simulating this polymer model for walks up to length 4096 we find ν = 0.576(6) and γ = 1.045(5), which are clearly different from the predicted values. Our estimate for the exponent ν is compatible with the known θ-point value of 4/7 and in agreement with very recent numerical evaluation by Foster and Pinettes (2012 J. Phys. A: Math. Theor. 45 505003).

  11. Numerical simulation of Stokes flow around particles via a hybrid Finite Difference-Boundary Integral scheme

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Amitabh

    2013-11-01

    An efficient algorithm for simulating Stokes flow around particles is presented here, in which a second order Finite Difference method (FDM) is coupled to a Boundary Integral method (BIM). This method utilizes the strong points of FDM (i.e. localized stencil) and BIM (i.e. accurate representation of the particle surface). Specifically, in each iteration, the flow field away from the particles is solved on a Cartesian FDM grid, while the traction on the particle surface (given the velocity of the particle) is solved using BIM. The two schemes are coupled by matching the solution in an intermediate region between the particle and the surrounding fluid. We validate this method by solving for flow around an array of cylinders, and find good agreement with Hasimoto's (J. Fluid Mech. 1959) analytical results.

  12. Evaluation of 3 numerical methods for propulsion integration studies on transonic transport configurations

    NASA Technical Reports Server (NTRS)

    Yaros, S. F.; Carlson, J. R.; Chandrasekaran, B.

    1986-01-01

    An effort has been undertaken at the NASA Langley Research Center to assess the capabilities of available computational methods for use in propulsion integration design studies of transonic transport aircraft, particularly of pylon/nacelle combinations which exhibit essentially no interference drag. The three computer codes selected represent state-of-the-art computational methods for analyzing complex configurations at subsonic and transonic flight conditions. These are: EULER, a finite volume solution of the Euler equation; VSAERO, a panel solution of the Laplace equation; and PPW, a finite difference solution of the small disturbance transonic equations. In general, all three codes have certain capabilities that allow them to be of some value in predicting the flows about transport configurations, but all have limitations. Until more accurate methods are available, careful application and interpretation of the results of these codes are needed.

  13. Evaluation of three numerical methods for propulsion integration studies on transonic transport configurations

    NASA Technical Reports Server (NTRS)

    Yaros, Steven F.; Carlson, John R.; Chandrasekaran, Balasubramanyan

    1986-01-01

    An effort has been undertaken at the NASA Langley Research Center to assess the capabilities of available computational methods for use in propulsion integration design studies of transonic transport aircraft, particularly of pylon/nacelle combinations which exhibit essentially no interference drag. The three computer codes selected represent state-of-the-art computational methods for analyzing complex configurations at subsonic and transonic flight conditions. These are: EULER, a finite volume solution of the Euler equation; VSAERO, a panel solution of the Laplace equation; and PPW, a finite difference solution of the small disturbance transonic equations. In general, all three codes have certain capabilities that allow them to be of some value in predicting the flows about transport configurations, but all have limitations. Until more accurate methods are available, careful application and interpretation of the results of these codes are needed.

  14. Algebraic Stabilization of Explicit Numerical Integration for Extremely Stiff Reaction Networks

    SciTech Connect

    Guidry, Mike W

    2012-01-01

    In contrast to the prevailing view in the literature, it is shown that even extremely stiff sets of ordinary differential equations may be solved efficiently by explicit methods if limiting algebraic solutions are used to stabilize the numerical integration. The stabilizing algebra differs essentially for systems well removed from equilibrium and those near equilibrium. Explicit asymptotic and quasi-steady-state methods that are appropriate when the system is only weakly equilibrated are examined first. These methods are then extended to the case of close approach to equilibrium through a new implementation of partial equilibrium approximations. Using stringent tests with astrophysical thermonuclear networks, evidence is provided that these methods can deal with the stiffest networks, even in the approach to equilibrium, with accuracy and integration timestepping comparable to that of implicit methods. Because explicit methods can execute a timestep faster and scale more favorably with network size than implicit algorithms, our results suggest that algebraically stabilized explicit methods might enable integration of larger reaction networks coupled to fluid dynamics than has been feasible previously for a variety of disciplines.
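
    As a rough illustration of the algebraic stabilization idea (not the authors' implementation), the sketch below switches from an explicit Euler step to the standard asymptotic approximation for a single stiff rate equation dy/dt = F+ − k y; the reaction parameters and switching threshold are invented.

    ```python
    # Sketch (under simplified assumptions) of an explicit asymptotic update for
    #     dy/dt = F_plus - k*y,
    # where F_plus is a production term and k*y a depletion term. When k*dt is
    # large, the explicit Euler step is replaced by the algebraic asymptotic form
    # y_{n+1} = (y_n + dt*F_plus) / (1 + k*dt), which stays stable at large timesteps.
    import numpy as np

    def asymptotic_step(y, F_plus, k, dt, switch=1.0):
        """One explicit step; use the asymptotic update where k*dt exceeds `switch`."""
        stiff = k * dt > switch
        euler = y + dt * (F_plus - k * y)                        # ordinary explicit Euler
        asym = (y + dt * F_plus) / (1.0 + k * dt)                # asymptotic approximation
        return np.where(stiff, asym, euler)

    # Example: a single very stiff species relaxing toward F_plus/k = 2.0
    y, F_plus, k, dt = 10.0, 2.0e6, 1.0e6, 1.0e-3   # k*dt = 1000 >> 1
    for _ in range(5):
        y = asymptotic_step(np.array(y), F_plus, k, dt)
    print(float(y))   # approaches 2.0 without the instability of plain explicit Euler
    ```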

  15. Numerical analysis of composite STEEL-CONCRETE SECTIONS using integral equation of Volterra

    NASA Astrophysics Data System (ADS)

    Partov, Doncho; Kantchev, Vesselin

    2011-09-01

    The paper presents an analysis of the stress and deflection changes due to creep in a statically determinate composite steel-concrete beam. The mathematical model involves the equations of equilibrium, compatibility and constitutive relationship, i.e. an elastic law for the steel part and an integral-type creep law of Boltzmann — Volterra for the concrete part. On the basis of the theory of the viscoelastic body of Arutyunian-Trost-Bažant for determining the redistribution of stresses in the beam section between the concrete plate and the steel beam with respect to time t, two independent Volterra integral equations of the second kind have been derived. A numerical method based on a linear approximation of the singular kernel function in the integral equation is presented. An example with the proposed model is investigated. The creep functions are those suggested by the CEB MC90-99 and ACI 209R-92 models. The elastic modulus of concrete E_c(t) is assumed to be constant in time t. The results obtained from both models are compared.

  16. Numerical analysis of composite STEEL-CONCRETE SECTIONS using integral equation of Volterra

    NASA Astrophysics Data System (ADS)

    Partov, Doncho; Kantchev, Vesselin

    2011-09-01

    The paper presents an analysis of the stress and deflection changes due to creep in a statically determinate composite steel-concrete beam. The mathematical model involves the equations of equilibrium, compatibility and constitutive relationship, i.e. an elastic law for the steel part and an integral-type creep law of Boltzmann — Volterra for the concrete part. On the basis of the theory of the viscoelastic body of Arutyunian-Trost-Bažant for determining the redistribution of stresses in the beam section between the concrete plate and the steel beam with respect to time t, two independent Volterra integral equations of the second kind have been derived. A numerical method based on a linear approximation of the singular kernel function in the integral equation is presented. An example with the proposed model is investigated. The creep functions are those suggested by the CEB MC90-99 and ACI 209R-92 models. The elastic modulus of concrete E_c(t) is assumed to be constant in time t. The results obtained from both models are compared.
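
    As a generic illustration (not the paper's creep model), a Volterra integral equation of the second kind can be marched in time with a trapezoidal, piecewise-linear treatment of the kernel term; the test problem below is made up and has the known solution y(t) = exp(t).

    ```python
    # Generic sketch: solve y(t) = f(t) + integral_0^t K(t, s) y(s) ds by
    # time-marching with the trapezoidal rule (piecewise-linear integrand).
    import numpy as np

    def solve_volterra(f, K, t):
        h = t[1] - t[0]
        y = np.empty_like(t)
        y[0] = f(t[0])
        for n in range(1, len(t)):
            # trapezoidal weights for nodes 0..n, with the value at node n still unknown
            acc = 0.5 * K(t[n], t[0]) * y[0]
            acc += sum(K(t[n], t[j]) * y[j] for j in range(1, n))
            rhs = f(t[n]) + h * acc
            y[n] = rhs / (1.0 - 0.5 * h * K(t[n], t[n]))
        return y

    # Test problem: y(t) = 1 + integral_0^t y(s) ds  (K = 1, f = 1)  has solution exp(t).
    t = np.linspace(0.0, 1.0, 101)
    y = solve_volterra(lambda s: 1.0, lambda a, b: 1.0, t)
    print(np.max(np.abs(y - np.exp(t))))   # small discretization error
    ```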

  17. Integration of a silicon-based microprobe into a gear measuring instrument for accurate measurement of micro gears

    NASA Astrophysics Data System (ADS)

    Ferreira, N.; Krah, T.; Jeong, D. C.; Metz, D.; Kniel, K.; Dietzel, A.; Büttgenbach, S.; Härtig, F.

    2014-06-01

    The integration of silicon micro probing systems into conventional gear measuring instruments (GMIs) allows fully automated measurements of external involute micro spur gears of normal modules smaller than 1 mm. This system, based on a silicon microprobe, has been developed and manufactured at the Institute for Microtechnology of the Technische Universität Braunschweig. The microprobe consists of a silicon sensor element and a stylus which is oriented perpendicularly to the sensor. The sensor is fabricated by means of silicon bulk micromachining. Its small dimensions of 6.5 mm × 6.5 mm allow compact mounting in a cartridge to facilitate the integration into a GMI. In this way, tactile measurements of 3D microstructures can be realized. To enable three-dimensional measurements with marginal forces, four Wheatstone bridges are built with diffused piezoresistors on the membrane of the sensor. On the reverse of the membrane, the stylus is glued perpendicularly to the sensor on a boss to transmit the probing forces to the sensor element during measurements. Sphere diameters smaller than 300 µm and shaft lengths of 5 mm as well as measurement forces from 10 µN enable the measurements of 3D microstructures. Such micro probing systems can be integrated into universal coordinate measuring machines and also into GMIs to extend their field of application. Practical measurements were carried out at the Physikalisch-Technische Bundesanstalt by qualifying the microprobes on a calibrated reference sphere to determine their sensitivity and their physical dimensions in volume. Following that, profile and helix measurements were carried out on a gear measurement standard with a module of 1 mm. The comparison of the measurements shows good agreement between the measurement values and the calibrated values. This result is a promising basis for the realization of smaller probe diameters for the tactile measurement of micro gears with smaller modules.

  18. Simulation of Accurate Vibrationally Resolved Electronic Spectra: the Integrated Time-Dependent and Time-Independent Framework

    NASA Astrophysics Data System (ADS)

    Baiardi, Alberto; Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien

    2014-06-01

    Two parallel theories including Franck-Condon, Herzberg-Teller and Duschinsky (i.e., mode mixing) effects, and allowing different approximations for the description of the excited-state PES, have been developed in order to simulate realistic, asymmetric electronic spectral line-shapes that take the vibrational structure into account: the so-called sum-over-states or time-independent (TI) method and the alternative time-dependent (TD) approach, which exploits the properties of the Fourier transform. The integrated TI-TD procedure, included within a general-purpose QM code [1,2], allows the computation of one-photon absorption, fluorescence, phosphorescence, electronic circular dichroism, circularly polarized luminescence and resonance Raman spectra. Combining both approaches, which use a single set of starting data, makes it possible to profit from their respective advantages and to minimize their respective limitations: the time-dependent route automatically includes all vibrational states and, possibly, temperature effects, while the time-independent route allows one to identify and assign single vibronic transitions. Interpretation, analysis and assignment of experimental spectra based on integrated TI-TD vibronic computations will be illustrated for challenging cases of medium-sized open-shell systems in the gas and condensed phases with inclusion of leading anharmonic effects. 1. V. Barone, A. Baiardi, M. Biczysko, J. Bloino, C. Cappelli, F. Lipparini, Phys. Chem. Chem. Phys., 14, 12404 (2012). 2. A. Baiardi, V. Barone, J. Bloino, J. Chem. Theory Comput., 9, 4097-4115 (2013).

  19. Assessment method of numerical integration used in measuring profile based on ultra-precise thin light beam scanning

    NASA Astrophysics Data System (ADS)

    Lang, Zhi-Guo; Tan, Jiu-Bin

    2009-11-01

    In order to improve the precision of profile measurement based on ultra-precise thin light beam scanning, an assessment method that compares different numerical integration algorithms in the frequency domain is put forward. The compared numerical integration methods are regarded as recursive digital filters. By comparing their frequency-response functions, the way noise at different frequencies propagates through the integration of the measured slope data can be analyzed directly and clearly. The analysis shows that the cubic spline method is better than the trapezoidal, Simpson and 3/8 Simpson rules.
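
    As a hedged illustration of this frequency-domain view (not the authors' implementation), the trapezoidal and Simpson rules can be written as recursive digital filters and their magnitude responses compared with the ideal integrator 1/ω; the unit step size and evaluation frequencies below are arbitrary choices.

    ```python
    # Sketch: treat integration rules as recursive digital filters and compare
    # their magnitude responses with the ideal integrator magnitude 1/omega.
    import numpy as np
    from scipy.signal import freqz

    h = 1.0
    # Frequencies in rad/sample, stopping short of pi to avoid the pole of the Simpson filter.
    w = np.linspace(0.01, 3.0, 512)

    # Trapezoidal rule:  y[n] = y[n-1] + (h/2)*(x[n] + x[n-1])
    _, H_trap = freqz([h/2, h/2], [1.0, -1.0], worN=w)

    # Simpson's rule:    y[n] = y[n-2] + (h/3)*(x[n] + 4*x[n-1] + x[n-2])
    _, H_simp = freqz([h/3, 4*h/3, h/3], [1.0, 0.0, -1.0], worN=w)

    ideal = 1.0 / w   # magnitude of the continuous-time integrator 1/(j*omega)

    for name, H in (("trapezoidal", H_trap), ("Simpson", H_simp)):
        err = np.abs(np.abs(H) - ideal) / ideal
        print(f"{name}: relative magnitude error at w = 0.1 rad/sample: "
              f"{np.interp(0.1, w, err):.2e}")
    ```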

  20. An integrated data-directed numerical method for estimating the undiscovered mineral endowment in a region

    USGS Publications Warehouse

    McCammon, R.B.; Finch, W.I.; Kork, J.O.; Bridges, N.J.

    1994-01-01

    An integrated data-directed numerical method has been developed to estimate the undiscovered mineral endowment within a given area. The method has been used to estimate the undiscovered uranium endowment in the San Juan Basin, New Mexico, U.S.A. The favorability of uranium concentration was evaluated in each of 2,068 cells defined within the Basin. Favorability was based on the correlated similarity of the geologic characteristics of each cell to the geologic characteristics of five area-related deposit models. Estimates of the undiscovered endowment for each cell were categorized according to deposit type, depth, and cutoff grade. The method can be applied to any mineral or energy commodity provided that the data collected reflect discovered endowment. © 1994 Oxford University Press.

  1. A prefiltering version of the Kalman filter with new numerical integration formulas for Riccati equations

    NASA Technical Reports Server (NTRS)

    Womble, M. E.; Potter, J. E.

    1975-01-01

    A prefiltering version of the Kalman filter is derived for both discrete and continuous measurements. The derivation consists of determining a single discrete measurement that is equivalent to either a time segment of continuous measurements or a set of discrete measurements. This prefiltering version of the Kalman filter easily handles numerical problems associated with rapid transients and ill-conditioned Riccati matrices. Therefore, the derived technique for extrapolating the Riccati matrix from one time to the next constitutes a new set of integration formulas which alleviate ill-conditioning problems associated with continuous Riccati equations. Furthermore, since a time segment of continuous measurements is converted into a single discrete measurement, Potter's square root formulas can be used to update the state estimate and its error covariance matrix. Therefore, if having the state estimate and its error covariance matrix at discrete times is acceptable, the prefilter extends square root filtering with all its advantages, to continuous measurement problems.

  2. Intra-Auditory Integration Improves Motor Performance and Synergy in an Accurate Multi-Finger Pressing Task

    PubMed Central

    Koh, Kyung; Kwon, Hyun Joon; Park, Yang Sun; Kiemel, Tim; Miller, Ross H.; Kim, Yoon Hyuk; Shin, Joon-Ho; Shim, Jae Kun

    2016-01-01

    Humans detect changes in air pressure and understand their surroundings through the auditory system. The sound humans perceive is composed of two distinct physical properties, frequency and intensity. However, our knowledge of how the brain perceives and combines these two properties simultaneously (i.e., intra-auditory integration) is limited, especially in relation to motor behaviors. Here, we investigated the effect of intra-auditory integration between the frequency and intensity components of auditory feedback on motor outputs in a constant finger-force production task. The hierarchical variability decomposition model previously developed was used to decompose motor performance into mathematically independent components, each of which quantifies a distinct motor behavior such as consistency, repeatability, systematic error, within-trial synergy, or between-trial synergy. We hypothesized that feedback on two components of sound as a function of motor performance (frequency and intensity) would improve motor performance and multi-finger synergy compared to feedback on just one component (frequency or intensity). Subjects were instructed to match the reference force of 18 N with the sum of all finger forces (virtual finger or VF force) while listening to auditory feedback of their accuracy. Three experimental conditions were used: (i) condition F, where frequency changed; (ii) condition I, where intensity changed; (iii) condition FI, where both frequency and intensity changed. Motor performance was enhanced for the FI condition as compared to either the F or I condition alone. The enhancement of motor performance was achieved mainly by the improved consistency and repeatability. However, the systematic error remained unchanged across conditions. Within- and between-trial synergies were also improved for the FI condition as compared to either the F or I condition alone. However, variability of individual finger forces for the FI condition was not significantly

  3. Regional analysis techniques for integrating experimental and numerical measurements of transport properties of reservoir rocks

    NASA Astrophysics Data System (ADS)

    Alizadeh, S. M.; Latham, S.; Middleton, J.; Limaye, A.; Senden, T. J.; Arns, C. H.

    2017-02-01

    Assessing the mechanisms of micro-structural change and their effect on transport properties using digital core analysis requires balancing field of view and resolution. This typically leads to the compromise of working with relatively small samples, where boundary effects can be substantial. A direct comparison with experiment, as e.g. desirable to eliminate unknown parameters and integrate numerical and physical experiments, needs to consider these boundary effects. Here we develop a workflow to define measuring windows within a sample where these boundary effects are minimised allowing the integration of physical and numerical experiment. We consider in particular sleeve leakage and use a radial partitioning of the solutions to various transport equations to derive relevant regional measures, which may be used for the development of cross-correlations between physical properties. Samples of Bentheimer and Castlegate sandstone as well as Mt. Gambier limestone and a sucrosic dolomite are considered. The sample plugs are encased in rubber sleeves and micro-CT images acquired at ambient conditions. Using these high-resolution images we calculate transport properties, namely permeability and electrical conductivity, and analyse the resulting field solutions with regard to flux across different regions of interest. The latter are selected on the basis of distance to the sample sleeve inner surface. Clear bypassing at the sleeve-sample interface in terms of elevated fluxes is observed for all samples, although to different extent. We consider different sleeve boundary conditions to define a measuring window minimising these effects, use the procedure to compare flux averages defined over these measuring windows with conventional choices of simulation domains, and compare resulting physical cross-correlations.

  4. Orbit determination based on meteor observations using numerical integration of equations of motion

    NASA Astrophysics Data System (ADS)

    Dmitriev, V.; Lupovka, V.; Gritsevich, M.

    2014-07-01

    We review the definitions and approaches to orbital-characteristics analysis applied to photographic or video ground-based observations of meteors. A number of camera networks dedicated to meteor registration have been established all over the world, including in the USA, Canada, Central Europe, Australia, Spain, Finland and Poland. Many of these networks are currently operational. The meteor observations are conducted from different locations hosting the network stations. Each station is equipped with at least one camera for continuous monitoring of the firmament (except during possible weather restrictions). For registered multi-station meteors, it is possible to accurately determine the direction and absolute value of the meteor velocity and thus obtain the topocentric radiant. Based on the topocentric radiant, one further determines the heliocentric meteor orbit. We aim to reduce the total uncertainty in our orbit-determination technique, keeping it even less than the accuracy of the observations. Additional corrections for the zenith attraction are widely in use and are implemented, for example, in [1]. We propose a technique for meteor-orbit determination with higher accuracy. We transform the topocentric radiant into the inertial (J2000) coordinate system using the model recommended by the IAU [2]. The main difference compared to existing orbit-determination techniques is the integration of the ordinary differential equations of motion instead of an additive correction to the apparent velocity for zenith attraction. The attraction of the central body (the Sun), the perturbations by the Earth, Moon and other planets of the Solar System, the Earth's flattening (important at the initial moment of integration, i.e. at the moment when a meteoroid enters the atmosphere), and atmospheric drag may optionally be included in the equations. In addition, reverse integration of the same equations can be performed to analyze the orbital evolution preceding the meteoroid's collision with Earth. To demonstrate the developed
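
    A strongly simplified sketch of the core step above: integrate heliocentric equations of motion backward in time from an assumed atmospheric-entry state, here keeping only the Sun's point-mass gravity (the planetary, flattening and drag perturbations mentioned in the abstract are omitted); the entry state is invented for illustration.

    ```python
    # Sketch: reverse integration of simplified heliocentric equations of motion.
    import numpy as np
    from scipy.integrate import solve_ivp

    GM_SUN = 1.32712440018e20      # m^3/s^2
    AU = 1.495978707e11            # m

    def two_body(t, s):
        """Point-mass solar gravity only; s = [x, y, z, vx, vy, vz]."""
        r, v = s[:3], s[3:]
        a = -GM_SUN * r / np.linalg.norm(r)**3
        return np.concatenate([v, a])

    # Illustrative heliocentric state at atmospheric entry (position ~1 AU, made up).
    r0 = np.array([1.0 * AU, 0.0, 0.0])
    v0 = np.array([5.0e3, 35.0e3, 2.0e3])        # m/s
    state0 = np.concatenate([r0, v0])

    # Reverse integration: propagate half a year into the past.
    half_year = -0.5 * 365.25 * 86400.0
    sol = solve_ivp(two_body, (0.0, half_year), state0, rtol=1e-10, atol=1e-6)
    print("heliocentric distance half a year earlier:",
          np.linalg.norm(sol.y[:3, -1]) / AU, "AU")
    ```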

  5. Robust numerical method for integration of point-vortex trajectories in two dimensions.

    PubMed

    Smith, Spencer A; Boghosian, Bruce M

    2011-05-01

    The venerable two-dimensional (2D) point-vortex model plays an important role as a simplified version of many disparate physical systems, including superfluids, Bose-Einstein condensates, certain plasma configurations, and inviscid turbulence. This system is also a veritable mathematical playground, touching upon many different disciplines from topology to dynamic systems theory. Point-vortex dynamics are described by a relatively simple system of nonlinear ordinary differential equations which can easily be integrated numerically using an appropriate adaptive time stepping method. As the separation between a pair of vortices relative to all other intervortex length scales decreases, however, the computational time required diverges. Accuracy is usually the most discouraging casualty when trying to account for such vortex motion, though the varying energy of this ostensibly Hamiltonian system is a potentially more serious problem. We solve these problems by a series of coordinate transformations: We first transform to action-angle coordinates, which, to lowest order, treat the close pair as a single vortex amongst all others with an internal degree of freedom. We next, and most importantly, apply Lie transform perturbation theory to remove the higher-order correction terms in succession. The overall transformation drastically increases the numerical efficiency and ensures that the total energy remains constant to high accuracy.

  6. Robust numerical method for integration of point-vortex trajectories in two dimensions

    NASA Astrophysics Data System (ADS)

    Smith, Spencer A.; Boghosian, Bruce M.

    2011-05-01

    The venerable two-dimensional (2D) point-vortex model plays an important role as a simplified version of many disparate physical systems, including superfluids, Bose-Einstein condensates, certain plasma configurations, and inviscid turbulence. This system is also a veritable mathematical playground, touching upon many different disciplines from topology to dynamic systems theory. Point-vortex dynamics are described by a relatively simple system of nonlinear ordinary differential equations which can easily be integrated numerically using an appropriate adaptive time stepping method. As the separation between a pair of vortices relative to all other intervortex length scales decreases, however, the computational time required diverges. Accuracy is usually the most discouraging casualty when trying to account for such vortex motion, though the varying energy of this ostensibly Hamiltonian system is a potentially more serious problem. We solve these problems by a series of coordinate transformations: We first transform to action-angle coordinates, which, to lowest order, treat the close pair as a single vortex amongst all others with an internal degree of freedom. We next, and most importantly, apply Lie transform perturbation theory to remove the higher-order correction terms in succession. The overall transformation drastically increases the numerical efficiency and ensures that the total energy remains constant to high accuracy.
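
    For orientation, the sketch below integrates the standard 2D point-vortex equations with an off-the-shelf adaptive solver and monitors the conserved Hamiltonian; this is the naive baseline the coordinate and Lie-transform machinery of the paper improves upon, and the circulations and initial positions are arbitrary.

    ```python
    # Sketch: naive adaptive-step integration of 2D point-vortex dynamics.
    import numpy as np
    from scipy.integrate import solve_ivp

    gamma = np.array([1.0, 1.0, -0.5])                 # circulations (illustrative)
    z0 = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 1.5])      # x1, y1, x2, y2, x3, y3

    def rhs(t, z):
        x, y = z[0::2], z[1::2]
        dx, dy = np.zeros_like(x), np.zeros_like(y)
        for i in range(len(x)):
            for j in range(len(x)):
                if i == j:
                    continue
                r2 = (x[i] - x[j])**2 + (y[i] - y[j])**2
                dx[i] += -gamma[j] * (y[i] - y[j]) / (2.0 * np.pi * r2)
                dy[i] += gamma[j] * (x[i] - x[j]) / (2.0 * np.pi * r2)
        out = np.empty_like(z)
        out[0::2], out[1::2] = dx, dy
        return out

    def hamiltonian(z):
        """Interaction energy; conserved by the exact dynamics."""
        x, y = z[0::2], z[1::2]
        H = 0.0
        for i in range(len(x)):
            for j in range(i + 1, len(x)):
                r2 = (x[i] - x[j])**2 + (y[i] - y[j])**2
                H -= gamma[i] * gamma[j] * np.log(r2) / (4.0 * np.pi)
        return H

    sol = solve_ivp(rhs, (0.0, 20.0), z0, rtol=1e-9, atol=1e-12)
    drift = abs(hamiltonian(sol.y[:, -1]) - hamiltonian(z0))
    print(f"energy drift over the run: {drift:.3e}")
    ```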

  7. A Numerical Method for Integrating the Kinetic Equation of Coalescence and Breakup of Cloud Droplets.

    NASA Astrophysics Data System (ADS)

    Enukashvily, Isaac M.

    1980-11-01

    An extension of Bleck's method and of the method of moments is developed for the numerical integration of the kinetic equation of coalescence and breakup of cloud droplets. The number density function n_k(x,t) in each separate cloud droplet packet between droplet mass grid points (x_k, x_{k+1}) is represented by an expansion in orthogonal polynomials with a given weighting function w_k(x,k). The expansion coefficients describe the deviations of n_k(x,t) from w_k(x,k). In this way droplet number concentrations, liquid water contents and other moments in each droplet packet are conserved, and the problem of solving the kinetic equation is replaced by one of solving a set of coupled differential equations for the moments of the number density function n_k(x,t). Equations for these moments in each droplet packet are derived. The method is tested against existing solutions of the coalescence equation. Numerical results are obtained when Bleck's uniform distribution hypothesis for n_k(x,t) and Golovin's asymptotic solution of the coalescence equation is chosen for the weighting function w_k(x,k). A comparison between numerical results computed by Bleck's method and by the method of this study is made. It is shown that for the correct computation of the coalescence and breakup interactions between cloud droplet packets it is very important that the approximation of n_k(x,t) between grid points (x_k, x_{k+1}) satisfies the conservation conditions for the number concentration, liquid water content and other moments of the cloud droplet packets. If these conservation conditions are satisfied, even the quasi-linear approximation of n_k(x,t), in comparison with Berry's six-point interpolation, will give reasonable results which are very close to the existing analytic solutions.

  8. A multiple hypotheses uncertainty analysis in hydrological modelling: about model structure, landscape parameterization, and numerical integration

    NASA Astrophysics Data System (ADS)

    Pilz, Tobias; Francke, Till; Bronstert, Axel

    2016-04-01

    To date, a large number of competing computer models have been developed to understand hydrological processes and to simulate and predict the streamflow dynamics of rivers. This is primarily the result of a lack of a unified theory in catchment hydrology due to insufficient process understanding and uncertainties related to model development and application. Therefore, the goal of this study is to analyze the uncertainty structure of a process-based hydrological catchment model employing a multiple hypotheses approach. The study focuses on three major problems that have received little attention in previous investigations. First, the impact of model structural uncertainty is estimated by employing several alternative representations for each simulated process. Second, the influence of landscape discretization and parameterization from multiple datasets and user decisions is explored. Third, several numerical solvers are employed for the integration of the governing ordinary differential equations to study their effect on simulation results. The generated ensemble of model hypotheses is then analyzed and the three sources of uncertainty are compared against each other. To ensure consistency and comparability, all model structures and numerical solvers are implemented within a single simulation environment. First results suggest that the selection of a sophisticated numerical solver for the differential equations positively affects simulation outcomes. However, some simple and easy-to-implement explicit methods already perform surprisingly well and need less computational effort than more advanced but time-consuming implicit techniques. There is general evidence that ambiguous and subjective user decisions form a major source of uncertainty and can greatly influence model development and application at all stages.

  9. Integrating bioassessment and ecological risk assessment: an approach to developing numerical water-quality criteria.

    PubMed

    King, Ryan S; Richardson, Curtis J

    2003-06-01

    Bioassessment is used worldwide to monitor aquatic health but is infrequently used with risk-assessment objectives, such as supporting the development of defensible, numerical water-quality criteria. To this end, we present a generalized approach for detecting potential ecological thresholds using assemblage-level attributes and a multimetric index (Index of Biological Integrity-IBI) as endpoints in response to numerical changes in water quality. To illustrate the approach, we used existing macroinvertebrate and surface-water total phosphorus (TP) datasets from an observed P gradient and a P-dosing experiment in wetlands of the south Florida coastal plain nutrient ecoregion. Ten assemblage attributes were identified as potential metrics using the observational data, and five were validated in the experiment. These five core metrics were subjected individually and as an aggregated Nutrient-IBI to nonparametric changepoint analysis (nCPA) to estimate cumulative probabilities of a threshold response to TP. Threshold responses were evident for all metrics and the IBI, and were repeatable through time. Results from the observed gradient indicated that a threshold was ≥ 50% probable between 12.6 and 19.4 µg/L TP for individual metrics and 14.8 µg/L TP for the IBI. Results from the P-dosing experiment revealed ≥ 50% probability of a response between 11.2 and 13.0 µg/L TP for the metrics and 12.3 µg/L TP for the IBI. Uncertainty analysis indicated a low (typically ≤ 5%) probability that an IBI threshold occurred at ≤ 10 µg/L TP, while there was ≥ 95% certainty that the threshold was ≤ 17 µg/L TP. The weight-of-evidence produced from these analyses implies that a TP concentration > 12-15 µg/L is likely to cause degradation of macroinvertebrate assemblage structure and function, a reflection of biological integrity, in the study area. This finding may assist in the development of a numerical water-quality criterion for

  10. Quantum free-energy differences from nonequilibrium path integrals. I. Methods and numerical application.

    PubMed

    van Zon, Ramses; Hernández de la Peña, Lisandro; Peslherbe, Gilles H; Schofield, Jeremy

    2008-10-01

    In this paper, the imaginary-time path-integral representation of the canonical partition function of a quantum system and nonequilibrium work fluctuation relations are combined to yield methods for computing free-energy differences in quantum systems using nonequilibrium processes. The path-integral representation is isomorphic to the configurational partition function of a classical field theory, to which a natural but fictitious Hamiltonian dynamics is associated. It is shown that if this system is prepared in an equilibrium state, after which a control parameter in the fictitious Hamiltonian is changed in a finite time, then formally the Jarzynski nonequilibrium work relation and the Crooks fluctuation relation hold, where work is defined as the change in the energy as given by the fictitious Hamiltonian. Since the energy diverges for the classical field theory in canonical equilibrium, two regularization methods are introduced which limit the number of degrees of freedom to be finite. The numerical applicability of the methods is demonstrated for a quartic double-well potential with varying asymmetry. A general parameter-free smoothing procedure for the work distribution functions is useful in this context.
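
    A minimal sketch of the Jarzynski estimator invoked above, ΔF = -kT ln⟨exp(-W/kT)⟩, applied to synthetic nonequilibrium work values; the path-integral and fictitious-Hamiltonian machinery of the paper is not reproduced, and the work distribution below is a made-up Gaussian chosen to be consistent with the fluctuation relation.

    ```python
    # Sketch: Jarzynski free-energy estimate from synthetic work samples.
    import numpy as np

    rng = np.random.default_rng(0)
    kT = 1.0
    true_dF = 2.0

    # Synthetic Gaussian work distribution with mean true_dF + sigma^2/(2 kT),
    # for which the Jarzynski equality recovers true_dF exactly in the limit
    # of many samples.
    sigma = 1.0
    W = rng.normal(loc=true_dF + sigma**2 / (2.0 * kT), scale=sigma, size=20000)

    dF_estimate = -kT * np.log(np.mean(np.exp(-W / kT)))
    print(f"Jarzynski estimate: {dF_estimate:.3f}  (target {true_dF})")
    ```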

  11. Shear Behavior of 3D Woven Hollow Integrated Sandwich Composites: Experimental, Theoretical and Numerical Study

    NASA Astrophysics Data System (ADS)

    Zhou, Guangming; Liu, Chang; Cai, Deng'an; Li, Wenlong; Wang, Xiaopei

    2016-11-01

    An experimental, theoretical and numerical investigation on the shear behavior of 3D woven hollow integrated sandwich composites was presented in this paper. The microstructure of the composites was studied, then the shear modulus and load-deflection curves were obtained by double lap shear tests on the specimens in two principal directions of the sandwich panels, called warp and weft. The experimental results showed that the shear modulus of the warp was higher than that of the weft and the failure occurred in the roots of piles. A finite element model was established to predict the shear behavior of the composites. The simulated results agreed well with the experimental data. Simultaneously, a theoretical method was developed to predict the shear modulus. By comparing with the experimental data, the accuracy of the theoretical method was verified. The influence of structural parameters on shear modulus was also discussed. The higher yarn number, yarn density and dip angle of the piles could all improve the shear modulus of 3D woven hollow integrated sandwich composites at different levels, while the increasing height would decrease the shear modulus.

  12. Numerical optimization of integrating cavities for diffraction-limited millimeter-wave bolometer arrays.

    PubMed

    Glenn, Jason; Chattopadhyay, Goutam; Edgington, Samantha F; Lange, Andrew E; Bock, James J; Mauskopf, Philip D; Lee, Adrian T

    2002-01-01

    Far-infrared to millimeter-wave bolometers designed to make astronomical observations are typically encased in integrating cavities at the termination of feedhorns or Winston cones. This photometer combination maximizes absorption of radiation, enables the absorber area to be minimized, and controls the directivity of absorption, thereby reducing susceptibility to stray light. In the next decade, arrays of hundreds of silicon nitride micromesh bolometers with planar architectures will be used in ground-based, suborbital, and orbital platforms for astronomy. The optimization of integrating cavity designs is required for achieving the highest possible sensitivity for these arrays. We report numerical simulations of the electromagnetic fields in integrating cavities with an infinite plane-parallel geometry formed by a solid reflecting backshort and the back surface of a feedhorn array block. Performance of this architecture for the bolometer array camera (Bolocam) for cosmology at a frequency of 214 GHz is investigated. We explore the sensitivity of absorption efficiency to absorber impedance and backshort location and the magnitude of leakage from cavities. The simulations are compared with experimental data from a room-temperature scale model and with the performance of Bolocam at a temperature of 300 mK. The main results of the simulations for Bolocam-type cavities are that (1) monochromatic absorptions as high as 95% are achievable with <1% cross talk between neighboring cavities, (2) the optimum absorber impedances are 400 ohms/sq, but with a broad maximum from approximately 150 to approximately 700 ohms/sq, and (3) maximum absorption is achieved with absorber diameters ≥ 1.5 lambda. Good general agreement between the simulations and the experiments was found.

  13. Evidence that bisphenol A (BPA) can be accurately measured without contamination in human serum and urine, and that BPA causes numerous hazards from multiple routes of exposure.

    PubMed

    vom Saal, Frederick S; Welshons, Wade V

    2014-12-01

    There is extensive evidence that bisphenol A (BPA) is related to a wide range of adverse health effects based on both human and experimental animal studies. However, a number of regulatory agencies have ignored all hazard findings. Reports of high levels of unconjugated (bioactive) serum BPA in dozens of human biomonitoring studies have also been rejected based on the prediction that the findings are due to assay contamination and that virtually all ingested BPA is rapidly converted to inactive metabolites. NIH and industry-sponsored round robin studies have demonstrated that serum BPA can be accurately assayed without contamination, while the FDA lab has acknowledged uncontrolled assay contamination. In reviewing the published BPA biomonitoring data, we find that assay contamination is, in fact, well controlled in most labs, and cannot be used as the basis for discounting evidence that significant and virtually continuous exposure to BPA must be occurring from multiple sources.

  14. Evidence that bisphenol A (BPA) can be accurately measured without contamination in human serum and urine, and that BPA causes numerous hazards from multiple routes of exposure

    PubMed Central

    vom Saal, Frederick S.; Welshons, Wade V.

    2016-01-01

    There is extensive evidence that bisphenol A (BPA) is related to a wide range of adverse health effects based on both human and experimental animal studies. However, a number of regulatory agencies have ignored all hazard findings. Reports of high levels of unconjugated (bioactive) serum BPA in dozens of human biomonitoring studies have also been rejected based on the prediction that the findings are due to assay contamination and that virtually all ingested BPA is rapidly converted to inactive metabolites. NIH and industry-sponsored round robin studies have demonstrated that serum BPA can be accurately assayed without contamination, while the FDA lab has acknowledged uncontrolled assay contamination. In reviewing the published BPA biomonitoring data, we find that assay contamination is, in fact, well controlled in most labs, and cannot be used as the basis for discounting evidence that significant and virtually continuous exposure to BPA must be occurring from multiple sources. PMID:25304273

  15. A computationally efficient and accurate numerical representation of thermodynamic properties of steam and water for computations of non-equilibrium condensing steam flow in steam turbines

    NASA Astrophysics Data System (ADS)

    Hrubý, Jan

    2012-04-01

    Mathematical modeling of the non-equilibrium condensing transonic steam flow in the complex 3D geometry of a steam turbine is a demanding problem both concerning the physical concepts and the required computational power. Available accurate formulations of steam properties IAPWS-95 and IAPWS-IF97 require much computation time. For this reason, the modelers often accept the unrealistic ideal-gas behavior. Here we present a computation scheme based on a piecewise, thermodynamically consistent representation of the IAPWS-95 formulation. Density and internal energy are chosen as independent variables to avoid variable transformations and iterations. In contrast to the previous Tabular Taylor Series Expansion Method, the pressure and temperature are continuous functions of the independent variables, which is a desirable property for the solution of the differential equations of the mass, energy, and momentum conservation for both phases.

  16. Integrative structural annotation of de novo RNA-Seq provides an accurate reference gene set of the enormous genome of the onion (Allium cepa L.).

    PubMed

    Kim, Seungill; Kim, Myung-Shin; Kim, Yong-Min; Yeom, Seon-In; Cheong, Kyeongchae; Kim, Ki-Tae; Jeon, Jongbum; Kim, Sunggil; Kim, Do-Sun; Sohn, Seong-Han; Lee, Yong-Hwan; Choi, Doil

    2015-02-01

    The onion (Allium cepa L.) is one of the most widely cultivated and consumed vegetable crops in the world. Although a considerable amount of onion transcriptome data has been deposited into public databases, the sequences of the protein-coding genes are not accurate enough to be used, owing to non-coding sequences intermixed with the coding sequences. We generated a high-quality, annotated onion transcriptome from de novo sequence assembly and intensive structural annotation using the integrated structural gene annotation pipeline (ISGAP), which identified 54,165 protein-coding genes among 165,179 assembled transcripts totalling 203.0 Mb by eliminating the intron sequences. ISGAP performed reliable annotation, recognizing accurate gene structures based on reference proteins, and ab initio gene models of the assembled transcripts. Integrative functional annotation and gene-based SNP analysis revealed a whole biological repertoire of genes and transcriptomic variation in the onion. The method developed in this study provides a powerful tool for the construction of reference gene sets for organisms based solely on de novo transcriptome data. Furthermore, the reference genes and their variation described here for the onion represent essential tools for molecular breeding and gene cloning in Allium spp.

  17. Integrating metabolic performance, thermal tolerance, and plasticity enables for more accurate predictions on species vulnerability to acute and chronic effects of global warming.

    PubMed

    Magozzi, Sarah; Calosi, Piero

    2015-01-01

    Predicting species vulnerability to global warming requires a comprehensive, mechanistic understanding of sublethal and lethal thermal tolerances. To date, however, most studies investigating species physiological responses to increasing temperature have focused on the underlying physiological traits of either acute or chronic tolerance in isolation. Here we propose an integrative, synthetic approach including the investigation of multiple physiological traits (metabolic performance and thermal tolerance), and their plasticity, to provide more accurate and balanced predictions on species and assemblage vulnerability to both acute and chronic effects of global warming. We applied this approach to more accurately elucidate relative species vulnerability to warming within an assemblage of six caridean prawns occurring in the same geographic, hence macroclimatic, region, but living in different thermal habitats. Prawns were exposed to four incubation temperatures (10, 15, 20 and 25 °C) for 7 days, their metabolic rates and upper thermal limits were measured, and plasticity was calculated according to the concept of Reaction Norms, as well as Q10 for metabolism. Compared to species occupying narrower/more stable thermal niches, species inhabiting broader/more variable thermal environments (including the invasive Palaemon macrodactylus) are likely to be less vulnerable to extreme acute thermal events as a result of their higher upper thermal limits. Nevertheless, they may be at greater risk from chronic exposure to warming due to the greater metabolic costs they incur. Indeed, a trade-off between acute and chronic tolerance was apparent in the assemblage investigated. However, the invasive species P. macrodactylus represents an exception to this pattern, showing elevated thermal limits and plasticity of these limits, as well as a high metabolic control. In general, integrating multiple proxies for species physiological acute and chronic responses to increasing

  18. Integrating laboratory creep compaction data with numerical fault models: A Bayesian framework

    USGS Publications Warehouse

    Fitzenz, D.D.; Jalobeanu, A.; Hickman, S.H.

    2007-01-01

    We developed a robust Bayesian inversion scheme to plan and analyze laboratory creep compaction experiments. We chose a simple creep law that features the main parameters of interest when trying to identify rate-controlling mechanisms from experimental data. By integrating the chosen creep law or an approximation thereof, one can use all the data, either simultaneously or in overlapping subsets, thus making more complete use of the experiment data and propagating statistical variations in the data through to the final rate constants. Despite the nonlinearity of the problem, with this technique one can retrieve accurate estimates of both the stress exponent and the activation energy, even when the porosity time series data are noisy. Whereas adding observation points and/or experiments reduces the uncertainty on all parameters, enlarging the range of temperature or effective stress significantly reduces the covariance between stress exponent and activation energy. We apply this methodology to hydrothermal creep compaction data on quartz to obtain a quantitative, semiempirical law for fault zone compaction in the interseismic period. Incorporating this law into a simple direct rupture model, we find marginal distributions of the time to failure that are robust with respect to errors in the initial fault zone porosity. Copyright 2007 by the American Geophysical Union.

  19. The numerical integration of fundamental diffraction integrals for converging polarized spherical waves using a two-dimensional form of Simpson's 1/3 Rule

    NASA Astrophysics Data System (ADS)

    Cooper, I. J.; Sheppard, C. J. R.; Roy, M.

    2005-08-01

    A comprehensive matrix method based upon a two-dimensional form of Simpson's 1/3 rule (2DSC method) to integrate numerically the vector form of the fundamental diffraction integrals is described for calculating the characteristics of the focal region for a converging polarized spherical wave. The only approximation needed in using the 2DSC method is the Kirchhoff boundary conditions at the aperture. The 2DSC method can be used to study the focusing of vector beams with different polarizations and profiles and for different filters over a large range of numerical apertures or Fresnel numbers.
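
    As a stand-in for the matrix-based 2DSC quadrature described above (the vector diffraction integrand itself is not reproduced), the sketch below assembles a two-dimensional composite Simpson's 1/3 rule from the outer product of 1D Simpson weights and checks it on a simple separable test function.

    ```python
    # Sketch: 2D composite Simpson's 1/3 rule built from an outer product of 1D weights.
    import numpy as np

    def simpson_weights(n):
        """1D composite Simpson weights on n points (n must be odd), excluding the step size."""
        if n % 2 == 0:
            raise ValueError("Simpson's rule needs an odd number of points")
        w = np.ones(n)
        w[1:-1:2] = 4.0
        w[2:-1:2] = 2.0
        return w / 3.0

    def simpson_2d(f, ax, bx, ay, by, nx=101, ny=101):
        x = np.linspace(ax, bx, nx)
        y = np.linspace(ay, by, ny)
        hx, hy = x[1] - x[0], y[1] - y[0]
        W = np.outer(simpson_weights(ny), simpson_weights(nx))   # weight matrix
        X, Y = np.meshgrid(x, y)
        return hx * hy * np.sum(W * f(X, Y))

    # Test: the integral of cos(x)*cos(y) over [0, pi/2]^2 equals 1.
    val = simpson_2d(lambda X, Y: np.cos(X) * np.cos(Y), 0.0, np.pi/2, 0.0, np.pi/2)
    print(val)   # ~= 1 to high accuracy
    ```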

  20. Integration of Genetic and Phenotypic Data in 48 Lineages of Philippine Birds Shows Heterogeneous Divergence Processes and Numerous Cryptic Species

    PubMed Central

    Campbell, Kyle K.; Braile, Thomas

    2016-01-01

    The Philippine Islands are one of the most biologically diverse archipelagoes in the world. Current taxonomy, however, may underestimate levels of avian diversity and endemism in these islands. Although species limits can be difficult to determine among allopatric populations, quantitative methods for comparing phenotypic and genotypic data can provide useful metrics of divergence among populations and identify those that merit consideration for elevation to full species status. Using a conceptual approach that integrates genetic and phenotypic data, we compared populations among 48 species, estimating genetic divergence (p-distance) using the mtDNA marker ND2 and comparing plumage and morphometrics of museum study skins. Using conservative speciation thresholds, pairwise comparisons of genetic and phenotypic divergence suggested possible species-level divergences in more than half of the species studied (25 out of 48). In speciation process space, divergence routes were heterogeneous among taxa. Nearly all populations that surpassed high genotypic divergence thresholds were Passeriformes, and non-Passeriformes populations surpassed high phenotypic divergence thresholds more commonly than expected by chance. Overall, there was an apparent logarithmic increase in phenotypic divergence with respect to genetic divergence, suggesting the possibility that divergence among these lineages may initially be driven by divergent selection in this allopatric system. Also, genetic endemism was high among sampled islands. Higher taxonomy affected divergence in genotype and phenotype. Although broader lineage, genetic, phenotypic, and numeric sampling is needed to further explore heterogeneity among divergence processes and to accurately assess species-level diversity in these taxa, our results support the need for substantial taxonomic revisions among Philippine birds. The conservation implications are profound. PMID:27442510

  1. Integration of Genetic and Phenotypic Data in 48 Lineages of Philippine Birds Shows Heterogeneous Divergence Processes and Numerous Cryptic Species.

    PubMed

    Campbell, Kyle K; Braile, Thomas; Winker, Kevin

    2016-01-01

    The Philippine Islands are one of the most biologically diverse archipelagoes in the world. Current taxonomy, however, may underestimate levels of avian diversity and endemism in these islands. Although species limits can be difficult to determine among allopatric populations, quantitative methods for comparing phenotypic and genotypic data can provide useful metrics of divergence among populations and identify those that merit consideration for elevation to full species status. Using a conceptual approach that integrates genetic and phenotypic data, we compared populations among 48 species, estimating genetic divergence (p-distance) using the mtDNA marker ND2 and comparing plumage and morphometrics of museum study skins. Using conservative speciation thresholds, pairwise comparisons of genetic and phenotypic divergence suggested possible species-level divergences in more than half of the species studied (25 out of 48). In speciation process space, divergence routes were heterogeneous among taxa. Nearly all populations that surpassed high genotypic divergence thresholds were Passeriformes, and non-Passeriformes populations surpassed high phenotypic divergence thresholds more commonly than expected by chance. Overall, there was an apparent logarithmic increase in phenotypic divergence with respect to genetic divergence, suggesting the possibility that divergence among these lineages may initially be driven by divergent selection in this allopatric system. Also, genetic endemism was high among sampled islands. Higher taxonomy affected divergence in genotype and phenotype. Although broader lineage, genetic, phenotypic, and numeric sampling is needed to further explore heterogeneity among divergence processes and to accurately assess species-level diversity in these taxa, our results support the need for substantial taxonomic revisions among Philippine birds. The conservation implications are profound.

  2. Numerical Simulation of Natural Convection of a Nanofluid in an Inclined Heated Enclosure Using Two-Phase Lattice Boltzmann Method: Accurate Effects of Thermophoresis and Brownian Forces.

    PubMed

    Ahmed, Mahmoud; Eslamian, Morteza

    2015-12-01

    Laminar natural convection in differentially heated (β = 0°, where β is the inclination angle), inclined (β = 30° and 60°), and bottom-heated (β = 90°) square enclosures filled with a nanofluid is investigated, using a two-phase lattice Boltzmann simulation approach. The effects of the inclination angle on Nu number and convection heat transfer coefficient are studied. The effects of thermophoresis and Brownian forces which create a relative drift or slip velocity between the particles and the base fluid are included in the simulation. The effect of thermophoresis is considered using an accurate and quantitative formula proposed by the authors. Some of the existing results on natural convection are erroneous due to using wrong thermophoresis models or simply ignoring the effect. Here we show that thermophoresis has a considerable effect on heat transfer augmentation in laminar natural convection. Our non-homogenous modeling approach shows that heat transfer in nanofluids is a function of the inclination angle and Ra number. It also reveals some details of flow behavior which cannot be captured by single-phase models. The minimum heat transfer rate is associated with β = 90° (bottom-heated) and the maximum heat transfer rate occurs in an inclination angle which varies with the Ra number.

  3. Numerical Simulation of Natural Convection of a Nanofluid in an Inclined Heated Enclosure Using Two-Phase Lattice Boltzmann Method: Accurate Effects of Thermophoresis and Brownian Forces

    NASA Astrophysics Data System (ADS)

    Ahmed, Mahmoud; Eslamian, Morteza

    2015-07-01

    Laminar natural convection in differentially heated (β = 0°, where β is the inclination angle), inclined (β = 30° and 60°), and bottom-heated (β = 90°) square enclosures filled with a nanofluid is investigated, using a two-phase lattice Boltzmann simulation approach. The effects of the inclination angle on Nu number and convection heat transfer coefficient are studied. The effects of thermophoresis and Brownian forces which create a relative drift or slip velocity between the particles and the base fluid are included in the simulation. The effect of thermophoresis is considered using an accurate and quantitative formula proposed by the authors. Some of the existing results on natural convection are erroneous due to using wrong thermophoresis models or simply ignoring the effect. Here we show that thermophoresis has a considerable effect on heat transfer augmentation in laminar natural convection. Our non-homogenous modeling approach shows that heat transfer in nanofluids is a function of the inclination angle and Ra number. It also reveals some details of flow behavior which cannot be captured by single-phase models. The minimum heat transfer rate is associated with β = 90° (bottom-heated) and the maximum heat transfer rate occurs in an inclination angle which varies with the Ra number.

  4. Exactification of the Poincaré asymptotic expansion of the Hankel integral: spectacularly accurate asymptotic expansions and non-asymptotic scales.

    PubMed

    Galapon, Eric A; Martinez, Kay Marie L

    2014-02-08

    We obtain an exactification of the Poincaré asymptotic expansion (PAE) of the Hankel integral, [Formula: see text] as [Formula: see text], using the distributional approach of McClure & Wong. We find that, for half-integer orders of the Bessel function, the exactified asymptotic series terminates, so that it gives an exact finite sum representation of the Hankel integral. For other orders, the asymptotic series does not terminate and is generally divergent, but is amenable to superasymptotic summation, i.e. by optimal truncation. For specific examples, we compare the accuracy of the optimally truncated asymptotic series owing to the McClure-Wong distributional method with that owing to the Mellin-Barnes integral method. We find that the former is spectacularly more accurate than the latter, by, in some cases, more than 70 orders of magnitude for the same moderate value of b. Moreover, the exactification can lead to a resummation of the PAE when it is exact, with the resummed Poincaré series exhibiting again the same spectacular accuracy. More importantly, the distributional method may yield meaningful resummations that involve scales that are not asymptotic sequences.

  5. Predicting geomorphic evolution through integration of numerical-model scenarios and topographic/bathymetric-survey updates

    NASA Astrophysics Data System (ADS)

    Plant, N. G.; Long, J.; Dalyander, S.; Thompson, D.; Miselis, J. L.

    2013-12-01

    Natural resource and hazard management of barrier islands requires an understanding of geomorphic changes associated with long-term processes and storms. Uncertainty exists in understanding how long-term processes interact with the geomorphic changes caused by storms and the resulting perturbations of the long-term evolution trajectories. We use high-resolution data sets to initialize and correct high-fidelity numerical simulations of oceanographic forcing and resulting barrier island evolution. We simulate two years of observed storms to determine the individual and cumulative impacts of these events. Results are separated into cross-shore and alongshore components of sediment transport and compared with observed topographic and bathymetric changes during these time periods. The discrete island change induced by these storms is integrated with previous knowledge of long-term net alongshore sediment transport to project island evolution. The approach has been developed and tested using data collected at the Chandeleur Island chain off the coast of Louisiana (USA). The simulation time period included impacts from tropical and winter storms, as well as a human-induced perturbation associated with construction of a sand berm along the island shoreline. The predictions and observations indicated that storm and long-term processes both contribute to the migration, lowering, and disintegration of the artificial berm and natural island. Further analysis will determine the relative importance of cross-shore and alongshore sediment transport processes and the dominant time scales that drive each of these processes and subsequent island morphologic response.

  6. A novel approach to improve numerical weather prediction skills by using anomaly integration and historical data

    NASA Astrophysics Data System (ADS)

    Peng, Xindong; Che, Yuzhang; Chang, Jun

    2013-08-01

    Using the concept of anomaly integration and historical climate data, we have developed a novel operational framework to implement deterministic numerical weather prediction within 15 days. Real-case validation shows pronounced improvements in the forecasts of global geopotential heights in 20 out of 30 cases with the Community Atmosphere Model version 3.0. Seven other cases are marginally improved, and only three are degraded, though even these are improved within the first-week period. The average of the 30 cases shows a clear increase of the anomaly correlation coefficient (ACC) and a decrease of the root mean square error (RMSE) of the geopotential height over the global, hemispherical, and tropical zones. Significant improvement of the tropical circulation is displayed within the first-week prediction. The forecasting skill is extended by 0.6 day in terms of the number of days with ACC greater than 0.6 for the 30-case-averaged 500 hPa geopotential height on the global scale. The 30-case mean ACC and RMSE of the 500 hPa temperature show increments of 0.2 and -1.6 K, respectively, in the first-week prediction. In the case of January 2008, a much more reasonable horizontal distribution and vertical structure are achieved in the bias-corrected model geopotential height, temperature, relative humidity, and horizontal wind components in comparison to reanalysis data. Despite the need for additional storage of historical modeling data, the new method does not increase computational costs and is therefore suitable for routine application.

  7. Solutions to the ellipsoidal Clairaut constant and the inverse geodetic problem by numerical integration

    NASA Astrophysics Data System (ADS)

    Sjöberg, L. E.

    2012-11-01

    We derive computational formulas for determining the Clairaut constant, i.e. the cosine of the maximum latitude of the geodesic arc, from two given points on the oblate ellipsoid of revolution. In all cases the Clairaut constant is unique. The inverse geodetic problem on the ellipsoid is to determine the geodesic arc between the given points and the azimuths of the arc at those points. We present the solution for the fixed Clairaut constant. If the given points are not (nearly) antipodal, each azimuth and location of the geodesic is unique, while for fixed points in the "antipodal region", roughly within 36″.2 of the antipode, there are two geodesics mirrored in the equator and with complementary azimuths at each point. In the special case with the given points located at the poles of the ellipsoid, all meridians are geodesics. The special role played by the Clairaut constant and the numerical integration make this method different from others available in the literature.
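
    For orientation, Clairaut's relation states that the product of the cosine of the reduced latitude and the sine of the azimuth is constant along a geodesic on an ellipsoid of revolution, and that this constant equals the cosine of the maximum (reduced) latitude reached by the arc. The sketch below only evaluates the constant once a point and its azimuth are known; determining it from two given points, the problem solved in the paper, is the harder step. The function name and the WGS84 default flattening are illustrative assumptions.

```python
import numpy as np

def clairaut_constant(lat_deg, azimuth_deg, f=1.0 / 298.257223563):
    """Clairaut constant cos(beta) * sin(alpha) from one point and its azimuth.

    lat_deg      geodetic latitude of the point (degrees)
    azimuth_deg  azimuth of the geodesic at that point (degrees from north)
    f            flattening of the ellipsoid (WGS84 by default)
    """
    phi = np.radians(lat_deg)
    beta = np.arctan((1.0 - f) * np.tan(phi))      # reduced (parametric) latitude
    return np.cos(beta) * np.sin(np.radians(azimuth_deg))
```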

  8. Numerical Modeling of 3-D Dynamics of Ultrasound Contrast Agent Microbubbles Using the Boundary Integral Method

    NASA Astrophysics Data System (ADS)

    Calvisi, Michael; Manmi, Kawa; Wang, Qianxi

    2014-11-01

    Ultrasound contrast agents (UCAs) are microbubbles stabilized with a shell typically of lipid, polymer, or protein and are emerging as a unique tool for noninvasive therapies ranging from gene delivery to tumor ablation. The nonspherical dynamics of contrast agents are thought to play an important role in both diagnostic and therapeutic applications, for example, causing the emission of subharmonic frequency components and enhancing the uptake of therapeutic agents across cell membranes and tissue interfaces. A three-dimensional model for nonspherical contrast agent dynamics based on the boundary integral method is presented. The effects of the encapsulating shell are approximated by adapting Hoff's model for thin-shell, spherical contrast agents to the nonspherical case. A high-quality mesh of the bubble surface is maintained by implementing a hybrid approach of the Lagrangian method and elastic mesh technique. Numerical analyses for the dynamics of UCAs in an infinite liquid and near a rigid wall are performed in parameter regimes of clinical relevance. The results show that the presence of a coating significantly reduces the oscillation amplitude and period, increases the ultrasound pressure amplitude required to incite jetting, and reduces the jet width and velocity.

  9. Accurate and fast computation of transmission cross coefficients

    NASA Astrophysics Data System (ADS)

    Apostol, Štefan; Hurley, Paul; Ionescu, Radu-Cristian

    2015-03-01

    Precise and fast computation of aerial images is essential. Typical lithographic simulators employ a Köhler illumination system for which aerial imagery is obtained using a large number of Transmission Cross Coefficients (TCCs). These are generally computed by a slow numerical evaluation of a double integral. We review the general framework in which the 2D imagery is solved and then propose a fast and accurate method to obtain the TCCs. We obtain analytical solutions and thus avoid the complexity-accuracy trade-off encountered with numerical integration. Compared to other analytical integration methods, the one presented is faster, more general and more tractable.
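
    For context, the slow double integral referred to above is, in Hopkins' formulation, TCC(f1, f2) = ∫∫ J(f) K(f + f1) K*(f + f2) d²f, with J the effective source and K the pupil function. The deliberately naive quadrature below, for a circular pupil and a circular partially coherent source, illustrates the cost that analytical evaluation avoids; the grid size, frequency cutoff and normalization are illustrative assumptions, and the conjugate is dropped because the pupil used here is real.

```python
import numpy as np

def tcc_numeric(f1, f2, sigma=0.5, n=201, fmax=2.0):
    """Brute-force Hopkins TCC for an ideal circular pupil and a circular
    source of partial-coherence factor sigma; f1 and f2 are 2-vectors of
    normalized spatial frequency. n and fmax set the quadrature grid."""
    fx, fy = np.meshgrid(np.linspace(-fmax, fmax, n),
                         np.linspace(-fmax, fmax, n))
    pupil = lambda gx, gy: ((gx**2 + gy**2) <= 1.0).astype(float)   # K
    source = ((fx**2 + fy**2) <= sigma**2).astype(float)            # J
    integrand = source * pupil(fx + f1[0], fy + f1[1]) * pupil(fx + f2[0], fy + f2[1])
    cell = (2.0 * fmax / (n - 1))**2
    return integrand.sum() * cell / (np.pi * sigma**2)               # normalized by source area
```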

  10. Accurate identification of fastidious Gram-negative rods: integration of both conventional phenotypic methods and 16S rRNA gene analysis

    PubMed Central

    2013-01-01

    Background Accurate identification of fastidious Gram-negative rods (GNR) by conventional phenotypic characteristics is a challenge for diagnostic microbiology. The aim of this study was to evaluate the use of molecular methods, e.g., 16S rRNA gene sequence analysis for identification of fastidious GNR in the clinical microbiology laboratory. Results A total of 158 clinical isolates covering 20 genera and 50 species isolated from 1993 to 2010 were analyzed by comparing biochemical and 16S rRNA gene sequence analysis based identification. 16S rRNA gene homology analysis identified 148/158 (94%) of the isolates to species level, 9/158 (5%) to genus and 1/158 (1%) to family level. Compared to 16S rRNA gene sequencing as reference method, phenotypic identification correctly identified 64/158 (40%) isolates to species level, mainly Aggregatibacter aphrophilus, Cardiobacterium hominis, Eikenella corrodens, Pasteurella multocida, and 21/158 (13%) isolates correctly to genus level, notably Capnocytophaga sp.; 73/158 (47%) of the isolates were not identified or misidentified. Conclusions We herein propose an efficient strategy for accurate identification of fastidious GNR in the clinical microbiology laboratory by integrating both conventional phenotypic methods and 16S rRNA gene sequence analysis. We conclude that 16S rRNA gene sequencing is an effective means for identification of fastidious GNR, which are not readily identified by conventional phenotypic methods. PMID:23855986

  11. Recovery Act: An Integrated Experimental and Numerical Study: Developing a Reaction Transport Model that Couples Chemical Reactions of Mineral Dissolution/Precipitation with Spatial and Temporal Flow Variations.

    SciTech Connect

    Saar, Martin O.; Seyfried, Jr., William E.; Longmire, Ellen K.

    2016-06-24

    A total of 12 publications and 23 abstracts were produced as a result of this study. In particular, the compilation of a thermodynamic database utilizing consistent, current thermodynamic data is a major step toward accurately modeling multi-phase fluid interactions with solids. Existing databases designed for aqueous fluids did not mesh well with existing solid phase databases. Addition of a second liquid phase (CO2) magnifies the inconsistencies between aqueous and solid thermodynamic databases. Overall, the combination of high temperature and pressure lab studies (task 1), using a purpose built apparatus, and solid characterization (task 2), using XRCT and more developed technologies, allowed observation of dissolution and precipitation processes under CO2 reservoir conditions. These observations were combined with results from PIV experiments on multi-phase fluids (task 3) in typical flow path geometries. The results of tasks 1, 2, and 3 were compiled and integrated into numerical models utilizing Lattice-Boltzmann simulations (task 4) to realistically model the physical processes and were ultimately folded into the TOUGH2 code for reservoir scale modeling (task 5). Compilation of the thermodynamic database assisted comparisons to PIV experiments (Task 3) and greatly improved Lattice Boltzmann (Task 4) and TOUGH2 simulations (Task 5). PIV (Task 3) and the experimental apparatus (Task 1) have identified problem areas in the TOUGHREACT code. Additional lab experiments and coding work have been integrated into an improved numerical modeling code.

  12. Numerical methods for the simulation of complex multi-body flows with applications for the integrated Space Shuttle vehicle

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    1992-01-01

    The following papers are presented: (1) numerical methods for the simulation of complex multi-body flows with applications for the Integrated Space Shuttle vehicle; (2) a generalized scheme for 3-D hyperbolic grid generation; (3) collar grids for intersecting geometric components within the Chimera overlapped grid scheme; and (4) application of the Chimera overlapped grid scheme to simulation of Space Shuttle ascent flows.

  13. New methods for the numerical integration of ordinary differential equations and their application to the equations of motion of spacecraft

    NASA Technical Reports Server (NTRS)

    Banyukevich, A.; Ziolkovski, K.

    1975-01-01

    A number of hybrid methods for solving Cauchy problems are described on the basis of an evaluation of advantages of single and multiple-point numerical integration methods. The selection criterion is the principle of minimizing computer time. The methods discussed include the Nordsieck method, the Bulirsch-Stoer extrapolation method, and the method of recursive Taylor-Steffensen power series.

  14. Sull'Integrazione delle Strutture Numeriche nella Scuola dell'Obbligo (Integrating Numerical Structures in Mandatory School).

    ERIC Educational Resources Information Center

    Bonotto, C.

    1995-01-01

    Attempted to verify knowledge regarding decimal and rational numbers in children ages 10-14. Discusses how pupils can receive and assimilate extensions of the number system from natural numbers to decimals and fractions and later can integrate this extension into a single and coherent numerical structure. (Author/MKR)

  15. The determination of the dynamical flattening J2 and the mass of Saturn via improving the orbits by numerical integration.

    NASA Astrophysics Data System (ADS)

    Shen, Kaixian

    1990-12-01

    The orbits of Iapetus and Titan have been generated by numerical integration using the Gauss-Jackson method and fitted to 1414 astrometric observations of Iapetus and Titan. The fit yielded a well-determined value of the dynamical flattening J2 of Saturn and of the mass ratio Saturn/Sun.

  16. Experimental and numerical in-plane displacement fields for determining the J-integral on a PMMA cracked specimen

    NASA Astrophysics Data System (ADS)

    Hedan, S.; Valle, V.; Cottron, M.

    2010-06-01

    Contrary to the J-integral values calculated with the 2D numerical model, the J-integrals [1] calculated in the 3D numerical and 3D experimental cases are not very close to the J-integral formulation used in the literature. This points to a structural effect that makes three-dimensional effects around the crack tip visible. The aim of this paper is to determine the zone where the J-integral formulation of the literature is sufficient to estimate the energy release rate (G) for the 3D cracked structure. For that, a numerical model based on the finite element method and an experimental setup are used. A grid method is adapted to experimentally determine the in-plane displacement fields around a crack tip in a Single-Edge-Notch (SEN) tensile polymer (PMMA) specimen. This indirect method, which combines the experimental in-plane displacement fields with two theoretical formulations, allows the experimental J-integral on the free surface to be determined and the results obtained by the 3D numerical simulations to be confirmed.

  17. A uniformly accurate multiscale time integrator spectral method for the Klein-Gordon-Zakharov system in the high-plasma-frequency limit regime

    NASA Astrophysics Data System (ADS)

    Bao, Weizhu; Zhao, Xiaofei

    2016-12-01

    A multiscale time integrator sine pseudospectral (MTI-SP) method is presented for discretizing the Klein-Gordon-Zakharov (KGZ) system with a dimensionless parameter 0 < ε ≤ 1, which is inversely proportional to the plasma frequency. In the high-plasma-frequency limit regime, i.e. 0 < ε ≪ 1, the solution of the KGZ system propagates waves with amplitude at O(1) and wavelength at O(ε^2) in time and O(1) in space, which causes significant numerical burdens due to the high oscillation in time. The main idea of the numerical method is to carry out a multiscale decomposition by frequency (MDF) of the electric field component of the solution at each time step and then apply the sine pseudospectral discretization for spatial derivatives, followed by using the exponential wave integrator in phase space for integrating the MDF and the equation of the ion density component. The method is explicit and easy to implement. Extensive numerical results show that the MTI-SP method converges uniformly and optimally in space with exponential convergence rate if the solution is smooth, and uniformly in time with linear convergence rate at O(τ) for ε ∈ (0, 1] with τ the time step size, and optimally with quadratic convergence rate at O(τ^2) in the regime when either ε = O(1) or 0 < ε ≤ τ. Thus the meshing strategy requirement (or ε-scalability) of the MTI-SP for the KGZ system in the high-plasma-frequency limit regime is τ = O(1) and h = O(1) for 0 < ε ≪ 1, which is significantly better than classical methods in the literature. Finally, we apply the MTI-SP method to study the convergence rates of the KGZ system to its limiting models in the high-plasma-frequency limit and the interactions of bright solitons of the KGZ system, and to identify certain parameter regimes in which the solution of the KGZ system blows up in one dimension.

  18. Numerical evaluation of multi-loop integrals for arbitrary kinematics with SecDec 2.0

    NASA Astrophysics Data System (ADS)

    Borowka, Sophia; Carter, Jonathon; Heinrich, Gudrun

    2013-02-01

    We present the program SecDec 2.0, which contains various new features. First, it allows the numerical evaluation of multi-loop integrals with no restriction on the kinematics. Dimensionally regulated ultraviolet and infrared singularities are isolated via sector decomposition, while threshold singularities are handled by a deformation of the integration contour in the complex plane. As an application, we present numerical results for various massive two-loop four-point diagrams. SecDec 2.0 also contains new useful features for the calculation of more general parameter integrals, related for example to phase space integrals. Program summary: Program title: SecDec 2.0. Catalogue identifier: AEIR_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIR_v2_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 156829. No. of bytes in distributed program, including test data, etc.: 2137907. Distribution format: tar.gz. Programming language: Wolfram Mathematica, Perl, Fortran/C++. Computer: from a single PC to a cluster, depending on the problem. Operating system: Unix, Linux. RAM: depending on the complexity of the problem. Classification: 4.4, 5, 11.1. Catalogue identifier of previous version: AEIR_v1_0. Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 1566. Does the new version supersede the previous version?: Yes. Nature of problem: Extraction of ultraviolet and infrared singularities from parametric integrals appearing in higher order perturbative calculations in gauge theories; numerical integration in the presence of integrable singularities (e.g., kinematic thresholds). Solution method: Algebraic extraction of singularities in dimensional regularization using iterated sector decomposition. This leads to a Laurent series in the dimensional regularization

  19. Numerical simulation of particulate flows using a hybrid of finite difference and boundary integral methods

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Amitabh; Kesarkar, Tejas

    2016-10-01

    A combination of finite difference (FD) and boundary integral (BI) methods is used to formulate an efficient solver for simulating unsteady Stokes flow around particles. The two-dimensional (2D) unsteady Stokes equation is solved on a Cartesian grid using a second-order FD method, while the 2D steady Stokes equation is solved near the particle using the BI method. The two methods are coupled within the viscous boundary layer, a few FD grid cells away from the particle, where solutions from both FD and BI methods are valid. We demonstrate that this hybrid method can be used to accurately solve for the flow around particles with irregular shapes, even though the radius of curvature of the particle surface is not resolved by the FD grid. For dilute particle concentrations, we construct a virtual envelope around each particle and solve the BI problem for the flow field located between the envelope and the particle. The BI solver provides the velocity boundary condition to the FD solver at "boundary" nodes located on the FD grid, adjacent to the particles, while the FD solver provides the velocity boundary condition to the BI solver at points located on the envelope. The coupling between the FD method and the BI method is implicit at every time step. This method allows us to formulate an O(N) scheme for dilute suspensions, where N is the number of particles. For semidilute suspensions, where particles may cluster, an envelope formation method has been formulated and implemented, which enables solving the BI problem for each individual particle cluster, allowing efficient simulation of hydrodynamic interaction between particles even when they are in close proximity. The method has been validated against analytical results for flow around a periodic array of cylinders and for the Jeffery orbit of a moving ellipse in shear flow. Simulation of multiple force-free irregularly shaped particles in the presence of shear in a 2D slit flow has been conducted to demonstrate the robustness of

  20. Numerical simulation of particulate flows using a hybrid of finite difference and boundary integral methods.

    PubMed

    Bhattacharya, Amitabh; Kesarkar, Tejas

    2016-10-01

    A combination of finite difference (FD) and boundary integral (BI) methods is used to formulate an efficient solver for simulating unsteady Stokes flow around particles. The two-dimensional (2D) unsteady Stokes equation is solved on a Cartesian grid using a second-order FD method, while the 2D steady Stokes equation is solved near the particle using the BI method. The two methods are coupled within the viscous boundary layer, a few FD grid cells away from the particle, where solutions from both FD and BI methods are valid. We demonstrate that this hybrid method can be used to accurately solve for the flow around particles with irregular shapes, even though the radius of curvature of the particle surface is not resolved by the FD grid. For dilute particle concentrations, we construct a virtual envelope around each particle and solve the BI problem for the flow field located between the envelope and the particle. The BI solver provides the velocity boundary condition to the FD solver at "boundary" nodes located on the FD grid, adjacent to the particles, while the FD solver provides the velocity boundary condition to the BI solver at points located on the envelope. The coupling between the FD method and the BI method is implicit at every time step. This method allows us to formulate an O(N) scheme for dilute suspensions, where N is the number of particles. For semidilute suspensions, where particles may cluster, an envelope formation method has been formulated and implemented, which enables solving the BI problem for each individual particle cluster, allowing efficient simulation of hydrodynamic interaction between particles even when they are in close proximity. The method has been validated against analytical results for flow around a periodic array of cylinders and for the Jeffery orbit of a moving ellipse in shear flow. Simulation of multiple force-free irregularly shaped particles in the presence of shear in a 2D slit flow has been conducted to demonstrate the robustness of

  1. A quartic B-spline based explicit time integration scheme for structural dynamics with controllable numerical dissipation

    NASA Astrophysics Data System (ADS)

    Wen, W. B.; Duan, S. Y.; Yan, J.; Ma, Y. B.; Wei, K.; Fang, D. N.

    2017-03-01

    An explicit time integration scheme based on quartic B-splines is presented for solving linear structural dynamics problems. The scheme belongs to a one-parameter family of schemes in which a free algorithmic parameter controls stability, accuracy and numerical dispersion. The proposed scheme possesses at least second-order accuracy and at most third-order accuracy. A 2D wave problem is analyzed to demonstrate the effectiveness of the proposed scheme in reducing high-frequency modes and retaining low-frequency modes. Besides general structural dynamics, the proposed scheme can be used effectively for wave propagation problems in which numerical dissipation is needed to reduce spurious oscillations.

  2. SINDA'85/FLUINT - SYSTEMS IMPROVED NUMERICAL DIFFERENCING ANALYZER AND FLUID INTEGRATOR (CONVEX VERSION)

    NASA Technical Reports Server (NTRS)

    Cullimore, B.

    1994-01-01

    SINDA, the Systems Improved Numerical Differencing Analyzer, is a software system for solving lumped parameter representations of physical problems governed by diffusion-type equations. SINDA was originally designed for analyzing thermal systems represented in electrical analog, lumped parameter form, although its use may be extended to include other classes of physical systems which can be modeled in this form. As a thermal analyzer, SINDA can handle such interrelated phenomena as sublimation, diffuse radiation within enclosures, transport delay effects, and sensitivity analysis. FLUINT, the FLUid INTegrator, is an advanced one-dimensional fluid analysis program that solves arbitrary fluid flow networks. The working fluids can be single phase vapor, single phase liquid, or two phase. The SINDA'85/FLUINT system permits the mutual influences of thermal and fluid problems to be analyzed. The SINDA system consists of a programming language, a preprocessor, and a subroutine library. The SINDA language is designed for working with lumped parameter representations and finite difference solution techniques. The preprocessor accepts programs written in the SINDA language and converts them into standard FORTRAN. The SINDA library consists of a large number of FORTRAN subroutines that perform a variety of commonly needed actions. The use of these subroutines can greatly reduce the programming effort required to solve many problems. A complete run of a SINDA'85/FLUINT model is a four step process. First, the user's desired model is run through the preprocessor which writes out data files for the processor to read and translates the user's program code. Second, the translated code is compiled. The third step requires linking the user's code with the processor library. Finally, the processor is executed. SINDA'85/FLUINT program features include 20,000 nodes, 100,000 conductors, 100 thermal submodels, and 10 fluid submodels. SINDA'85/FLUINT can also model two phase flow

  3. On testing a subroutine for the numerical integration of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Krogh, F. T.

    1973-01-01

    This paper discusses how to numerically test a subroutine for the solution of ordinary differential equations. Results obtained with a variable order Adams method are given for eleven simple test cases.

  4. Assessing the bio-mitigation effect of integrated multi-trophic aquaculture on marine environment by a numerical approach.

    PubMed

    Zhang, Junbo; Kitazawa, Daisuke

    2016-09-15

    With increasing concern over the aquatic environment in marine culture, integrated multi-trophic aquaculture (IMTA) has received extensive attention in recent years. A three-dimensional numerical ocean model is developed to explore the negative impacts of aquaculture wastes and assess the bio-mitigation effect of IMTA systems on marine environments. Numerical results showed that the concentration of surface phytoplankton could be controlled by planting seaweed (a maximum reduction of 30%), and that the improvement in bottom dissolved oxygen concentration reached a maximum of 35% due to the ingestion of organic wastes by sea cucumbers. Numerical simulations indicate that seaweeds need to be harvested in a timely manner for maximal absorption of nutrients, and that an initial stocking density of sea cucumbers above 3.9 individuals m^-2 is preferred to further eliminate the organic wastes sinking to the sea bottom.

  5. Numerical Treatment of Differential and Integral Equations by the P and H-P Versions of the Finite Element Method

    DTIC Science & Technology

    1992-01-01

    mathematical papers which describe various locking effects and analyze methods (mainly mixed methods) to overcome it. However, the treatment in these...finite element method in various areas, such as the numerical approximation of three-dimensional PDEs and integral equations, the investigation of mixed ... methods for these versions and, most importantly, the uniform approximation of parameter-dependent problems by these versions. By the p version, we

  6. On time discretizations for spectral methods. [numerical integration of Fourier and Chebyshev methods for dynamic partial differential equations

    NASA Technical Reports Server (NTRS)

    Gottlieb, D.; Turkel, E.

    1980-01-01

    New methods are introduced for the time integration of the Fourier and Chebyshev methods of solution for dynamic differential equations. These methods are unconditionally stable, even though no matrix inversions are required. Time steps are chosen by accuracy requirements alone. For the Fourier method both leapfrog and Runge-Kutta methods are considered. For the Chebyshev method only Runge-Kutta schemes are tested. Numerical calculations are presented to verify the analytic results. Applications to the shallow water equations are presented.

  7. Accurate numerical simulation of the far-field tsunami caused by the 2011 Tohoku earthquake, including the effects of Boussinesq dispersion, seawater density stratification, elastic loading, and gravitational potential change

    NASA Astrophysics Data System (ADS)

    Baba, Toshitaka; Allgeyer, Sebastien; Hossen, Jakir; Cummins, Phil R.; Tsushima, Hiroaki; Imai, Kentaro; Yamashita, Kei; Kato, Toshihiro

    2017-03-01

    In this study, we considered the accurate calculation of far-field tsunami waveforms by using the shallow water equations and accounting for the effects of Boussinesq dispersion, seawater density stratification, elastic loading, and gravitational potential change in a finite difference scheme. By comparing numerical simulations that included and excluded each of these effects with the observed waveforms of the 2011 Tohoku tsunami, we found that all of these effects are significant and resolvable in the far field by the current generation of deep ocean-bottom pressure gauges. Our calculations using previously published, high-resolution models of the 2011 Tohoku tsunami source exhibited excellent agreement with the observed waveforms to a degree that has previously been possible only with near-field or regional observations. We suggest that the ability to model far-field tsunamis with high accuracy has important implications for tsunami source and hazard studies.

  8. An integrated approach for non-periodic dynamic response prediction of complex structures: Numerical and experimental analysis

    NASA Astrophysics Data System (ADS)

    Rahneshin, Vahid; Chierichetti, Maria

    2016-09-01

    In this paper, a combined numerical and experimental method, called Extended Load Confluence Algorithm, is presented to accurately predict the dynamic response of non-periodic structures when little or no information about the applied loads is available. This approach, which falls into the category of Shape Sensing methods, inputs limited experimental information acquired from sensors to a mapping algorithm that predicts the response at unmeasured locations. The proposed algorithm consists of three major cores: an experimental core for data acquisition, a numerical core based on Finite Element Method for modeling the structure, and a mapping algorithm that improves the numerical model based on a modal approach in the frequency domain. The robustness and precision of the proposed algorithm are verified through numerical and experimental examples. The results of this paper demonstrate that without a precise knowledge of the loads acting on the structure, the dynamic behavior of the system can be predicted in an effective and precise manner after just a few iterations.

  9. Theory of axially symmetric cusped focusing: numerical evaluation of a Bessoid integral by an adaptive contour algorithm

    NASA Astrophysics Data System (ADS)

    Kirk, N. P.; Connor, J. N. L.; Curtis, P. R.; Hobbs, C. A.

    2000-07-01

    A numerical procedure for the evaluation of the Bessoid canonical integral J({x,y}) is described. J({x,y}) is defined, for x and y real, by J(x, y) = ∫_0^∞ t J_0(y t) exp[i(t^4 + x t^2)] dt, where J_0(·) is a Bessel function of order zero. J({x,y}) plays an important role in the description of cusped focusing when there is axial symmetry present. It arises in the diffraction theory of aberrations, in the design of optical instruments and of highly directional microwave antennas and in the theory of image formation for high-resolution electron microscopes. The numerical procedure replaces the integration path along the real t axis with a more convenient contour in the complex t plane, thereby rendering the oscillatory integrand more amenable to numerical quadrature. The computations use a modified version of the CUSPINT computer code (Kirk et al 2000 Comput. Phys. Commun. at press), which evaluates the cuspoid canonical integrals and their first-order partial derivatives. Plots and tables of J({x,y}) and its zeros are presented for the grid -8.0≤x≤8.0 and -8.0≤y≤8.0. Some useful series expansions of J({x,y}) are also derived.
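
    The contour idea can be illustrated with a minimal, non-adaptive sketch: rotating the path to the ray arg t = π/8 turns exp(i t^4) into the rapidly decaying factor exp(-s^4), after which ordinary quadrature applies. The definition used below follows the cuspoid convention written above; the fixed rotation angle, the truncation point and the use of generic SciPy routines are illustrative assumptions and do not reproduce the adaptive contour algorithm of the paper.

```python
import numpy as np
from scipy.special import jv
from scipy.integrate import quad

def bessoid(x, y, theta=np.pi / 8, s_max=8.0):
    """Bessoid integral J(x, y) = int_0^inf t J0(y t) exp[i(t^4 + x t^2)] dt,
    evaluated on the rotated path t = s * exp(i * theta) so that the integrand
    decays like exp(-s^4). Fixed contour, real-variable quadrature."""
    w = np.exp(1j * theta)

    def f(s):
        t = s * w
        return t * jv(0, y * t) * np.exp(1j * (t**4 + x * t**2)) * w   # dt = w ds

    re, _ = quad(lambda s: f(s).real, 0.0, s_max, limit=200)
    im, _ = quad(lambda s: f(s).imag, 0.0, s_max, limit=200)
    return re + 1j * im
```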

  10. Integration Preferences of Wildtype AAV-2 for Consensus Rep-Binding Sites at Numerous Loci in the Human Genome

    PubMed Central

    Hüser, Daniela; Gogol-Döring, Andreas; Lutter, Timo; Weger, Stefan; Winter, Kerstin; Hammer, Eva-Maria; Cathomen, Toni; Reinert, Knut; Heilbronn, Regine

    2010-01-01

    Adeno-associated virus type 2 (AAV) is known to establish latency by preferential integration in human chromosome 19q13.42. The AAV non-structural protein Rep appears to target a site called AAVS1 by simultaneously binding to Rep-binding sites (RBS) present on the AAV genome and within AAVS1. In the absence of Rep, as is the case with AAV vectors, chromosomal integration is rare and random. For a genome-wide survey of wildtype AAV integration, a linker-selection-mediated (LSM)-PCR strategy was designed to retrieve AAV-chromosomal junctions. DNA sequence determination revealed wildtype AAV integration sites scattered over the entire human genome. The bioinformatic analysis of these integration sites compared to those of rep-deficient AAV vectors revealed a highly significant overrepresentation of integration events near consensus RBS. Integration hotspots included AAVS1 with 10% of total events. Novel hotspots near consensus RBS were identified on chromosome 5p13.3, denoted AAVS2, and on chromosome 3p24.3, denoted AAVS3. AAVS2 displayed seven independent junctions clustered within only 14 bp of a consensus RBS which proved to bind Rep in vitro, similarly to the RBS in AAVS3. Expression of Rep in the presence of rep-deficient AAV vectors shifted targeting preferences from random integration back to the neighbourhood of consensus RBS at hotspots and numerous additional sites in the human genome. In summary, targeted AAV integration is not as specific for AAVS1 as previously assumed. Rather, Rep targets AAV to integrate into open chromatin regions within reach of various consensus RBS homologues in the human genome. PMID:20628575

  11. Multiple piezo-patch energy harvesters integrated to a thin plate with AC-DC conversion: analytical modeling and numerical validation

    NASA Astrophysics Data System (ADS)

    Aghakhani, Amirreza; Basdogan, Ipek; Erturk, Alper

    2016-04-01

    Plate-like components are widely used in numerous automotive, marine, and aerospace applications where they can be employed as host structures for vibration based energy harvesting. Piezoelectric patch harvesters can be easily attached to these structures to convert vibrational energy into electrical energy. Power output investigations of these harvesters require accurate models for energy harvesting performance evaluation and optimization. Equivalent circuit modeling of cantilever-based vibration energy harvesters for estimation of electrical response has been proposed in recent years. However, equivalent circuit formulation and analytical modeling of multiple piezo-patch energy harvesters integrated to thin plates, including nonlinear circuits, have not been studied. In this study, an equivalent circuit model of multiple parallel piezoelectric patch harvesters together with a resistive load is built in the electronic circuit simulation software SPICE, and voltage frequency response functions (FRFs) are validated using the analytical distributed-parameter model. Analytical formulation of the piezoelectric patches in parallel configuration for the DC voltage output is derived while the patches are connected to a standard AC-DC circuit. The analytic model is based on the equivalent load impedance approach for piezoelectric capacitance and AC-DC circuit elements. The analytic results are validated numerically via SPICE simulations. Finally, DC power outputs of the harvesters are computed and compared with the peak power amplitudes in the AC output case.

  12. Efficient and accurate computation of the incomplete Airy functions

    NASA Technical Reports Server (NTRS)

    Constantinides, E. D.; Marhefka, R. J.

    1993-01-01

    The incomplete Airy integrals serve as canonical functions for the uniform ray optical solutions to several high-frequency scattering and diffraction problems that involve a class of integrals characterized by two stationary points that are arbitrarily close to one another or to an integration endpoint. Integrals with such analytical properties describe transition region phenomena associated with composite shadow boundaries. An efficient and accurate method for computing the incomplete Airy functions would make the solutions to such problems useful for engineering purposes. In this paper a convergent series solution for the incomplete Airy functions is derived. Asymptotic expansions involving several terms are also developed and serve as large argument approximations. The combination of the series solution with the asymptotic formulae provides for an efficient and accurate computation of the incomplete Airy functions. Validation of accuracy is accomplished using direct numerical integration data.

  13. Integration of numerical modeling and observations for the Gulf of Naples monitoring network

    NASA Astrophysics Data System (ADS)

    Iermano, I.; Uttieri, M.; Zambianchi, E.; Buonocore, B.; Cianelli, D.; Falco, P.; Zambardino, G.

    2012-04-01

    Lethal effects of mineral oils on fragile marine and coastal ecosystems are now well known. Risks and damages caused by a maritime accident can be reduced with the help of better forecasts and efficient monitoring systems. The MED project TOSCA (Tracking Oil Spills and Coastal Awareness Network), which gathers 13 partners from 4 Mediterranean countries, has been designed to help create a better response system to maritime accidents. Through the construction of an observational network, based on state of the art technology (HF radars and drifters), TOSCA provides real-time observations and forecasts of the Mediterranean coastal marine environmental conditions. The system is installed and assessed in five test sites on the coastal areas of oil spill outlets (Eastern Mediterranean) and on high traffic areas (Western Mediterranean). The Gulf of Naples, a small semi-enclosed basin opening to the Tyrrhenian Sea, is one of the five test sites. It is of particular interest from both the environmental point of view, due to peculiar ecosystem properties in the area, and because it sustains important touristic and commercial activities. Currently the Gulf of Naples monitoring network is represented by five automatic weather stations distributed along the coasts of the Gulf, one weather radar, two tide gauges, one waverider buoy, and moored physical, chemical and bio-optical instrumentation. In addition, a CODAR-SeaSonde HF coastal radar system composed of three antennas is located in Portici, Massa Lubrense and Castellammare. The system provides hourly data of surface currents over the entire Gulf with a 1km spatial resolution. A numerical modeling implementation based on the Regional Ocean Modeling System (ROMS) is currently being integrated in the Gulf of Naples monitoring network. ROMS is a 3-D, free-surface, hydrostatic, primitive equation, finite difference ocean model. In our configuration, the model has high horizontal resolution (250m), and 30 sigma levels in the vertical. Thanks

  14. Construction of an extended invariant for an arbitrary ordinary differential equation with its development in a numerical integration algorithm.

    PubMed

    Fukuda, Ikuo; Nakamura, Haruki

    2006-02-01

    For an arbitrary ordinary differential equation (ODE), a scheme for constructing an extended ODE endowed with a time-invariant function is here proposed. This scheme enables us to examine the accuracy of the numerical integration of an ODE that may itself have had no invariant. These quantities are constructed by referring to the Nosé-Hoover molecular dynamics equation and its related conserved quantity. By applying this procedure to several molecular dynamics equations, the conventional conserved quantity individually defined in each dynamics can be reproduced in a uniform, generalized way; our concept allows a transparent outlook underlying these quantities and ideas. Developing the technique, for a certain class of ODEs we construct a numerical integrator that is not only explicit and symmetric, but preserves a unit Jacobian for a suitably defined extended ODE, which also provides an invariant. Our concept is thus to simply build a divergence-free extended ODE whose solution is just a lift-up of the original ODE, and to constitute an efficient integrator that preserves the phase-space volume on the extended system. We present precise discussions about the general mathematical properties of the integrator and provide specific conditions that should be incorporated for practical applications.

  15. cuSwift --- a suite of numerical integration methods for modelling planetary systems implemented in C/CUDA

    NASA Astrophysics Data System (ADS)

    Hellmich, S.; Mottola, S.; Hahn, G.; Kührt, E.; Hlawitschka, M.

    2014-07-01

    Simulations of dynamical processes in planetary systems represent an important tool for studying the orbital evolution of the systems [1--3]. Using modern numerical integration methods, it is possible to model systems containing many thousands of objects over timescales of several hundred million years. However, in general, supercomputers are needed to get reasonable simulation results in acceptable execution times [3]. To exploit the ever-growing computation power of Graphics Processing Units (GPUs) in modern desktop computers, we implemented cuSwift, a library of numerical integration methods for studying long-term dynamical processes in planetary systems. cuSwift can be seen as a re-implementation of the famous SWIFT integrator package written by Hal Levison and Martin Duncan. cuSwift is written in C/CUDA and contains different integration methods for various purposes. So far, we have implemented three algorithms: a 15th-order Radau integrator [4], the Wisdom-Holman Mapping (WHM) integrator [5], and the Regularized Mixed Variable Symplectic (RMVS) Method [6]. These algorithms treat only the planets as mutually gravitationally interacting bodies whereas asteroids and comets (or other minor bodies of interest) are treated as massless test particles which are gravitationally influenced by the massive bodies but do not affect each other or the massive bodies. The main focus of this work is on the symplectic methods (WHM and RMVS) which use a larger time step and thus are capable of integrating many particles over a large time span. As an additional feature, we implemented the non-gravitational Yarkovsky effect as described by M. Brož [7]. With cuSwift, we show that the use of modern GPUs makes it possible to speed up these methods by more than one order of magnitude compared to the single-core CPU implementation, thereby enabling modest workstation computers to perform long-term dynamical simulations. We use these methods to study the influence of the Yarkovsky
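
    The massive/massless split described above is easy to sketch: the planets are advanced with their mutual gravity (by the planetary integrator), while test particles are kicked by the planets only and never by each other, so each particle can be propagated independently, which is what makes the problem GPU-friendly. The plain leapfrog step below stands in for the symplectic WHM/RMVS mappings of SWIFT and cuSwift and is purely illustrative; array shapes, units (G = 1) and function names are assumptions.

```python
import numpy as np

def step_test_particles(r_t, v_t, planet_pos, planet_mass, dt, G=1.0):
    """One kick-drift-kick step for massless test particles (r_t, v_t) in the
    gravity field of the massive bodies. planet_pos is a pair of arrays with
    the planet positions at the start and end of the step, supplied by the
    (separate) planetary integration; test particles do not interact."""
    def accel(pos, planets):
        d = planets[None, :, :] - pos[:, None, :]            # (n_test, n_planet, 3)
        r3 = np.sum(d * d, axis=2, keepdims=True) ** 1.5
        return G * np.sum(planet_mass[None, :, None] * d / r3, axis=1)

    v_half = v_t + 0.5 * dt * accel(r_t, planet_pos[0])       # kick (start-of-step planets)
    r_new = r_t + dt * v_half                                 # drift
    v_new = v_half + 0.5 * dt * accel(r_new, planet_pos[1])   # kick (end-of-step planets)
    return r_new, v_new
```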

  16. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    SciTech Connect

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; Beerli, Peter; Zeng, Xiankui; Lu, Dan; Tao, Yuezan

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task, when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from prior parameter space (as in arithmetic mean evaluation) or posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that is recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of their accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improves predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
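
    The identity behind the method is log m(y) = ∫_0^1 E_β[log L(y|θ)] dβ, where the expectation at each β is taken over the power posterior p(θ|y, β) ∝ L(y|θ)^β p(θ). A minimal sketch of the resulting estimator is given below; the power-posterior sampler (typically one MCMC chain per β) and the β ladder must be supplied by the user, and all names are illustrative.

```python
import numpy as np

def log_marginal_thermodynamic(log_like, sample_power_posterior, betas, n_samples=2000):
    """Thermodynamic-integration (path sampling) estimate of log marginal likelihood.

    log_like(theta)                 -> log likelihood of the data at theta
    sample_power_posterior(beta, n) -> n samples from the power posterior
                                       p(theta | y, beta), user supplied
    betas                           -> increasing power coefficients in [0, 1]
    """
    expectations = np.array([
        np.mean([log_like(theta) for theta in sample_power_posterior(beta, n_samples)])
        for beta in betas
    ])
    # trapezoidal rule over the beta ladder
    return float(np.sum(0.5 * (expectations[1:] + expectations[:-1]) * np.diff(betas)))
```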

  17. Mixing-to-eruption timescales: an integrated model combining numerical simulations and high-temperature experiments with natural melts

    NASA Astrophysics Data System (ADS)

    Montagna, Chiara; Perugini, Diego; De Campos, Christina; Longo, Antonella; Dingwell, Donald Bruce; Papale, Paolo

    2015-04-01

    Arrival of magma from depth into shallow reservoirs and associated mixing processes have been documented as possible triggers of explosive eruptions. Quantifying the timing from the beginning of mixing to eruption is of fundamental importance in volcanology in order to place constraints on the possible onset of a new eruption. Here we integrate numerical simulations and high-temperature experiments performed with natural melts with the aim of identifying the mixing-to-eruption timescales. We performed two-dimensional numerical simulations of the arrival of gas-rich magmas into shallow reservoirs. We solve the fluid dynamics for the two interacting magmas, evaluating the space-time evolution of the physical properties of the mixture. Convection and mingling develop quickly in the chamber and feeding conduit/dyke. Over time scales of hours, the magmas in the reservoir appear to have mingled throughout, and convective patterns become harder to identify. High-temperature magma mixing experiments have been performed using a centrifuge and using basaltic and phonolitic melts from Campi Flegrei (Italy) as initial end-members. Concentration Variance Decay (CVD), an inevitable consequence of magma mixing, is exponential with time. The rate of CVD is a powerful new geochronometer for the time from mixing to eruption/quenching. The mingling-to-eruption times of three explosive volcanic eruptions from Campi Flegrei (Italy) yield durations on the order of tens of minutes. These results are in perfect agreement with the numerical simulations, which suggest a maximum mixing time of a few hours to obtain a hybrid mixture. We show that integration of numerical simulation and high-temperature experiments can provide unprecedented results about mixing processes in volcanic systems. The combined application of numerical simulations and the CVD geochronometer to the eruptive products of active volcanoes could be decisive for the preparation of hazard mitigation during volcanic unrest.
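
    Since concentration variance is stated above to decay exponentially during mixing, a back-of-the-envelope version of the CVD geochronometer is a straight-line fit of log-variance against experimental time, inverted at the variance measured in the erupted products. This is only a schematic of the idea; the published calibration and its uncertainty treatment are more involved, and the variable names are assumptions.

```python
import numpy as np

def mixing_time_from_cvd(times, variances, variance_at_eruption):
    """Estimate the mixing-to-eruption time from Concentration Variance Decay,
    assuming sigma2(t) = sigma2(0) * exp(-R * t). R and sigma2(0) come from a
    least-squares fit of log(variance) against the experimental times."""
    slope, intercept = np.polyfit(times, np.log(variances), 1)
    rate = -slope                                   # decay rate R
    return (intercept - np.log(variance_at_eruption)) / rate
```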

  18. Numerical solution of linear and nonlinear Fredholm integral equations by using weighted mean-value theorem.

    PubMed

    Altürk, Ahmet

    2016-01-01

    Mean value theorems for both derivatives and integrals are very useful tools in mathematics. They can be used to obtain very important inequalities and to prove basic theorems of mathematical analysis. In this article, a semi-analytical method that is based on weighted mean-value theorem for obtaining solutions for a wide class of Fredholm integral equations of the second kind is introduced. Illustrative examples are provided to show the significant advantage of the proposed method over some existing techniques.
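
    For readers unfamiliar with the setting, a Fredholm equation of the second kind has the form u(x) = f(x) + λ ∫_a^b K(x, t) u(t) dt. The sketch below solves it with the standard Nyström (quadrature plus collocation) approach rather than the weighted mean-value-theorem method of the paper, purely to make the problem concrete; the grid size and the trapezoidal weights are arbitrary choices, and the kernel is assumed to accept array arguments.

```python
import numpy as np

def fredholm_nystrom(f, kernel, a, b, lam=1.0, n=200):
    """Nystrom solution of u(x) = f(x) + lam * int_a^b K(x, t) u(t) dt on a
    uniform grid with trapezoidal weights; f and kernel must be vectorized."""
    x, h = np.linspace(a, b, n, retstep=True)
    w = np.full(n, h)
    w[0] = w[-1] = h / 2.0                     # trapezoidal quadrature weights
    K = kernel(x[:, None], x[None, :])         # K(x_i, t_j)
    A = np.eye(n) - lam * K * w[None, :]
    return x, np.linalg.solve(A, f(x))
```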

  19. Application of a numerical differencing analyzer computer program to a Modular Integrated Utility System

    NASA Technical Reports Server (NTRS)

    Brandli, A. E.; Donham, C. F.

    1974-01-01

    This paper describes the application of a numerical differencing analyzer computer program to the thermal analysis of a MIUS model. The MIUS model which was evaluated is one which would be required to support a 648-unit Garden Apartment Complex. This computer program was capable of predicting the thermal performance of this MIUS from the impressed electrical, heating, and cooling loads.

  20. A modified seventh order two step hybrid method for the numerical integration of oscillatory problems

    NASA Astrophysics Data System (ADS)

    Kalogiratou, Z.; Monovasilis, Th.; Simos, T. E.

    2016-12-01

    In this work we consider trigonometrically fitted two step hybrid methods for the numerical solution of second-order initial value problems. We follow the approach of Simos and derive trigonometrically fitting conditions for methods with five stages. As an example we modify a seventh order method and apply to three well known oscillatory problems.

  1. A Numerical Methods Course Based on B-Learning: Integrated Learning Design and Follow Up

    ERIC Educational Resources Information Center

    Cepeda, Francisco Javier Delgado

    2013-01-01

    Information and communication technologies advance continuously, providing a real support for learning processes. Learning technologies address areas which previously have corresponded to face-to-face learning, while mobile resources are having a growing impact on education. Numerical Methods is a discipline and profession based on technology. In…

  2. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task, when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from prior parameter space (as in arithmetic mean evaluation) or posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that is recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of their accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improves predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.

  3. Numerical integration of the master equation in some models of stochastic epidemiology.

    PubMed

    Jenkinson, Garrett; Goutsias, John

    2012-01-01

    The processes by which disease spreads in a population of individuals are inherently stochastic. The master equation has proven to be a useful tool for modeling such processes. Unfortunately, solving the master equation analytically is possible only in limited cases (e.g., when the model is linear), and thus numerical procedures or approximation methods must be employed. Available approximation methods, such as the system size expansion method of van Kampen, may fail to provide reliable solutions, whereas current numerical approaches can induce appreciable computational cost. In this paper, we propose a new numerical technique for solving the master equation. Our method is based on a more informative stochastic process than the population process commonly used in the literature. By exploiting the structure of the master equation governing this process, we develop a novel technique for calculating the exact solution of the master equation--up to a desired precision--in certain models of stochastic epidemiology. We demonstrate the potential of our method by solving the master equation associated with the stochastic SIR epidemic model. MATLAB software that implements the methods discussed in this paper is freely available as Supporting Information S1.
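
    As a point of reference, for a small closed population the SIR master equation can be integrated directly by enumerating every state (S, I) and solving the linear system dp/dt = Q^T p for the probability vector p. The brute-force sketch below shows that baseline; it is not the authors' more efficient construction, and the frequency-dependent transmission rate β S I / N is an assumed convention.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir_master_equation(N, beta, gamma, S0, I0, t_end):
    """Direct integration of the SIR master equation for a population of size N
    by enumerating all states (S, I); returns the state list and the final
    probability distribution. Scales poorly with N (O(N^2) states)."""
    states = [(S, I) for S in range(N + 1) for I in range(N + 1 - S)]
    index = {s: k for k, s in enumerate(states)}
    Q = np.zeros((len(states), len(states)))        # generator: Q[i, j] = rate i -> j
    for (S, I), k in index.items():
        if S > 0 and I > 0:                         # infection: (S, I) -> (S-1, I+1)
            rate = beta * S * I / N
            Q[k, index[(S - 1, I + 1)]] += rate
            Q[k, k] -= rate
        if I > 0:                                   # recovery: (S, I) -> (S, I-1)
            Q[k, index[(S, I - 1)]] += gamma * I
            Q[k, k] -= gamma * I
    p0 = np.zeros(len(states))
    p0[index[(S0, I0)]] = 1.0
    sol = solve_ivp(lambda t, p: p @ Q, (0.0, t_end), p0, method="BDF")
    return states, sol.y[:, -1]
```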

  4. Numerical modelling of qualitative behaviour of solutions to convolution integral equations

    NASA Astrophysics Data System (ADS)

    Ford, Neville J.; Diogo, Teresa; Ford, Judith M.; Lima, Pedro

    2007-08-01

    We consider the qualitative behaviour of solutions to linear integral equations of the form y(t) = f(t) + ∫_0^t k(t - s) y(s) ds (1), where the kernel k is assumed to be either integrable or of exponential type. After a brief review of the well-known Paley-Wiener theory we give conditions that guarantee that exact and approximate solutions of (1) are of a specific exponential type. As an example, we provide an analysis of the qualitative behaviour of both exact and approximate solutions of a singular Volterra equation with infinitely many solutions. We show that the approximations of neighbouring solutions exhibit the correct qualitative behaviour.
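
    A simple way to generate the approximate solutions discussed above is trapezoidal-rule time stepping of the convolution equation; the sketch below assumes the second-kind Volterra form written in the abstract and a uniform grid, and is meant only to show where discrete approximations of (1) come from. The kernel and forcing function names are placeholders.

```python
import numpy as np

def volterra_convolution(f, k, t_end, h=1e-2):
    """Trapezoidal-rule solution of y(t) = f(t) + int_0^t k(t - s) y(s) ds
    on the uniform grid t_i = i * h; f and k are scalar functions."""
    t = np.arange(0.0, t_end + h, h)
    y = np.empty(len(t))
    y[0] = f(t[0])
    for i in range(1, len(t)):
        kern = np.array([k(t[i] - t[j]) for j in range(i + 1)])
        w = np.full(i + 1, h)
        w[0] = w[-1] = h / 2.0
        # y_i appears on both sides of the equation; solve the scalar relation
        known = np.dot(w[:-1] * kern[:-1], y[:i])
        y[i] = (f(t[i]) + known) / (1.0 - w[-1] * kern[-1])
    return t, y
```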

  5. Diffraction in a stratified region of a high numerical aperture Fresnel zone plate: a simple and rigorous integral representation.

    PubMed

    Zhang, Yaoju; Huang, Xiangjun; Zhang, Dong; An, Hongchang; Dai, Yuxing

    2015-03-23

    An algorithm, based on the vector angular spectrum method, for calculating the field distribution of a high numerical aperture Fresnel zone plate (FZP) in stratified media is presented. The diffraction problem of the FZP is solved for the case of a multilayer film with planar interfaces perpendicular to the optical axis. The solution is obtained in a rigorous mathematical manner and satisfies the homogeneous wave equations. The electric field strength vector of the transmitted and reflected fields in the multilayer media is obtained for any polarized beam normally incident onto a binary phase circular FZP. For radially, azimuthally and linearly polarized beams, the electric field in the focal region can be simplified to a double or single integral, which can be readily used for numerical computation.
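
    The backbone of such calculations is the angular spectrum idea: Fourier-decompose the field just behind the zone plate into plane waves and advance each one by exp(i k_z z) across a homogeneous layer. A scalar, single-layer sketch is given below; the paper's treatment is vectorial and additionally applies Fresnel transmission and reflection at every interface of the stratified medium, which is omitted here, and the square sampling grid is an assumption.

```python
import numpy as np

def angular_spectrum_propagate(field, dx, wavelength, dz):
    """Propagate a sampled square transverse field over a distance dz in a
    homogeneous medium (refractive index folded into the wavelength) by the
    scalar angular spectrum method; evanescent components decay automatically."""
    n = field.shape[0]
    k = 2.0 * np.pi / wavelength
    kxy = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(kxy, kxy)
    kz = np.sqrt((k**2 - kx**2 - ky**2).astype(complex))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))
```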

  6. A stochastic regulator for integrated communication and control systems. I - Formulation of control law. II - Numerical analysis and simulation

    NASA Technical Reports Server (NTRS)

    Liou, Luen-Woei; Ray, Asok

    1991-01-01

    A state feedback control law for integrated communication and control systems (ICCS) is formulated by using the dynamic programming and optimality principle on a finite-time horizon. The control law is derived on the basis of a stochastic model of the plant which is augmented in state space to allow for the effects of randomly varying delays in the feedback loop. A numerical procedure for synthesizing the control parameters is then presented, and the performance of the control law is evaluated by simulating the flight dynamics model of an advanced aircraft. Finally, recommendations for future work are made.

  7. Study of electromagnetic scattering from randomly rough ocean-like surfaces using integral-equation-based numerical technique

    NASA Astrophysics Data System (ADS)

    Toporkov, Jakov V.

    A numerical study of electromagnetic scattering by one-dimensional perfectly conducting randomly rough surfaces with an ocean-like Pierson-Moskowitz spectrum is presented. Simulations are based on solving the Magnetic Field Integral Equation (MFIE) using the numerical technique called the Method of Ordered Multiple Interactions (MOMI). The study focuses on the application and validation of this integral equation-based technique to scattering at low grazing angles and considers other aspects of numerical simulations crucial to obtaining correct results in the demanding low grazing angle regime. It was found that when the MFIE propagator matrix is used with zeros on its diagonal (as has often been the practice) the results appear to show an unexpected sensitivity to the sampling interval. This sensitivity is especially pronounced in the case of horizontal polarization and at low grazing angles. We show---both numerically and analytically---that the problem lies not with the particular numerical technique used (MOMI) but rather with how the MFIE is discretized. It is demonstrated that the inclusion of so-called "curvature terms" (terms that arise from a correct discretization procedure and are proportional to the second surface derivative) in the diagonal of the propagator matrix eliminates the problem completely. A criterion for the choice of the sampling interval used in discretizing the MFIE based on both electromagnetic wavelength and the surface spectral cutoff is established. The influence of the surface spectral cutoff value on the results of scattering simulations is investigated and a recommendation for the choice of this spectral cutoff for numerical simulation purposes is developed. Also studied is the applicability of the tapered incident field at low grazing incidence angles. It is found that when a Gaussian-like taper with fixed beam waist is used there is a characteristic pattern (anomalous jump) in the calculated average backscattered cross section at

  8. Integrating Laboratory and Numerical Decompression Experiments to Investigate Fluid Dynamics into the Conduit

    NASA Astrophysics Data System (ADS)

    Spina, Laura; Colucci, Simone; De'Michieli Vitturi, Mattia; Scheu, Bettina; Dingwell, Donald Bruce

    2015-04-01

    The study of the fluid dynamics of magmatic melts in the conduit, where direct observations are unattainable, has proven to be strongly enhanced by multiparametric approaches. Among them, the coupling of numerical modeling with laboratory experiments represents a fundamental tool of investigation. Indeed, the experimental approach provides invaluable data to validate complex multiphase codes. We performed decompression experiments in a shock tube system, using pure silicone oil as a proxy for the basaltic melt. A range of viscosities between 1 and 1000 Pa s was investigated. The samples were saturated with argon for 72 h at 10 MPa before being slowly decompressed to atmospheric pressure. The evolution of the analogue magmatic system was monitored through a high-speed camera and pressure sensors located in the analogue conduit. The experimental decompressions were then reproduced numerically using a multiphase solver based on the OpenFOAM framework. The original compressible multiphase OpenFOAM solver twoPhaseEulerFoam was extended to take into account the multicomponent nature of the fluid mixtures (liquid and gas) and the phase transition. According to the experimental conditions, the simulations were run with values of fluid viscosity ranging from 1 to 1000 Pa s. The sensitivity of the model has been tested for different values of the parameters t and D, representing respectively the relaxation time for gas exsolution and the average bubble diameter, required by the Gidaspow drag model. Plausible ranges of values for both parameters are provided by experimental observations, i.e. bubble nucleation time and bubble size distribution at a given pressure. The comparison of video images with the outcomes of the numerical models was performed by tracking the evolution of the gas volume fraction through time. We were therefore able to calibrate the model parameters against laboratory results and to track the fluid dynamics of the experimental decompression.

  9. Numerical integration of nearly-Hamiltonian systems. [Van der Pol oscillator and perturbed Keplerian motion

    NASA Technical Reports Server (NTRS)

    Bond, V. R.

    1978-01-01

    The reported investigation is concerned with the solution of systems of differential equations which are derived from a Hamiltonian function in the extended phase space. The problem selected involves a one-dimensional perturbed harmonic oscillator. The van der Pol equation considered has an exact asymptotic value for its amplitude. Comparisons are made between a numerical solution and a known analytical solution. In addition to the van der Pol problem, known solutions regarding the restricted problem of three bodies are used as examples for perturbed Keplerian motion. The extended phase space Hamiltonian discussed by Stiefel and Scheifele (1971) is considered. A description is presented of two canonical formulations of the perturbed harmonic oscillator.
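
    As a small, self-contained illustration of the kind of comparison described above (numerical solution versus known asymptotic behaviour), the sketch below integrates the unforced van der Pol equation x'' - mu*(1 - x^2)*x' + x = 0 and checks that the computed limit-cycle amplitude settles near the well-known asymptotic value of about 2. The damping parameter, step control and integration span are illustrative choices, not values taken from the record.

      # Minimal sketch: integrate the van der Pol oscillator and check that the
      # limit-cycle amplitude settles near its asymptotic value of ~2.
      import numpy as np
      from scipy.integrate import solve_ivp

      MU = 1.0  # illustrative damping parameter (not from the record)

      def van_der_pol(t, y):
          x, v = y
          return [v, MU * (1.0 - x**2) * v - x]

      sol = solve_ivp(van_der_pol, (0.0, 200.0), [0.1, 0.0], max_step=0.01)

      # Amplitude estimated from the last part of the trajectory (transient discarded).
      tail = sol.y[0][sol.t > 150.0]
      print("estimated limit-cycle amplitude:", tail.max())   # ~2.0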

  10. Model coupling methodology for thermo-hydro-mechanical-chemical numerical simulations in integrated assessment of long-term site behaviour

    NASA Astrophysics Data System (ADS)

    Kempka, Thomas; De Lucia, Marco; Kühn, Michael

    2015-04-01

    The integrated assessment of long-term site behaviour taking into account a high spatial resolution at reservoir scale requires a sophisticated methodology to represent coupled thermal, hydraulic, mechanical and chemical processes of relevance. Our coupling methodology considers the time-dependent occurrence and significance of multi-phase flow processes, mechanical effects and geochemical reactions (Kempka et al., 2014). To this end, a simplified hydro-chemical coupling procedure was developed (Klein et al., 2013) and validated against fully coupled hydro-chemical simulations (De Lucia et al., 2015). The numerical simulation results elaborated for the pilot site Ketzin demonstrate that mechanical reservoir, caprock and fault integrity are maintained during the time of operation, and that after 10,000 years CO2 dissolution is the dominant trapping mechanism, with mineralization on the order of 10 % to 25 % and negligible changes to porosity and permeability. De Lucia, M., Kempka, T., Kühn, M. A coupling alternative to reactive transport simulations for long-term prediction of chemical reactions in heterogeneous CO2 storage systems (2014) Geosci Model Dev Discuss 7:6217-6261, doi:10.5194/gmdd-7-6217-2014. Kempka, T., De Lucia, M., Kühn, M. Geomechanical integrity verification and mineral trapping quantification for the Ketzin CO2 storage pilot site by coupled numerical simulations (2014) Energy Procedia 63:3330-3338, doi:10.1016/j.egypro.2014.11.361. Klein, E., De Lucia, M., Kempka, T., Kühn, M. Evaluation of long-term mineral trapping at the Ketzin pilot site for CO2 storage: an integrative approach using geochemical modelling and reservoir simulation (2013) Int J Greenh Gas Con 19:720-730, doi:10.1016/j.ijggc.2013.05.014.

  11. Numerical estimation of real and apparent integral neutron parameters used in nuclear borehole geophysics.

    PubMed

    Dworak, D; Drabina, A; Woźnicka, U

    2006-07-01

    The semi-empirical method of neutron logging tool calibration developed by Prof. J.A. Czubek uses the real and so-called apparent integral neutron parameters of geological formations. To this end, Czubek proposed a few separate calculation methods, commonly based on analytical solutions of the neutron transport problem. A new calculation method for the neutron integral parameters is proposed. Quantities like the slowing-down length, diffusion and migration lengths, probability of avoiding absorption during slowing down, and thermal neutron absorption cross section can be easily approximated using Monte Carlo simulations. A comparison with the results of the analytical method developed by Czubek has been performed for many cases and the observed differences have been explained.
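
    The Monte Carlo estimation of such integral parameters can be illustrated, in a vastly simplified form, by the toy sketch below: it estimates a diffusion length in an infinite homogeneous medium with isotropic scattering from the mean-square crow-flight distance between birth and absorption (L^2 = <r^2>/6) and compares it with the diffusion-theory value 1/sqrt(3*Sigma_t*Sigma_a). The cross sections are made-up numbers and the model is far simpler than the tool-calibration calculations in the record.

      # Toy Monte Carlo estimate of a diffusion length in an infinite homogeneous
      # medium with isotropic scattering (illustrative cross sections, not real data).
      import numpy as np

      rng = np.random.default_rng(0)
      SIGMA_T, SIGMA_A = 1.0, 0.02   # total and absorption macroscopic cross sections [1/cm]
      P_ABSORB = SIGMA_A / SIGMA_T
      N = 20000

      r2 = np.empty(N)
      for i in range(N):
          pos = np.zeros(3)
          while True:
              mu = rng.uniform(-1.0, 1.0)                 # isotropic direction
              phi = rng.uniform(0.0, 2.0 * np.pi)
              s = np.sqrt(1.0 - mu**2)
              direction = np.array([s * np.cos(phi), s * np.sin(phi), mu])
              pos += direction * rng.exponential(1.0 / SIGMA_T)   # free flight
              if rng.random() < P_ABSORB:                 # neutron absorbed here
                  r2[i] = pos @ pos
                  break

      L_mc = np.sqrt(r2.mean() / 6.0)
      L_diff = 1.0 / np.sqrt(3.0 * SIGMA_T * SIGMA_A)
      print(f"Monte Carlo L = {L_mc:.2f} cm, diffusion theory L = {L_diff:.2f} cm")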

  12. On the accuracy and convergence of implicit numerical integration of finite element generated ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Soliman, M. O.

    1978-01-01

    A study of the accuracy and convergence of the linear functional finite element solution to linear parabolic and hyperbolic partial differential equations is presented. A variable-implicit integration procedure is employed for the resultant system of ordinary differential equations. Accuracy and convergence are compared for the consistent and two lumped assembly procedures for the identified initial-value matrix structure. Truncation error estimation is accomplished using Richardson extrapolation.
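
    Richardson extrapolation, the general device named in the record (not the authors' specific implementation), can be illustrated with the short sketch below: a test ODE is advanced with forward Euler at step sizes h and h/2, the leading truncation error is estimated from the two results, and an extrapolated value is formed. The test problem and step sizes are illustrative.

      # Minimal sketch: Richardson extrapolation of a first-order (forward Euler)
      # time integrator applied to y' = -y, y(0) = 1, so the exact answer is known.
      import math

      def euler(f, y0, t_end, h):
          n = round(t_end / h)          # assume h divides t_end evenly
          y, t = y0, 0.0
          for _ in range(n):
              y += h * f(t, y)
              t += h
          return y

      f = lambda t, y: -y
      t_end, p = 1.0, 1                 # p = order of the method (Euler is first order)

      y_h  = euler(f, 1.0, t_end, 0.01)
      y_h2 = euler(f, 1.0, t_end, 0.005)

      error_estimate = (y_h2 - y_h) / (2**p - 1)     # estimated error in y_h2
      y_extrap = y_h2 + error_estimate               # = (2^p*y_h2 - y_h)/(2^p - 1)

      exact = math.exp(-t_end)
      print("error(y_h2):", abs(y_h2 - exact), " estimated:", abs(error_estimate))
      print("error(extrapolated):", abs(y_extrap - exact))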

  13. On the Numerical Solution of the Integral Equation Formulation for Transient Structural Synthesis

    DTIC Science & Technology

    2014-09-01

    history of integral equations dates back to the early nineteenth century when the profound mathematical insights of Newton and Leibniz were being ... matrix. As shown in [10], the element stiffness matrix (for an Euler-Bernoulli beam element of length l and flexural rigidity EI) is as follows:

        K_e = (EI / l^3) * [  12     6l   -12     6l
                              6l    4l^2   -6l   2l^2
                             -12    -6l     12    -6l
                              6l    2l^2   -6l   4l^2 ]
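
    For readers who want to reproduce the matrix above, a short sketch constructing the standard 4x4 Euler-Bernoulli beam element stiffness matrix (degrees of freedom: transverse displacement and rotation at each node) might look like the following; the numeric values of E, I and l are placeholders, not data from the report.

      # Sketch: assemble the standard Euler-Bernoulli beam element stiffness matrix.
      import numpy as np

      def beam_element_stiffness(E, I, l):
          """4x4 stiffness matrix for DOFs [w1, theta1, w2, theta2]."""
          k = np.array([
              [ 12.0,    6*l,  -12.0,    6*l  ],
              [  6*l,  4*l**2,  -6*l,  2*l**2 ],
              [-12.0,   -6*l,   12.0,   -6*l  ],
              [  6*l,  2*l**2,  -6*l,  4*l**2 ],
          ])
          return (E * I / l**3) * k

      K = beam_element_stiffness(E=210e9, I=8.33e-6, l=1.0)   # placeholder steel-like values
      print(np.allclose(K, K.T))   # the stiffness matrix is symmetric -> True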

  14. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    PubMed

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-11-11

    Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.
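
    The record does not give the algorithm's details, but the general idea of exploiting block sparsity and symmetry in a covariance prediction P <- F P F^T + Q can be sketched as follows: products with zero blocks of F are skipped entirely, and symmetry of the result is enforced at the end. The block layout, sizes and demo matrices below are illustrative assumptions, not the SINS/GPS error model from the paper.

      # Sketch: covariance prediction P <- F P F^T + Q exploiting block sparsity of F
      # and symmetry of P (illustrative only; not the paper's SINS/GPS formulation).
      import numpy as np

      def predict_cov_block_sparse(P, F_blocks, Q, bs):
          """F is given as a dict {(i, j): block} of nonzero bs x bs blocks on a
          block grid; zero blocks are simply absent and never multiplied."""
          # FP = F @ P, visiting nonzero blocks of F only
          FP = np.zeros_like(P)
          for (i, j), Fij in F_blocks.items():
              FP[i*bs:(i+1)*bs, :] += Fij @ P[j*bs:(j+1)*bs, :]
          # P_new = FP @ F^T + Q, again visiting nonzero blocks of F only
          P_new = Q.copy()
          for (i, j), Fij in F_blocks.items():
              P_new[:, i*bs:(i+1)*bs] += FP[:, j*bs:(j+1)*bs] @ Fij.T
          return 0.5 * (P_new + P_new.T)        # enforce symmetry numerically

      # Tiny demo with a 3x3 block grid where only a few blocks of F are nonzero.
      bs, nb = 3, 3
      n = bs * nb
      rng = np.random.default_rng(1)
      F_blocks = {(0, 0): np.eye(bs), (1, 1): np.eye(bs),
                  (2, 2): np.eye(bs), (0, 1): 0.01 * np.eye(bs)}
      A = rng.standard_normal((n, n)); P = A @ A.T
      Q = 1e-4 * np.eye(n)

      F = np.zeros((n, n))
      for (i, j), blk in F_blocks.items():
          F[i*bs:(i+1)*bs, j*bs:(j+1)*bs] = blk

      print(np.allclose(predict_cov_block_sparse(P, F_blocks, Q, bs),
                        F @ P @ F.T + Q))        # True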

  15. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems

    PubMed Central

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-01-01

    Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted “useful” data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247

  16. On the formulation, parameter identification and numerical integration of the EMMI model :plasticity and isotropic damage.

    SciTech Connect

    Bammann, Douglas J.; Johnson, G. C. (University of California, Berkeley, CA); Marin, Esteban B.; Regueiro, Richard A.

    2006-01-01

    In this report we present the formulation of the physically-based Evolving Microstructural Model of Inelasticity (EMMI). The specific version of the model treated here describes the plasticity and isotropic damage of metals and is currently being applied to model the ductile failure process in structural components of the W80 program. The formulation of the EMMI constitutive equations is framed in the context of the large-deformation kinematics of solids and the thermodynamics of internal state variables. The formulation focuses first on developing the plasticity equations in both the relaxed (unloaded) and current configurations. The equations in the current configuration, expressed in non-dimensional form, are used to devise the identification procedure for the plasticity parameters. The model is then extended to include a porosity-based isotropic damage state variable to describe the progressive deterioration of the strength and mechanical properties of metals induced by deformation. The numerical treatment of these coupled plasticity-damage constitutive equations is explained in detail. A number of examples are solved to validate the numerical implementation of the model.
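
    The EMMI equations themselves are not reproduced in this record, but the flavor of implicitly integrating a rate-independent constitutive update can be conveyed with a textbook one-dimensional return-mapping step for linear isotropic hardening. This is a deliberately simple stand-in, not the EMMI model; all material parameters below are illustrative.

      # Sketch: one implicit (backward-Euler) return-mapping step for 1D
      # rate-independent plasticity with linear isotropic hardening.
      # A textbook stand-in, not the EMMI model itself.

      def return_map_1d(eps_new, eps_p_old, alpha_old, E=200e3, H=2e3, sigma_y=250.0):
          """Return (stress, plastic strain, hardening variable) after one strain step.
          Units are arbitrary but consistent (e.g. MPa)."""
          sigma_trial = E * (eps_new - eps_p_old)            # elastic predictor
          f_trial = abs(sigma_trial) - (sigma_y + H * alpha_old)
          if f_trial <= 0.0:                                 # step stays elastic
              return sigma_trial, eps_p_old, alpha_old
          dgamma = f_trial / (E + H)                         # plastic corrector (closed form in 1D)
          sign = 1.0 if sigma_trial >= 0.0 else -1.0
          sigma = sigma_trial - E * dgamma * sign
          return sigma, eps_p_old + dgamma * sign, alpha_old + dgamma

      # Drive the model with a monotonically increasing strain history.
      eps_p, alpha = 0.0, 0.0
      for k in range(1, 11):
          eps = 0.0005 * k                                   # total strain at this step
          sigma, eps_p, alpha = return_map_1d(eps, eps_p, alpha)
          print(f"eps = {eps:.4f}  sigma = {sigma:7.1f}  eps_p = {eps_p:.5f}")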

  17. golem95: A numerical program to calculate one-loop tensor integrals with up to six external legs

    NASA Astrophysics Data System (ADS)

    Binoth, T.; Guillet, J.-Ph.; Heinrich, G.; Pilon, E.; Reiter, T.

    2009-11-01

    We present a program for the numerical evaluation of form factors entering the calculation of one-loop amplitudes with up to six external legs. The program is written in Fortran95 and performs the reduction to a certain set of basis integrals numerically, using a formalism where inverse Gram determinants can be avoided. It can be used to calculate one-loop amplitudes with massless internal particles in a fast and numerically stable way. Catalogue identifier: AEEO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEO_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 50 105 No. of bytes in distributed program, including test data, etc.: 241 657 Distribution format: tar.gz Programming language: Fortran95 Computer: Any computer with a Fortran95 compiler Operating system: Linux, Unix RAM: RAM used per form factor is insignificant, even for a rank six six-point form factor Classification: 4.4, 11.1 External routines: Perl programming language (http://www.perl.com/) Nature of problem: Evaluation of one-loop multi-leg tensor integrals occurring in the calculation of next-to-leading order corrections to scattering amplitudes in elementary particle physics. Solution method: Tensor integrals are represented in terms of form factors and a set of basic building blocks ("basis integrals"). The reduction to the basis integrals is

  18. Integrated Numerical Simulation of Thermo-Hydro-Chemical Phenomena Associated with Geologic Disposal of High-Level Radioactive Waste

    NASA Astrophysics Data System (ADS)

    Park, Sang-Uk; Kim, Jun-Mo; Kihm, Jung-Hwi

    2014-05-01

    A series of numerical simulations was performed using a multiphase thermo-hydro-chemical numerical model to predict in an integrated manner and quantitatively evaluate the thermo-hydro-chemical phenomena due to heat generation associated with geologic disposal of high-level radioactive waste. The average mineralogical composition of the fifteen unweathered igneous rock bodies, which were classified as granite, in the Republic of Korea was adopted as an initial (primary) mineralogical composition of the host rock of the repository of high-level radioactive waste in the numerical simulations. The numerical simulation results show that temperature rises and thus convective groundwater flow occurs near the repository due to heat generation associated with geologic disposal of high-level radioactive waste. Under these circumstances, a series of water-rock interactions take place. As a result, among the primary minerals, quartz, plagioclase (albite), biotite (annite), and muscovite are dissolved. However, orthoclase is initially precipitated and is then dissolved, whereas microcline is initially dissolved and is then precipitated. On the other hand, the secondary minerals such as kaolinite, Na-smectite, chlorite, and hematite are precipitated and are then partly dissolved. In addition, such dissolution and precipitation of the primary and secondary minerals change groundwater chemistry (quality) and induce reactive chemical transport. As a result, in groundwater, Na+, Fe2+, and HCO3- concentrations initially decrease, whereas K+, AlO2-, and aqueous SiO2 concentrations initially increase. On the other hand, H+ concentration initially increases and thus pH initially decreases due to dissociation of groundwater in order to provide OH-, which is essential in precipitation of Na-smectite and chlorite. Thus, the above-mentioned numerical simulation results suggest that thermo-hydro-chemical numerical simulation can provide a better understanding of heat transport, groundwater flow, and reactive

  19. Elementary Techniques of Numerical Integration and Their Computer Implementation. Applications of Elementary Calculus to Computer Science. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Unit 379.

    ERIC Educational Resources Information Center

    Motter, Wendell L.

    It is noted that there are some integrals which cannot be evaluated by determining an antiderivative, and these integrals must be subjected to other techniques. Numerical integration is one such method; it provides a sum that is an approximate value for some integral types. This module's purpose is to introduce methods of numerical integration and…
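
    As a concrete example of the techniques the unit introduces, the sketch below approximates the integral of exp(-x^2) from 0 to 1 (an integrand with no elementary antiderivative) with the trapezoidal rule and Simpson's rule, and compares both against the exact value expressed through the error function.

      # Trapezoidal and Simpson's rule estimates of the integral of exp(-x^2) on [0, 1],
      # an integral with no elementary antiderivative.
      import math

      def trapezoid(f, a, b, n):
          h = (b - a) / n
          s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
          return h * s

      def simpson(f, a, b, n):          # n must be even
          h = (b - a) / n
          s = f(a) + f(b)
          s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
          s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
          return h * s / 3.0

      f = lambda x: math.exp(-x * x)
      reference = 0.5 * math.sqrt(math.pi) * math.erf(1.0)   # exact value via erf

      for n in (4, 16, 64):
          print(n, trapezoid(f, 0, 1, n) - reference, simpson(f, 0, 1, n) - reference)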

  20. Integration of finite element analysis and numerical optimization techniques for RAM transport package design

    SciTech Connect

    Harding, D.C.; Eldred, M.S.; Witkowski, W.R.

    1995-12-31

    Type B radioactive material transport packages must meet strict Nuclear Regulatory Commission (NRC) regulations specified in 10 CFR 71. Type B containers include impact limiters, radiation or thermal shielding layers, and one or more containment vessels. In the past, each component was typically designed separately based on its driving constraint and the expertise of the designer. The components were subsequently assembled and the design modified iteratively until all of the design criteria were met. This approach neglects the fact that components may serve secondary purposes as well as primary ones. For example, an impact limiter's primary purpose is to act as an energy absorber and protect the contents of the package, but it can also act as a heat dissipater or insulator. Designing the component to maximize its performance with respect to both objectives can be accomplished using numerical optimization techniques.

  1. Path dependence of J in three numerical examples. [J integral in three crack propagation problems

    NASA Technical Reports Server (NTRS)

    Karabin, M. E., Jr.; Swedlow, J. L.

    1979-01-01

    Three cracked geometries are studied with the aid of a new finite element model. The procedure employs a variable singularity at the crack tip that tracks changes in the material response during the loading process. Two of the problems are tension-loaded center-crack panels and the other is a three-point bend specimen. Results generally agree with other numerical and analytical analyses, except for the finding that J becomes path dependent as a substantial plastic zone develops. Credible J values are obtained near the crack tip, and J shows a significant increase as the radius of the J path increases over two orders of magnitude. Incremental and deformation theories give identical results provided the stresses exhibit proportionality, which is found in the far-field stresses but not near the tip.

  2. Efficient O(N) integration for all-electron electronic structure calculation using numeric basis functions

    SciTech Connect

    Havu, V.; Blum, V.; Havu, P.; Scheffler, M.

    2009-12-01

    We consider the problem of developing O(N) scaling grid-based operations needed in many central operations when performing electronic structure calculations with numeric atom-centered orbitals as basis functions. We outline the overall formulation of localized algorithms, and specifically the creation of localized grid batches. The choice of the grid partitioning scheme plays an important role in the performance and memory consumption of the grid-based operations. Three different top-down partitioning methods are investigated, and compared with formally more rigorous yet much more expensive bottom-up algorithms. We show that a conceptually simple top-down grid partitioning scheme achieves essentially the same efficiency as the more rigorous bottom-up approaches.
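
    The partitioning algorithms compared in the paper are not spelled out in this record, but the general idea of a top-down scheme can be sketched as a recursive bisection of integration points into spatially compact batches of bounded size. The maximum batch size and the bisection rule below (split along the longest bounding-box axis at the median) are illustrative assumptions, not the paper's algorithm.

      # Sketch: top-down partitioning of integration grid points into localized
      # batches by recursive bisection along the longest bounding-box axis.
      import numpy as np

      def partition(points, max_batch=64):
          """Return a list of index arrays, each a spatially compact batch."""
          def split(idx):
              if idx.size <= max_batch:
                  return [idx]
              pts = points[idx]
              axis = np.argmax(pts.max(axis=0) - pts.min(axis=0))   # longest box edge
              median = np.median(pts[:, axis])
              left = idx[pts[:, axis] <= median]
              right = idx[pts[:, axis] > median]
              if left.size == 0 or right.size == 0:                 # degenerate split
                  half = idx.size // 2
                  left, right = idx[:half], idx[half:]
              return split(left) + split(right)
          return split(np.arange(len(points)))

      rng = np.random.default_rng(2)
      pts = rng.standard_normal((5000, 3))        # fake 3D integration points
      batches = partition(pts)
      print(len(batches), max(b.size for b in batches))   # all batches <= 64 points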

  3. Long-Time Numerical Integration of the Three-Dimensional Wave Equation in the Vicinity of a Moving Source

    NASA Technical Reports Server (NTRS)

    Ryabenkii, V. S.; Turchaninov, V. I.; Tsynkov, S. V.

    1999-01-01

    We propose a family of algorithms for solving numerically a Cauchy problem for the three-dimensional wave equation. The sources that drive the equation (i.e., the right-hand side) are compactly supported in space for any given time; they, however, may actually move in space with a subsonic speed. The solution is calculated inside a finite domain (e.g., sphere) that also moves with a subsonic speed and always contains the support of the right-hand side. The algorithms employ a standard consistent and stable explicit finite-difference scheme for the wave equation. They allow one to calculate the solution for arbitrarily long time intervals without error accumulation and with a fixed, non-growing amount of CPU time and memory required for advancing one time step. The algorithms are inherently three-dimensional; they rely on the presence of lacunae in the solutions of the wave equation in odd-dimensional spaces. The methodology presented in the paper is, in fact, a building block for constructing the nonlocal highly accurate unsteady artificial boundary conditions to be used for the numerical simulation of waves propagating with finite speed over unbounded domains.
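
    The algorithms in the record are three-dimensional and rely on lacunae, but the "standard consistent and stable explicit finite-difference scheme" underlying them can be illustrated in one dimension by the classic leapfrog update sketched below; the grid, Courant number and initial pulse are illustrative choices only.

      # Sketch: explicit second-order (leapfrog) finite-difference scheme for the
      # 1D wave equation u_tt = c^2 u_xx with homogeneous Dirichlet boundaries.
      import numpy as np

      c, L, nx = 1.0, 1.0, 201
      dx = L / (nx - 1)
      courant = 0.9                       # CFL number, must be <= 1 for stability
      dt = courant * dx / c
      x = np.linspace(0.0, L, nx)

      u_old = np.exp(-200.0 * (x - 0.5) ** 2)   # initial pulse, zero initial velocity
      u = u_old.copy()                           # special first step (Taylor expansion)
      u[1:-1] = u_old[1:-1] + 0.5 * courant**2 * (u_old[2:] - 2*u_old[1:-1] + u_old[:-2])

      for _ in range(400):                       # leapfrog time stepping
          u_new = np.empty_like(u)
          u_new[1:-1] = (2*u[1:-1] - u_old[1:-1]
                         + courant**2 * (u[2:] - 2*u[1:-1] + u[:-2]))
          u_new[0] = u_new[-1] = 0.0
          u_old, u = u, u_new

      print("max |u| after 400 steps:", np.abs(u).max())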

  4. Numerical methods for the simulation of complex multi-body flows with applications for the integrated Space Shuttle vehicle

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    1992-01-01

    This project forms part of the long term computational effort to simulate the time dependent flow over the integrated Space Shuttle vehicle (orbiter, solid rocket boosters (SRB's), external tank (ET), and attach hardware) during its ascent mode for various nominal and abort flight conditions. Due to the limitations of experimental data such as wind tunnel wall effects and the difficulty of safely obtaining valid flight data, numerical simulations are undertaken to supplement the existing data base. This data can then be used to predict the aerodynamic behavior over a wide range of flight conditions. Existing computational results show relatively good overall comparison with experiments but further refinement is required to reduce numerical errors and to obtain finer agreements over a larger parameter space. One of the important goals of this project is to obtain better comparisons between numerical simulations and experiments. In the simulations performed so far, the geometry has been simplified in various ways to reduce the complexity so that useful results can be obtained in a reasonable time frame due to limitations in computer resources. In this project, the finer details of the major components of the Space Shuttle are modeled better by including more complexity in the geometry definition. Smaller components not included in early Space Shuttle simulations will now be modeled and gridded.

  5. Mosaic-skeleton method as applied to the numerical solution of three-dimensional Dirichlet problems for the Helmholtz equation in integral form

    NASA Astrophysics Data System (ADS)

    Kashirin, A. A.; Smagin, S. I.; Taltykina, M. Yu.

    2016-04-01

    Interior and exterior three-dimensional Dirichlet problems for the Helmholtz equation are solved numerically. They are formulated as equivalent boundary Fredholm integral equations of the first kind and are approximated by systems of linear algebraic equations, which are then solved numerically by applying an iteration method. The mosaic-skeleton method is used to speed up the solution procedure.

  6. Integrating experimental and numerical methods for a scenario-based quantitative assessment of subsurface energy storage options

    NASA Astrophysics Data System (ADS)

    Kabuth, Alina; Dahmke, Andreas; Hagrey, Said Attia al; Berta, Márton; Dörr, Cordula; Koproch, Nicolas; Köber, Ralf; Köhn, Daniel; Nolde, Michael; Tilmann Pfeiffer, Wolf; Popp, Steffi; Schwanebeck, Malte; Bauer, Sebastian

    2016-04-01

    Within the framework of the transition to renewable energy sources ("Energiewende"), the German government defined the target of producing 60 % of the final energy consumption from renewable energy sources by the year 2050. However, renewable energies are subject to natural fluctuations. Energy storage can help to buffer the resulting time shifts between production and demand. Subsurface geological structures provide large potential capacities for energy stored in the form of heat or gas on daily to seasonal time scales. In order to explore this potential sustainably, the possible induced effects of energy storage operations have to be quantified for both specified normal operation and events of failure. The ANGUS+ project therefore integrates experimental laboratory studies with numerical approaches to assess subsurface energy storage scenarios and monitoring methods. Subsurface storage options for gas, i.e. hydrogen, synthetic methane and compressed air in salt caverns or porous structures, as well as subsurface heat storage are investigated with respect to site prerequisites, storage dimensions, induced effects, monitoring methods and integration into spatial planning schemes. The conceptual interdisciplinary approach of the ANGUS+ project towards the integration of subsurface energy storage into a sustainable subsurface planning scheme is presented here, and this approach is then demonstrated using the examples of two selected energy storage options: Firstly, the option of seasonal heat storage in a shallow aquifer is presented. Coupled thermal and hydraulic processes induced by periodic heat injection and extraction were simulated in the open-source numerical modelling package OpenGeoSys. Situations of specified normal operation as well as cases of failure in operational storage with leaking heat transfer fluid are considered. Bench-scale experiments provided parameterisations of temperature dependent changes in shallow groundwater hydrogeochemistry. As a

  7. Computational and numerical aspects of using the integral equation method for adhesive layer fracture mechanics analysis

    SciTech Connect

    Giurgiutiu, V.; Ionita, A.; Dillard, D.A.; Graffeo, J.K.

    1996-12-31

    Fracture mechanics analysis of adhesively bonded joints has attracted considerable attention in recent years. A possible approach to the analysis of adhesive layer cracks is to study a brittle adhesive between 2 elastic half-planes representing the substrates. A 2-material 3-region elasticity problem is set up and has to be solved. A modeling technique based on the work of Fleck, Hutchinson, and Suo is used. Two complex potential problems using Muskhelishvili's formulation are set up for the 3-region, 2-material model: (a) a distribution of edge dislocations is employed to simulate the crack and its near field; and (b) a crack-free problem is used to simulate the effect of the external loading applied in the far field. Superposition of the two problems is followed by matching tractions and displacements at the bimaterial boundaries. The Cauchy principal value integral is used to treat the singularities. Imposing the traction-free boundary conditions over the entire crack length yielded a linear system of two integral equations. The parameters of the problem are Dundurs' elastic mismatch coefficients, α and β, and the ratio c/H representing the geometric position of the crack in the adhesive layer.
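
    The record mentions treating the singularities through Cauchy principal value integrals. A standard way to evaluate such an integral numerically is to subtract the singular part analytically, as in the short sketch below; the density f and the interval are illustrative placeholders, not the paper's dislocation kernels.

      # Sketch: numerical evaluation of a Cauchy principal value integral
      #     PV int_{-1}^{1} f(x) / (x - x0) dx,   -1 < x0 < 1,
      # by singularity subtraction: (f(x) - f(x0))/(x - x0) is regular, and the
      # subtracted part integrates analytically to f(x0) * ln((1 - x0)/(1 + x0)).
      import math
      from scipy.integrate import quad

      def pv_integral(f, x0, eps=1e-7):
          def regular(x):
              if abs(x - x0) < eps:       # removable singularity: central-difference limit
                  return (f(x0 + eps) - f(x0 - eps)) / (2 * eps)
              return (f(x) - f(x0)) / (x - x0)
          val, _ = quad(regular, -1.0, 1.0, points=[x0])
          return val + f(x0) * math.log((1.0 - x0) / (1.0 + x0))

      f = lambda x: math.exp(x)           # illustrative density
      print(pv_integral(f, 0.3))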

  8. Numeric solution of the electric field integral equation using Galerkin's method for axisymmetric cases

    NASA Astrophysics Data System (ADS)

    Lileg, Klemens

    1990-12-01

    The electric field integral equation is solved for a cylindrical antenna of arbitrary radius with flat endcaps using the method of moments. Trigonometric subdomain functions are used as basis functions; the weighting functions have the same shape as the basis functions (Galerkin's method). For the endcaps the approximation of the program NEC is used; the excitation is due to a homogeneous field in a gap in the center of the antenna. No analytical approximations are employed in the evaluation of the integrals needed for the computation of the impedance matrix. The admittance so obtained converges better than that found with the help of NEC, but in many cases it is not completely satisfactory. Therefore, the approximate conditions for the endcaps are introduced, and trigonometric subdomain functions analogous to those used on the cylinder are used as basis functions. All additional evaluations are done without approximations. The results for the admittance converge in all cases even for a small number of segments. The impedance is measured for a number of monopoles of various radii above a conducting plane; for all frequencies good agreement with the calculation is obtained.

  9. Accurate stress resultants equations for laminated composite deep thick shells

    SciTech Connect

    Qatu, M.S.

    1995-11-01

    This paper derives accurate equations for the normal and shear force as well as bending and twisting moment resultants for laminated composite deep, thick shells. The stress resultant equations for laminated composite thick shells are shown to be different from those of plates. This is due to the fact that the stresses over the thickness of the shell have to be integrated on a trapezoidal-like shell element to obtain the stress resultants. Numerical results are obtained and show that accurate stress resultants are needed for laminated composite deep thick shells, especially if the curvature is not spherical.
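
    As a minimal illustration of the point about integrating over a trapezoidal-like element, the sketch below compares a plate-type resultant N = int sigma(z) dz with a deep-shell-type resultant in which the stress is weighted by the geometric factor (1 + z/R) arising from the curved element. The linear stress profile, thickness and radius are illustrative numbers, and the paper's full laminated-shell equations contain considerably more terms than this single factor.

      # Sketch: effect of the (1 + z/R) geometric factor on a through-thickness
      # stress resultant for a deep/thick shell versus the flat-plate expression.
      import numpy as np

      def trap(f, z):                     # simple trapezoidal quadrature helper
          return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z)))

      h, R = 0.05, 0.2                    # illustrative thickness and radius (h/R = 0.25)
      z = np.linspace(-h / 2, h / 2, 2001)
      sigma = 100e6 + 4e9 * z             # illustrative linear stress profile [Pa]

      N_plate = trap(sigma, z)                         # plate-type resultant
      N_shell = trap(sigma * (1.0 + z / R), z)         # shell-type resultant

      print(f"plate: {N_plate:.3e} N/m")
      print(f"shell: {N_shell:.3e} N/m   ({100*(N_shell/N_plate - 1):.1f}% difference)")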

  10. Numerical Modeling for Integrated Design of a DNAPL Partitioning Tracer Test

    NASA Astrophysics Data System (ADS)

    McCray, J. E.; Divine, C. E.; Dugan, P. J.; Wolf, L.; Boving, T.; Louth, M.; Brusseau, M. L.; Hayes, D.

    2002-12-01

    Partitioning tracer tests (PTTs) are commonly used to estimate the location and volume of nonaqueous-phase liquids (NAPLs) at contaminated groundwater sites. PTTs are completed before and after remediation efforts as one means to assess remediation effectiveness. PTT design is complex. Numerical models are invaluable tools for designing a PTT, particularly for designing flow rates and selecting tracers to ensure proper tracer breakthrough times, spatial design of injection-extraction wells and rates to maximize tracer capture, well-specific sampling density and frequency, and appropriate tracer-chemical masses. Generally, the design requires consideration of the following factors: type of contaminant; distribution of contaminant at the site, including location of hot spots; site hydraulic characteristics; measurement of the partitioning coefficients for the various tracers; the time allotted to conduct the PTT; evaluation of the magnitude and arrival time of the tracer breakthrough curves; duration of the tracer input pulse; maximum tracer concentrations; analytical detection limits for the tracers; estimation of the capture zone of the well field to ensure tracer mass balance and to limit residual tracer concentrations left in the subsurface; effect of chemical remediation agents on the PTT results; and disposal of the extracted tracer solution. These design principles are applied to a chemical-enhanced remediation effort for a chlorinated-solvent dense NAPL (DNAPL) site at Little Creek Naval Amphibious Base in Virginia Beach, Virginia. For this project, the hydrology and pre-PTT contaminant distribution were characterized using traditional methods (slug tests, groundwater and soil concentrations from monitoring wells, and geoprobe analysis), as well as membrane interface probe analysis. Additional wells were installed after these studies. Partitioning tracers were selected based on the primary DNAPL contaminants at the site, expected NAPL saturations
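
    A central design quantity behind tracer selection and breakthrough timing is the retardation of a partitioning tracer relative to a conservative one. A common method-of-moments estimate (stated here as the standard textbook relation, not something given in this abstract) takes the retardation factor R as the ratio of mean arrival times and converts it to an average NAPL saturation via S_n = (R - 1) / (R - 1 + K), with K the NAPL-water partition coefficient. A small sketch with synthetic breakthrough curves:

      # Sketch: estimate average NAPL saturation from the first temporal moments of a
      # conservative and a partitioning tracer breakthrough curve (standard
      # method-of-moments relation; the curves and K below are made up for illustration).
      import numpy as np

      def trap(y, x):
          return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

      def napl_saturation(t, c_cons, c_part, K):
          """Average NAPL saturation from breakthrough curves of a conservative and a
          partitioning tracer with NAPL-water partition coefficient K."""
          t_cons = trap(t * c_cons, t) / trap(c_cons, t)   # mean arrival times
          t_part = trap(t * c_part, t) / trap(c_part, t)
          R = t_part / t_cons                              # retardation factor
          return (R - 1.0) / (R - 1.0 + K)

      # Synthetic, purely illustrative breakthrough curves (Gaussian pulses).
      t = np.linspace(0.0, 30.0, 3001)                     # days
      c_cons = np.exp(-0.5 * ((t - 8.0) / 1.5) ** 2)
      c_part = np.exp(-0.5 * ((t - 10.0) / 1.9) ** 2)
      print(f"estimated NAPL saturation: {napl_saturation(t, c_cons, c_part, K=5.0):.3f}")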

  11. Study of vortex ring dynamics in the nonlinear Schrodinger equation utilizing GPU-accelerated high-order compact numerical integrators

    NASA Astrophysics Data System (ADS)

    Caplan, Ronald Meyer

    We numerically study the dynamics and interactions of vortex rings in the nonlinear Schrodinger equation (NLSE). Single-ring dynamics for both bright and dark vortex rings are explored, including their traverse velocity, stability, and perturbations resulting in quadrupole oscillations. Multi-ring dynamics of dark vortex rings are investigated, including scattering and merging of two colliding rings, leapfrogging interactions of co-traveling rings, as well as co-moving steady-state multi-ring ensembles. Simulations of choreographed multi-ring setups are also performed, leading to intriguing interaction dynamics. Due to the inherent lack of a closed-form solution for vortex rings and the dimensionality in which they live, efficient numerical methods to integrate the NLSE have to be developed in order to perform the extensive number of required simulations. To facilitate this, compact high-order numerical schemes for the spatial derivatives are developed which include a new semi-compact modulus-squared Dirichlet boundary condition. The schemes are combined with a fourth-order Runge-Kutta time-stepping scheme in order to keep the overall method fully explicit. To ensure efficient use of the schemes, a stability analysis is performed to find bounds on the largest usable time step-size as a function of the spatial step-size. The numerical methods are implemented into codes which are run on NVIDIA graphics processing unit (GPU) parallel architectures. The codes running on the GPU are shown to be many times faster than their serial counterparts. The codes are developed with future usability in mind, and therefore are written to interface with MATLAB utilizing custom GPU-enabled C codes with a MEX-compiler interface. Reproducibility of results is achieved by combining the codes into a code package called NLSEmagic which is freely distributed on a dedicated website.
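
    The solver in the record uses compact spatial differencing, a modulus-squared Dirichlet boundary condition and GPU kernels, none of which are reproduced here. As a much smaller illustration of the basic approach (explicit fourth-order Runge-Kutta in time with finite-difference spatial derivatives), the sketch below evolves the 1D focusing NLSE on a periodic grid and monitors conservation of the L2 norm; the grid, time step and initial condition are illustrative choices.

      # Sketch: RK4 time stepping of the 1D focusing NLSE
      #     i psi_t + psi_xx + |psi|^2 psi = 0
      # with a second-order central difference for psi_xx (periodic boundaries).
      import numpy as np

      nx, L = 256, 40.0
      dx = L / nx
      x = np.linspace(-L / 2, L / 2, nx, endpoint=False)
      dt = 0.1 * dx**2                    # small explicit step, well inside RK4 stability

      def rhs(psi):
          lap = (np.roll(psi, -1) - 2.0 * psi + np.roll(psi, 1)) / dx**2
          return 1j * (lap + np.abs(psi) ** 2 * psi)

      psi = 1.0 / np.cosh(x) * np.exp(0j)   # bright-soliton-like initial condition

      norm0 = np.sum(np.abs(psi) ** 2) * dx
      for _ in range(2000):                 # classical RK4 steps
          k1 = rhs(psi)
          k2 = rhs(psi + 0.5 * dt * k1)
          k3 = rhs(psi + 0.5 * dt * k2)
          k4 = rhs(psi + dt * k3)
          psi = psi + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

      norm = np.sum(np.abs(psi) ** 2) * dx
      print("relative change in L2 norm:", abs(norm - norm0) / norm0)   # should be tiny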

  12. A robust and accurate formulation of molecular and colloidal electrostatics.

    PubMed

    Sun, Qiang; Klaseboer, Evert; Chan, Derek Y C

    2016-08-07

    This paper presents a re-formulation of the boundary integral method for the Debye-Hückel model of molecular and colloidal electrostatics that removes the mathematical singularities that have to date been accepted as an intrinsic part of the conventional boundary integral equation method. The essence of the present boundary regularized integral equation formulation consists of subtracting a known solution from the conventional boundary integral method in such a way as to cancel out the singularities associated with the Green's function. This approach better reflects the non-singular physical behavior of the systems on boundaries with the benefits of the following: (i) the surface integrals can be evaluated accurately using quadrature without any need to devise special numerical integration procedures, (ii) being able to use quadratic or spline function surface elements to represent the surface more accurately and the variation of the functions within each element is represented to a consistent level of precision by appropriate interpolation functions, (iii) being able to calculate electric fields, even at boundaries, accurately and directly from the potential without having to solve hypersingular integral equations and this imparts high precision in calculating the Maxwell stress tensor and consequently, intermolecular or colloidal forces, (iv) a reliable way to handle geometric configurations in which different parts of the boundary can be very close together without being affected by numerical instabilities, therefore potentials, fields, and forces between surfaces can be found accurately at surface separations down to near contact, and (v) having the simplicity of a formulation that does not require complex algorithms to handle singularities will result in significant savings in coding effort and in the reduction of opportunities for coding errors. These advantages are illustrated using examples drawn from molecular and colloidal electrostatics.

  13. A robust and accurate formulation of molecular and colloidal electrostatics

    NASA Astrophysics Data System (ADS)

    Sun, Qiang; Klaseboer, Evert; Chan, Derek Y. C.

    2016-08-01

    This paper presents a re-formulation of the boundary integral method for the Debye-Hückel model of molecular and colloidal electrostatics that removes the mathematical singularities that have to date been accepted as an intrinsic part of the conventional boundary integral equation method. The essence of the present boundary regularized integral equation formulation consists of subtracting a known solution from the conventional boundary integral method in such a way as to cancel out the singularities associated with the Green's function. This approach better reflects the non-singular physical behavior of the systems on boundaries with the benefits of the following: (i) the surface integrals can be evaluated accurately using quadrature without any need to devise special numerical integration procedures, (ii) being able to use quadratic or spline function surface elements to represent the surface more accurately and the variation of the functions within each element is represented to a consistent level of precision by appropriate interpolation functions, (iii) being able to calculate electric fields, even at boundaries, accurately and directly from the potential without having to solve hypersingular integral equations and this imparts high precision in calculating the Maxwell stress tensor and consequently, intermolecular or colloidal forces, (iv) a reliable way to handle geometric configurations in which different parts of the boundary can be very close together without being affected by numerical instabilities, therefore potentials, fields, and forces between surfaces can be found accurately at surface separations down to near contact, and (v) having the simplicity of a formulation that does not require complex algorithms to handle singularities will result in significant savings in coding effort and in the reduction of opportunities for coding errors. These advantages are illustrated using examples drawn from molecular and colloidal electrostatics.

  14. The lambda-scheme. [for numerical integration of Euler equation of compressible gas flow

    NASA Technical Reports Server (NTRS)

    Moretti, G.

    1979-01-01

    A method for integrating the Euler equations of gas dynamics for compressible flows in any hyperbolic case is presented. This method is applied to the Mach number distribution over a stretch of an infinite duct having a variable cross section, and to the distribution in a channel opening into a vacuum with the Mach number equalling 1.04. An example of the ability of this method to handle two-dimensional unsteady flows is shown using the steady shock-and-isobars pattern reached asymptotically about an ablated blunt body with a free stream Mach number equalling 12. A final example is presented where the technique is applied to a three-dimensional steady supersonic flow, with a Mach number of 2 and an angle of attack of 5 deg.

  15. Radial 32P ion implantation using a coaxial plasma reactor: Activity imaging and numerical integration

    NASA Astrophysics Data System (ADS)

    Fortin, M. A.; Dufresne, V.; Paynter, R.; Sarkissian, A.; Stansfield, B.

    2004-12-01

    Beta-emitting biomedical implants are currently employed in angioplasty, in the treatment of certain types of cancers, and in the embolization of aneurysms with platinum coils. Radioisotopes such as 32P can be implanted using plasma-based ion implantation (PBII). In this article, we describe a reactor that was developed to implant radioisotopes into cylindrical metallic objects. The plasma first ionizes radioisotopes sputtered from a target, and then acts as the source of particles to be implanted into the biased biomedical device. The plasma therefore plays a major role in the ionization/implantation process. Following a sequence of implantation tests, the liners protecting the interior walls of the reactor were changed and the radioactivity on them measured. This study demonstrates that the radioactive deposits on these protective liners, adequately imaged by radiography, can indicate the distribution of the radioisotopes that are not implanted. The resulting maps give unique information about the activity distribution, which is influenced by the sputtering of the 32P-containing fragments, their ionization in the plasma, and also by the subsequent ion transport mechanisms. Such information can be interpreted and used to significantly improve the efficiency of the implantation procedure. Using a surface barrier detector, a comparative study established a relationship between the gray scale of radiographs of the liners, and activity measurements. An integration process allows the quantification of the activities on the walls and components of the reactor. Finally, the resulting integral of the 32P activity is correlated to the sum of the radioactivity amounts that were sputtered from radioactive targets inside the implanter before the dismantling procedure. This balance addresses the issue of security regarding PBII technology and confirms the confinement of the radioactivity inside the chamber.

  16. Integrated numerical design of an innovative Lower Hybrid launcher for Alcator C-Mod

    SciTech Connect

    Meneghini, O.; Shiraiwa, S.; Beck, W.; Irby, J.; Koert, P.; Parker, R. R.; Viera, R.; Wukitch, S.; Wilson, J.

    2009-11-26

    The new Alcator C-Mod LHCD system (LH2) is based on the concept of a four way splitter [1] which evenly splits the RF power among the four waveguides that compose one of the 16 columns of the LH grill. In this work several simulation tools have been used to study the LH2 coupling performance and the launched spectra when facing a plasma, numerically verifying the effectiveness of the four way splitter concept and further improving its design. The TOPLHA code has been used for modeling reflections at the antenna/plasma interface. TOPLHA results have then been coupled to the commercial code CST Microwave Studio to efficiently optimize the four way splitter geometry for several plasma scenarios. Subsequently, the COMSOL Multiphysics code has been used to self-consistently take into account the electromagnetic-thermal-structural interactions. This comprehensive and predictive analysis has proven to be very valuable for understanding the behavior of the system when facing the plasma and has profoundly influenced several design choices of the LH2. According to the simulations, the final design ensures even poloidal power splitting for a wide range of plasma parameters, which ultimately results in an improvement of the wave coupling and an increased maximum operating power.

  17. Integrating Field Measurements and Numerical Modeling to Investigate Gully Network Evolution

    NASA Astrophysics Data System (ADS)

    Rengers, F. K.; Tucker, G. E.

    2011-12-01

    With the advent of numerical modeling the exploration of landscape evolution has advanced from simple thought experiments to investigation of increasingly complex landforming processes. A common criticism of landscape evolution modeling, however, is the lack of model validation with actual field data. Here we present research that continues the advancement of landscape evolution theory by combining detailed field observations with numerical modeling. The focus of our investigation is gully networks on soft-rock strata, where rates of morphologic change are fast enough to measure on annual to decadal time scales. Our research focuses on a highly transient landscape on the high plains of eastern Colorado (40 miles east of Denver, CO) where convective thunderstorms drive ephemeral stream flow, resulting in incised gullies with vertical knickpoints. The site has yielded a comprehensive dataset of hydrology, topography, and geomorphic change. We are continuously monitoring several environmental parameters (including rainfall, overland flow, stream discharge, and soil moisture), and have explored the physical properties of the soil on the site through grain size analysis and infiltration measurements. In addition, time-lapse photography and repeat terrestrial lidar scanning make it possible to track knickpoint dynamics through time. The resulting dataset provides a case study for testing the ability of landscape evolution models to reproduce annual to decadal patterns of erosion and deposition. Knickpoint erosion is the largest contributor to landscape evolution and the controlling factor for gully migration rate. Average knickpoint retreat rates, based on historic aerial photographs and ongoing laser surveying, range between 0.1 and 2.5 m/yr. Knickpoint retreat appears to be driven by a combination of plunge-pool scour, large block failure, and grain-by-grain entrainment of sediment from the wall. Erosion is correlated with flash floods in the summer months. To test our

  18. Integrating Geochemical and Geodynamic Numerical Models of Mantle Evolution and Plate Tectonics

    NASA Astrophysics Data System (ADS)

    Tackley, P. J.; Xie, S.

    2001-12-01

    The thermal and chemical evolution of Earth's mantle and plates are inextricably coupled by the plate tectonic - mantle convective system. Convection causes chemical differentiation, recycling and mixing, while chemical variations affect the convection through physical properties such as density and viscosity which depend on composition. It is now possible to construct numerical mantle convection models that track the thermo-chemical evolution of major and minor elements, and which can be used to test prospective models and hypotheses regarding Earth's chemical and thermal evolution. Model thermal and chemical structures can be compared to results from seismic tomography, while geochemical signatures (e.g., trace element ratios) can be compared to geochemical observations. The presented, two-dimensional model combines a simplified 2-component major element model with tracking of the most important trace elements, using a tracer method. Melting is self-consistently treated using a solidus, with melt placed on the surface as crust. Partitioning of trace elements occurs between melt and residue. Decaying heat-producing elements and secular cooling of the mantle and core provide the driving heat sources. Pseudo-plastic yielding of the lithosphere gives a first-order approximation of plate tectonics, and also allows planets with a rigid lid or intermittent plate tectonics to be modeled simply by increasing the yield strength. Preliminary models with an initially homogeneous mantle show that regions with a HIMU-like signature can be generated by crustal recycling, and regions with high 3He/4He ratios can be generated by residuum recycling. Outgassing of Argon is within the observed range. Models with initially layered mantles will also be investigated. In future it will be important to include a more realistic bulk compositional model that allows continental crust as well as oceanic crust to form, and to extend the model to three dimensions since toroidal flow may alter

  19. Accurate orbit propagation with planetary close encounters

    NASA Astrophysics Data System (ADS)

    Baù, Giulio; Milani Comparetti, Andrea; Guerra, Francesca

    2015-08-01

    We tackle the problem of accurately propagating the motion of those small bodies that undergo close approaches with a planet. The literature is lacking on this topic and the reliability of the numerical results is not sufficiently discussed. The high-frequency components of the perturbation generated by a close encounter make the propagation particularly challenging both from the point of view of the dynamical stability of the formulation and the numerical stability of the integrator. In our approach a fixed step-size and order multistep integrator is combined with a regularized formulation of the perturbed two-body problem. When the propagated object enters the region of influence of a celestial body, the latter becomes the new primary body of attraction. Moreover, the formulation and the step-size will also be changed if necessary. We present: 1) the restarter procedure applied to the multistep integrator whenever the primary body is changed; 2) new analytical formulae for setting the step-size (given the order of the multistep, formulation and initial osculating orbit) in order to control the accumulation of the local truncation error and guarantee the numerical stability during the propagation; 3) a new definition of the region of influence in the phase space. We test the propagator with some real asteroids subject to the gravitational attraction of the planets, the Yarkovsky and relativistic perturbations. Our goal is to show that the proposed approach improves the performance of both the propagator implemented in the OrbFit software package (which is currently used by the NEODyS service) and of the propagator represented by a variable step-size and order multistep method combined with Cowell's formulation (i.e. direct integration of position and velocity in either the physical or a fictitious time).
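
    The propagator in the record combines a regularized formulation, primary-body switching and analytical step-size rules, none of which are attempted here. Only the basic ingredient of a fixed step-size, fixed-order multistep integrator is illustrated below, using a fourth-order Adams-Bashforth scheme (started by RK4) on the planar two-body problem in normalized units; the step size and orbit are illustrative choices.

      # Sketch: fixed-step 4th-order Adams-Bashforth propagation of the planar
      # two-body problem (mu = 1, circular orbit of radius 1), started with RK4.
      import numpy as np

      def accel(y):
          r = y[:2]
          return np.concatenate([y[2:], -r / np.linalg.norm(r) ** 3])

      def rk4_step(y, h):
          k1 = accel(y)
          k2 = accel(y + 0.5 * h * k1)
          k3 = accel(y + 0.5 * h * k2)
          k4 = accel(y + h * k3)
          return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

      h = 0.001
      ys = [np.array([1.0, 0.0, 0.0, 1.0])]      # circular orbit, period 2*pi
      for _ in range(3):                          # RK4 start-up for the multistep
          ys.append(rk4_step(ys[-1], h))
      fs = [accel(v) for v in ys]

      n_steps = int(round(2 * np.pi / h))
      for _ in range(n_steps - 3):                # Adams-Bashforth 4 main loop
          y_new = ys[-1] + (h / 24.0) * (55 * fs[-1] - 59 * fs[-2]
                                         + 37 * fs[-3] - 9 * fs[-4])
          ys.append(y_new)
          fs.append(accel(y_new))

      print("radius error after one orbit:", abs(np.linalg.norm(ys[-1][:2]) - 1.0))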

  20. Numerical investigations of free edge effects in integrally stiffened layered composite panels

    NASA Astrophysics Data System (ADS)

    Skrna-Jakl, I.; Rammerstorfer, F. G.

    A linear finite element analysis is conducted to examine the free edge stresses and the displacement behavior of an integrally stiffened layered composite panel loaded under uniform inplane tension. Symmetric (+Phi, -Phi, 0, -Phi, +Phi) graphite-epoxy laminates with various fiber orientations in the off-axis plies are considered. The quadratic stress criterion, the Tsai-Wu criterion and the Mises equivalent stresses are used to determine a risk parameter for onset of delamination, first ply failure and matrix cracking in the neat resin. The results of the analysis show that the interlaminar stresses at the +Phi/-Phi and -Phi/0 interfaces increase rapidly in the skin-stringer transition. This behavior is observed at the free edge as well as at some distance from it. The magnitude of the interlaminar stresses in the skin-stringer transition is strongly influenced by the fiber orientations of the off-axis plies. In addition, the overall displacements depend on the magnitude of the off-axis ply angle. It is found that for Phi less than 30 deg the deformations of the stiffener section are dominated by bending, whereas for Phi in the range of 45 to 75 deg the deformations are dominated by torsion. The failure analysis shows that ply and matrix failure tend to occur prior to delamination for the considered configurations.

  1. Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.; Braithwaite, David W.

    2016-01-01

    In this review, we attempt to integrate two crucial aspects of numerical development: learning the magnitudes of individual numbers and learning arithmetic. Numerical magnitude development involves gaining increasingly precise knowledge of increasing ranges and types of numbers: from non-symbolic to small symbolic numbers, from smaller to larger…

  2. Magnitude Knowledge: The Common Core of Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.

    2016-01-01

    The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: (1) representing increasingly precisely the magnitudes of non-symbolic…

  3. Magnitude Knowledge: The Common Core of Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.

    2016-01-01

    The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: 1) representing increasingly precisely the magnitudes of non-symbolic…

  4. Numerical and Experimental Investigation of Natural Convection in Open-Ended Channels with Application to Building Integrated Photovoltaic (BIPV) Systems

    NASA Astrophysics Data System (ADS)

    Timchenko, V.; Tkachenko, O. A.; Giroux-Julien, S.; Ménézo, C.

    2015-05-01

    Numerical and experimental investigations of the flow and heat transfer in an open-ended channel formed by a double-skin façade have been undertaken in order to improve understanding of the phenomena and to apply it to passive cooling of building integrated photovoltaic systems. Both uniform and non-uniform heating configurations, in which heat sources alternated with unheated zones on both skins, were studied. Different periodic and asymmetric heating modes were considered for the same wall-distance-to-wall-height aspect ratio of 1/15, heated/unheated zone periodicities of 1/15 and 4/15, and a heat input of 220 W/m2. In the computational study, a three-dimensional transient LES simulation was carried out. It is shown that, in comparison to the uniformly heated configuration, the non-uniformly heated configuration enhances both convective heat transfer and the chimney effect.

  5. An integrated numerical framework for water quality modelling in cold-region rivers: A case of the lower Athabasca River.

    PubMed

    Shakibaeinia, Ahmad; Kashyap, Shalini; Dibike, Yonas B; Prowse, Terry D

    2016-11-01

    There is a great deal of interest in determining the state and variations of water quality parameters in the lower Athabasca River (LAR) ecosystem, northern Alberta, Canada, due to industrial developments in the region. In this cold-region river, the annual cycle of ice cover formation and breakup plays a key role in water quality transformation and transport processes. An integrated deterministic numerical modelling framework is developed and applied for long-term and detailed simulation of the state and variation (spatial and temporal) of major water quality constituents in both open-water and ice-covered conditions in the LAR. The framework is based on 1D and 2D hydrodynamic and water quality models externally coupled with 1D river ice process models to account for the cold season effects. The models are calibrated/validated using available measured data and applied for simulation of dissolved oxygen (DO) and nutrients (i.e., nitrogen and phosphorus). The results show the effect of winter ice cover on reducing the DO concentration, and a fluctuating temporal trend for DO and nutrients during summer periods, with substantial differences in concentration between the main channel and flood plains. This numerical framework can be the basis for future water quality scenario-based studies in the LAR.

  6. Iterative method for the numerical solution of a system of integral equations for the heat conduction initial boundary value problem

    NASA Astrophysics Data System (ADS)

    Svetushkov, N. N.

    2016-11-01

    The paper deals with a numerical algorithm that reduces the overall system of integral equations describing the heat transfer process in a geometrically complex region (both two-dimensional and three-dimensional) to the iterative solution of a system of independent one-dimensional integral equations. This approach has been called the "string method" and has been used to solve a number of applications, including the problem of the detonation wave front for the calculation of heat loads in pulse detonation engines. In this approach the "strings" are a set of bounded segments parallel to the coordinate axes into which the whole solution region is divided (similar to the way the strings are arranged in a tennis racket). Unlike other grid methods, in which the solution is often found from the values of the desired function in a region around a specific central point, here at each iteration step the solution is determined along the entire length of a one-dimensional "string" connecting two end points with prescribed values, and the temperature distribution along all the strings is determined in the first step of the iterative procedure.

  7. Beyond transition state theory: accurate description of nuclear quantum effects on the rate and equilibrium constants of chemical reactions using Feynman path integrals.

    PubMed

    Vanícek, Jirí

    2011-01-01

    Nuclear tunneling and other nuclear quantum effects have been shown to play a significant role in molecules as large as enzymes even at physiological temperatures. I discuss how these quantum phenomena can be accounted for rigorously using Feynman path integrals in calculations of the equilibrium and kinetic isotope effects as well as of the temperature dependence of the rate constant. Because these calculations are extremely computationally demanding, special attention is devoted to increasing the computational efficiency by orders of magnitude by employing efficient path integral estimators.

  8. FeynDyn: A MATLAB program for fast numerical Feynman integral calculations for open quantum system dynamics on GPUs

    NASA Astrophysics Data System (ADS)

    Dattani, Nikesh S.

    2013-12-01

    Programming language: MATLAB R2012a. Computer: See “Operating system”. Operating system: Any operating system that can run MATLAB R2007a or above. Classification: 4.4. Nature of problem: Calculating the dynamics of the reduced density operator of an open quantum system. Solution method: Numerical Feynman integral. Running time: Depends on the input parameters. See the main text for examples.

  9. Hydro-geophysical observations integration in numerical model: case study in Mediterranean karstic unsaturated zone (Larzac, france)

    NASA Astrophysics Data System (ADS)

    Champollion, Cédric; Fores, Benjamin; Le Moigne, Nicolas; Chéry, Jean

    2016-04-01

    Karstic hydro-systems are highly non-linear and heterogeneous, yet they are one of the main water resources in the Mediterranean area. Neither local measurements in boreholes nor analysis at the spring can take into account the variability of the water storage. In recent years, ground-based geophysical measurements (such as gravity, electrical resistivity or seismological data) have allowed water storage in heterogeneous hydrosystems to be followed at an intermediate scale between boreholes and basin. Beyond classical rigorous monitoring, the integration of geophysical data into hydrological numerical models is needed for both process interpretation and quantification. Some years ago, a karstic geophysical observatory (GEK: Géodésie de l'Environnement Karstique, OSU OREME, SNO H+) was set up in the Mediterranean area in the south of France. The observatory is set in more than 250 m of karstified dolomite, with an unsaturated zone of ~150 m thickness. At the observatory, water level in boreholes, evapotranspiration and rainfall are classical hydro-meteorological observations complemented by continuous gravity, resistivity and seismological measurements. The main objective of the study is the modelling of the whole observation dataset with an explicit one-dimensional unsaturated numerical model. The Hydrus software is used for the explicit modelling of water storage and transfer and links the different observations (geophysics, water level, evapotranspiration) with the water saturation. Unknown hydrological parameters (permeability, porosity) are retrieved from stochastic inversions. The scales of investigation of the different observations are discussed thanks to the modelling results. A sensitivity study of the measurements against the model is performed and key hydro-geological processes of the site are presented.

  10. Integrating a Numerical Taxonomic Method and Molecular Phylogeny for Species Delimitation of Melampsora Species (Melampsoraceae, Pucciniales) on Willows in China.

    PubMed

    Zhao, Peng; Wang, Qing-Hong; Tian, Cheng-Ming; Kakishima, Makoto

    2015-01-01

    The species in genus Melampsora are the causal agents of leaf rust diseases on willows in natural habitats and plantations. However, the classification and recognition of species diversity are challenging because morphological characteristics are scant and morphological variation in Melampsora on willows has not been thoroughly evaluated. Thus, the taxonomy of Melampsora species on willows remains confused, especially in China where 31 species were reported based on either European or Japanese taxonomic systems. To clarify the species boundaries of Melampsora species on willows in China, we tested two approaches for species delimitation inferred from morphological and molecular variations. Morphological species boundaries were determined based on numerical taxonomic analyses of morphological characteristics in the uredinial and telial stages by cluster analysis and one-way analysis of variance. Phylogenetic species boundaries were delineated based on the generalized mixed Yule-coalescent (GMYC) model analysis of the sequences of the internal transcribed spacer (ITS1 and ITS2) regions including the 5.8S and D1/D2 regions of the large nuclear subunit of the ribosomal RNA gene. Numerical taxonomic analyses of 14 morphological characteristics recognized in the uredinial-telial stages revealed 22 morphological species, whereas the GMYC results recovered 29 phylogenetic species. In total, 17 morphological species were in concordance with the phylogenetic species and 5 morphological species were in concordance with 12 phylogenetic species. Both the morphological and molecular data supported 14 morphological characteristics, including 5 newly recognized characteristics and 9 traditionally emphasized characteristics, as effective for the differentiation of Melampsora species on willows in China. Based on the concordance and discordance of the two species delimitation approaches, we concluded that integrative taxonomy by using both morphological and molecular variations was

  11. Integrating a Numerical Taxonomic Method and Molecular Phylogeny for Species Delimitation of Melampsora Species (Melampsoraceae, Pucciniales) on Willows in China

    PubMed Central

    Zhao, Peng; Wang, Qing-Hong; Tian, Cheng-Ming; Kakishima, Makoto

    2015-01-01

    The species in genus Melampsora are the causal agents of leaf rust diseases on willows in natural habitats and plantations. However, the classification and recognition of species diversity are challenging because morphological characteristics are scant and morphological variation in Melampsora on willows has not been thoroughly evaluated. Thus, the taxonomy of Melampsora species on willows remains confused, especially in China where 31 species were reported based on either European or Japanese taxonomic systems. To clarify the species boundaries of Melampsora species on willows in China, we tested two approaches for species delimitation inferred from morphological and molecular variations. Morphological species boundaries were determined based on numerical taxonomic analyses of morphological characteristics in the uredinial and telial stages by cluster analysis and one-way analysis of variance. Phylogenetic species boundaries were delineated based on the generalized mixed Yule-coalescent (GMYC) model analysis of the sequences of the internal transcribed spacer (ITS1 and ITS2) regions including the 5.8S and D1/D2 regions of the large nuclear subunit of the ribosomal RNA gene. Numerical taxonomic analyses of 14 morphological characteristics recognized in the uredinial-telial stages revealed 22 morphological species, whereas the GMYC results recovered 29 phylogenetic species. In total, 17 morphological species were in concordance with the phylogenetic species and 5 morphological species were in concordance with 12 phylogenetic species. Both the morphological and molecular data supported 14 morphological characteristics, including 5 newly recognized characteristics and 9 traditionally emphasized characteristics, as effective for the differentiation of Melampsora species on willows in China. Based on the concordance and discordance of the two species delimitation approaches, we concluded that integrative taxonomy by using both morphological and molecular variations was

  12. Fully Coriolis-coupled quantum studies of the H + O2 (upsilon i = 0-2, j i = 0,1) --> OH + O reaction on an accurate potential energy surface: integral cross sections and rate constants.

    PubMed

    Lin, Shi Ying; Sun, Zhigang; Guo, Hua; Zhang, Dong Hui; Honvault, Pascal; Xie, Daiqian; Lee, Soo-Y

    2008-01-31

    We present accurate quantum calculations of the integral cross section and rate constant for the H + O2 --> OH + O combustion reaction on a recently developed ab initio potential energy surface using parallelized time-dependent and Chebyshev wavepacket methods. Partial wave contributions up to J = 70 were computed with full Coriolis coupling, which enabled us to obtain the initial state-specified integral cross sections up to 2.0 eV of the collision energy and thermal rate constants up to 3000 K. The integral cross sections show a large reaction threshold due to the quantum endothermicity of the reaction, and they monotonically increase with the collision energy. As a result, the temperature dependence of the rate constant is of the Arrhenius type. In addition, it was found that reactivity is enhanced by reactant vibrational excitation. The calculated thermal rate constant shows a significant improvement over that obtained on the DMBE IV potential, but it still underestimates the experimental consensus.

  13. Accurate determination of pyridine-poly(amidoamine) dendrimer absolute binding constants with the OPLS-AA force field and direct integration of radial distribution functions.

    PubMed

    Peng, Yong; Kaminski, George A

    2005-08-11

    The OPLS-AA force field and direct integration of intermolecular radial distribution functions (RDF) were employed to calculate absolute binding constants of pyridine molecules to amino group (NH2) and amide group hydrogen atoms in zeroth- and first-generation poly(amidoamine) dendrimers in chloroform. The average errors in the absolute and relative association constants, as predicted with the calculations, are 14.1% and 10.8%, respectively, which translate into ca. 0.08 and 0.06 kcal/mol errors in the absolute and relative binding free energies. We believe that this level of accuracy proves the applicability of the OPLS-AA force field, in combination with the direct RDF integration, to reproducing and predicting absolute intermolecular association constants of low magnitudes (ca. 0.2-2.0 range).
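
    As a rough illustration of what "direct integration of an RDF" can look like in practice (the specific convention and normalization used in the paper above are not reproduced here), one common recipe integrates 4*pi*g(r)*r^2 over the bound region and converts the resulting volume to liters per mole. The cutoff radius, the synthetic g(r) and the unit choices below are all assumptions made for the sketch.

```python
import numpy as np

N_A = 6.02214076e23          # Avogadro's number, mol^-1
NM3_TO_LITERS = 1.0e-24      # 1 nm^3 = 1e-24 L

def association_constant_from_rdf(r_nm, g, r_cut_nm):
    """Crude association constant (M^-1) from a site-ligand RDF.

    Integrates 4*pi*g(r)*r^2 over the bound region r < r_cut (trapezoidal rule)
    and converts the resulting volume to L/mol.  This is only one possible
    convention; the cutoff choice and normalization matter a lot in practice.
    """
    mask = r_nm <= r_cut_nm
    r, gr = r_nm[mask], g[mask]
    integrand = 4.0 * np.pi * gr * r ** 2
    bound_volume_nm3 = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
    return N_A * bound_volume_nm3 * NM3_TO_LITERS

# Synthetic RDF with a single association peak near 0.3 nm (illustrative only).
r = np.linspace(0.01, 1.5, 600)
g = 1.0 + 4.0 * np.exp(-((r - 0.30) / 0.05) ** 2)

print(f"K ~ {association_constant_from_rdf(r, g, r_cut_nm=0.5):.2f} M^-1")
```

    With these invented numbers the estimate comes out below 1 M^-1, i.e., in the low-magnitude regime discussed above; real applications would of course use the simulated RDF and the paper's own normalization.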

  14. Accurate Determination of Pyridine -- Poly (Amidoamine) Dendrimer Absolute Binding Constants with the OPLS-AA Force Field and Direct Integration of Radial Distribution Functions

    NASA Astrophysics Data System (ADS)

    Peng, Yong; Kaminski, George

    2006-03-01

    OPLS-AA force field and direct integration of intermolecular radial distribution functions (RDF) were employed to calculate absolute binding constants of pyridine molecules to NH2 and amide group hydrogen atoms in 0th and 1st generation poly(amidoamine) dendrimers in chloroform. The average errors in the absolute and relative association constants, as predicted with the calculations, are 14.1% and 10.8%, respectively, which translate into ca. 0.08 kcal/mol and 0.06 kcal/mol errors in the absolute and relative binding free energies. We believe that this level of accuracy proves the applicability of the OPLS-AA force field, in combination with the direct RDF integration, to reproducing and predicting absolute intermolecular association constants of low magnitudes (ca. 0.2 -- 2.0 range).

  15. An Integrated Tool to Study MHC Region: Accurate SNV Detection and HLA Genes Typing in Human MHC Region Using Targeted High-Throughput Sequencing

    PubMed Central

    Liu, Xiao; Xu, Yinyin; Liang, Dequan; Gao, Peng; Sun, Yepeng; Gifford, Benjamin; D’Ascenzo, Mark; Liu, Xiaomin; Tellier, Laurent C. A. M.; Yang, Fang; Tong, Xin; Chen, Dan; Zheng, Jing; Li, Weiyang; Richmond, Todd; Xu, Xun; Wang, Jun; Li, Yingrui

    2013-01-01

    The major histocompatibility complex (MHC) is one of the most variable and gene-dense regions of the human genome. Most studies of the MHC, and associated regions, focus on minor variants and HLA typing, many of which have been demonstrated to be associated with human disease susceptibility and metabolic pathways. However, the detection of variants in the MHC region, and diagnostic HLA typing, still lacks a coherent, standardized, cost effective and high coverage protocol of clinical quality and reliability. In this paper, we present such a method for the accurate detection of minor variants and HLA types in the human MHC region, using high-throughput, high-coverage sequencing of target regions. A probe set was designed to template upon the 8 annotated human MHC haplotypes, and to encompass the 5 megabases (Mb) of the extended MHC region. We deployed our probes upon three genetically diverse human samples for probe set evaluation, and sequencing data show that ∼97% of the MHC region, and over 99% of the genes in MHC region, are covered with sufficient depth and good evenness. 98% of genotypes called by this capture sequencing prove consistent with established HapMap genotypes. We have concurrently developed a one-step pipeline for calling any HLA type referenced in the IMGT/HLA database from this target capture sequencing data, which shows over 96% typing accuracy when deployed at 4-digit resolution. This cost-effective and highly accurate approach for variant detection and HLA typing in the MHC region may lend further insight into immune-mediated disease studies, and may find clinical utility in transplantation medicine research. This one-step pipeline is released for general evaluation and use by the scientific community. PMID:23894464

  16. Integrated analysis of numerous heterogeneous gene expression profiles for detecting robust disease-specific biomarkers and proposing drug targets.

    PubMed

    Amar, David; Hait, Tom; Izraeli, Shai; Shamir, Ron

    2015-09-18

    Genome-wide expression profiling has revolutionized biomedical research; vast amounts of expression data from numerous studies of many diseases are now available. Making the best use of this resource in order to better understand disease processes and treatment remains an open challenge. In particular, disease biomarkers detected in case-control studies suffer from low reliability and are only weakly reproducible. Here, we present a systematic integrative analysis methodology to overcome these shortcomings. We assembled and manually curated more than 14,000 expression profiles spanning 48 diseases and 18 expression platforms. We show that when studying a particular disease, judicious utilization of profiles from other diseases and information on disease hierarchy improves classification quality, avoids overoptimistic evaluation of that quality, and enhances disease-specific biomarker discovery. This approach yielded specific biomarkers for 24 of the analyzed diseases. We demonstrate how to combine these biomarkers with large-scale interaction, mutation and drug target data, forming a highly valuable disease summary that suggests novel directions in disease understanding and drug repurposing. Our analysis also estimates the number of samples required to reach a desired level of biomarker stability. This methodology can greatly improve the exploitation of the mountain of expression profiles for better disease analysis.

  17. Numerical Estimation of the Pseudo-Jahn-Teller Effect Using Nonadiabatic Coupling Integrals in Monocyclic and Bicyclic Conjugated Molecules.

    PubMed

    Koseki, Shiro; Toyota, Azumao; Muramatsu, Takashi; Asada, Toshio; Matsunaga, Nikita

    2016-12-29

    The pseudo-Jahn-Teller (pJT) effect in monocyclic and bicyclic conjugated molecules was investigated by using the state-averaged multiconfiguration self-consistent field (MCSCF) method, together with the 6-31G(d,p) basis sets. Following perturbation theory, the force constant along a normal mode Q is given by the sum of the classical force constant and the vibronic contribution (VC) resulting from the interaction of the ground state with excited states. The latter is given as the sum of individual contributions arising from vibronic interactions between the ground state and excited states. In the present work, each VC was calculated on the basis of nonadiabatic coupling (NAC) integrals. Furthermore, the classical force constant was estimated by taking advantage of the VC and the force constant obtained by vibrational analyses. For pentalene and heptalene, the present method seems to overestimate the VC in absolute value because of the small energy gap between the ground state and the lowest excited state. However, we are confident that the VC and the classical force constant for the other molecules are reasonable in magnitude in comparison with available literature information. Thus, it is proved that the present method is applicable and useful for the numerical estimation of the pJT effect.
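
    For orientation, the partition described above can be written in a generic second-order perturbation-theory form (a textbook expression, not the paper's own notation or working equations):

```latex
% Generic pseudo-Jahn-Teller partition of the force constant along a normal mode Q:
% the total force constant is the classical part K_0 plus a (negative) vibronic
% contribution from each excited state n.
K \;=\; K_0 \;+\; \sum_{n>0} K_v^{(n)},
\qquad
K_v^{(n)} \;=\; -\,\frac{2\,\bigl|\langle \Psi_0 \,\vert\, \partial \hat{H}/\partial Q \,\vert\, \Psi_n \rangle\bigr|^{2}}{E_n - E_0}.
```

    In this form each vibronic term is negative and grows as the gap E_n - E_0 shrinks, which is consistent with the overestimation the authors report for pentalene and heptalene, where that gap is small.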

  18. Integrating trans-abdominal ultrasonography with fecal steroid metabolite monitoring to accurately diagnose pregnancy and predict the timing of parturition in the red panda (Ailurus fulgens styani).

    PubMed

    Curry, Erin; Browning, Lissa J; Reinhart, Paul; Roth, Terri L

    2017-02-23

    Red pandas (Ailurus fulgens styani) exhibit a variable gestation length and may experience a pseudopregnancy indistinguishable from true pregnancy; therefore, it is not possible to deduce an individual's true pregnancy status and parturition date based on breeding dates or fecal progesterone excretion patterns alone. The goal of this study was to evaluate the use of transabdominal ultrasonography for pregnancy diagnosis in red pandas. Two to three females were monitored over 4 consecutive years, generating a total of seven profiles (four pregnancies, two pseudopregnancies, and one lost pregnancy). Fecal samples were collected and assayed for progesterone (P4) and estrogen conjugate (EC) to characterize patterns associated with breeding activity and parturition events. Animals were trained for voluntary transabdominal ultrasound and examinations were performed weekly. Breeding behaviors and fecal EC data suggest that the estrus cycle of this species is 11-12 days in length. Fecal steroid metabolite analyses also revealed that neither P4 nor EC concentrations were suitable indicators of pregnancy in this species; however, a secondary increase in P4 occurred 69-71 days prior to parturition in all pregnant females, presumably coinciding with embryo implantation. Using ultrasonography, embryos were detected as early as 62 days post-breeding/50 days pre-partum and serial measurements of uterine lumen diameter were documented throughout four pregnancies. Advances in reproductive diagnostics, such as the implementation of ultrasonography, may facilitate improved husbandry of pregnant females and allow for the accurate prediction of parturition.

  19. Accurate spectral color measurements

    NASA Astrophysics Data System (ADS)

    Hiltunen, Jouni; Jaeaeskelaeinen, Timo; Parkkinen, Jussi P. S.

    1999-08-01

    Surface color measurement is of importance in a very wide range of industrial applications including paint, paper, printing, photography, textiles, plastics and so on. For demanding color measurements a spectral approach is often needed. One can measure a color spectrum with a spectrophotometer using calibrated standard samples as a reference. Because it is impossible to define absolute color values of a sample, we always work with approximations. The human eye can perceive color differences as small as 0.5 CIELAB units and thus distinguish millions of colors. This 0.5 unit difference should be the goal for precise color measurements. This limit is not a problem if we only want to measure the color difference of two samples, but if we also want to know exact color coordinate values at the same time, accuracy problems arise. The values given by two instruments can be astonishingly different. The accuracy of the instrument used in color measurement may depend on various errors such as photometric non-linearity, wavelength error, integrating sphere dark level error, and integrating sphere error in both specular included and specular excluded modes. Thus correction formulas should be used to get more accurate results. Another question is how many channels, i.e. wavelengths, we use to measure a spectrum. It is obvious that the sampling interval should be short to get more precise results. Furthermore, the result we get is always a compromise between measuring time, conditions and cost. Sometimes we have to use portable systems, or the shape and size of the samples make it impossible to use sensitive equipment. In this study a small set of calibrated color tiles measured with the Perkin Elmer Lambda 18 and the Minolta CM-2002 spectrophotometers is compared. In the paper we explain the typical error sources of spectral color measurements and show what accuracy demands a good colorimeter should meet.
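
    For reference, the CIELAB difference quoted above is simply the Euclidean distance between two (L*, a*, b*) triplets (the 1976 dE*ab formula); the sample coordinates below are invented, not measured tile values.

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE 1976 color difference dE*ab between two (L*, a*, b*) triplets."""
    return math.dist(lab1, lab2)

tile_a = (52.0, 10.3, -4.2)   # illustrative readings of one tile from two instruments
tile_b = (52.3, 10.0, -4.6)
print(f"dE*ab = {delta_e_ab(tile_a, tile_b):.2f}")   # ~0.58, just above the 0.5-unit visibility goal
```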

  20. 3D Numerical Optimization Modelling of Ivancich landslides (Assisi, Italy) via integration of remote sensing and in situ observations.

    NASA Astrophysics Data System (ADS)

    Castaldo, Raffaele; De Novellis, Vincenzo; Lollino, Piernicola; Manunta, Michele; Tizzani, Pietro

    2015-04-01

    The new challenge that research into slope-instability phenomena must tackle is the effective integration and joint exploitation of remote sensing measurements together with in situ data and observations, in order to study and understand the sub-surface interactions, the triggering causes and, in general, the long-term behaviour of the investigated landslide phenomenon. In this context, a very promising approach is represented by Finite Element (FE) techniques, which allow us to consider the intrinsic complexity of mass-movement phenomena and to benefit effectively from multi-source observations and data. Accordingly, we develop a three-dimensional (3D) numerical model of the Ivancich (Assisi, Central Italy) instability phenomenon. In particular, we apply an inverse FE method based on a Genetic Algorithm optimization procedure, benefiting from advanced DInSAR measurements retrieved through the full-resolution Small Baseline Subset (SBAS) technique and from an inclinometric array. To this purpose we consider the SAR images acquired on descending orbits by the COSMO-SkyMed (CSK) X-band radar constellation from December 2009 to February 2012. Moreover, the optimization input dataset is completed by an array of eleven inclinometer measurements, from 1999 to 2006, distributed along the unstable mass. The landslide body is formed of debris material sliding on an arenaceous marl substratum, with a thin shear band, detected using borehole and inclinometric data, at depths ranging from 20 to 60 m. Specifically, we consider the active role of this shear band in controlling the landslide evolution process. A large field monitoring dataset of the landslide process, including at-depth piezometric and geological borehole observations, was available. The integration of these datasets allows us to develop a 3D structural geological model of the considered slope. To investigate the dynamic evolution of a landslide, various physical approaches can be considered

  1. Accurate derivative evaluation for any Grad–Shafranov solver

    SciTech Connect

    Ricketson, L.F.; Cerfon, A.J.; Rachh, M.; Freidberg, J.P.

    2016-01-15

    We present a numerical scheme that can be combined with any fixed boundary finite element based Poisson or Grad–Shafranov solver to compute the first and second partial derivatives of the solution to these equations with the same order of convergence as the solution itself. At the heart of our scheme is an efficient and accurate computation of the Dirichlet to Neumann map through the evaluation of a singular volume integral and the solution to a Fredholm integral equation of the second kind. Our numerical method is particularly useful for magnetic confinement fusion simulations, since it allows the evaluation of quantities such as the magnetic field, the parallel current density and the magnetic curvature with much higher accuracy than has been previously feasible on the affordable coarse grids that are usually implemented.
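
    The Fredholm second-kind step mentioned above can be discretized by a Nyström (quadrature) method. The toy kernel, right-hand side and trapezoidal weights below are illustrative and unrelated to the Grad-Shafranov setting; the exact solution of this toy problem is u(x) = 1.5 x, which makes the check easy.

```python
import numpy as np

def nystrom_fredholm2(kernel, f, a, b, n=200):
    """Solve u(x) - int_a^b K(x, y) u(y) dy = f(x) by Nystrom discretization:
    (I - W K) u = f on trapezoidal quadrature nodes."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    K = kernel(x[:, None], x[None, :])          # K[i, j] = K(x_i, x_j)
    A = np.eye(n) - K * w[None, :]
    return x, np.linalg.solve(A, f(x))

# Toy problem with known solution u(x) = 1.5 x:
#   u(x) - int_0^1 (x*y) u(y) dy = x
x, u = nystrom_fredholm2(lambda x, y: x * y, lambda x: x, 0.0, 1.0)
print(np.max(np.abs(u - 1.5 * x)))   # small O(h^2) discretization error
```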

  2. Simple numerical evaluation of modified Bessel functions K_ν(x) of fractional order and the integral ∫_x^∞ K_ν(η) dη

    NASA Astrophysics Data System (ADS)

    Kostroun, Vaclav O.

    1980-05-01

    Theoretical expressions for the angular and spectral distributions of synchrotron radiation involve modified Bessel functions of fractional order and the integral ∫_x^∞ K_ν(η) dη. Simple series expressions for these quantities, which can be evaluated numerically with hand-held programmable calculators, are presented.
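
    For a modern cross-check (not the calculator-oriented series of the paper), both quantities can be evaluated by straightforward numerical integration of standard integral representations and compared against library routines; SciPy is assumed to be available, and the order 5/3 is just the value that typically appears in synchrotron spectra.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def kv_from_integral(nu, x):
    """K_nu(x) via the integral representation
       K_nu(x) = int_0^inf exp(-x*cosh t) * cosh(nu*t) dt   (x > 0)."""
    val, _ = quad(lambda t: np.exp(-x * np.cosh(t)) * np.cosh(nu * t), 0.0, np.inf)
    return val

def kv_tail_integral(nu, x):
    """int_x^inf K_nu(eta) d(eta), the tail integral used in synchrotron spectra."""
    val, _ = quad(lambda eta: kv(nu, eta), x, np.inf)
    return val

nu, x = 5.0 / 3.0, 0.5
print(kv_from_integral(nu, x), kv(nu, x))   # the two evaluations should agree closely
print(kv_tail_integral(nu, x))
```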

  3. Numerical modeling of the 3D dynamics of ultrasound contrast agent microbubbles using the boundary integral method

    NASA Astrophysics Data System (ADS)

    Wang, Qianxi; Manmi, Kawa; Calvisi, Michael L.

    2015-02-01

    Ultrasound contrast agents (UCAs) are microbubbles stabilized with a shell typically of lipid, polymer, or protein and are emerging as a unique tool for noninvasive therapies ranging from gene delivery to tumor ablation. While various models have been developed to describe the spherical oscillations of contrast agents, the treatment of nonspherical behavior has received less attention. However, the nonspherical dynamics of contrast agents are thought to play an important role in therapeutic applications, for example, enhancing the uptake of therapeutic agents across cell membranes and tissue interfaces, and causing tissue ablation. In this paper, a model for nonspherical contrast agent dynamics based on the boundary integral method is described. The effects of the encapsulating shell are approximated by adapting Hoff's model for thin-shell, spherical contrast agents. A high-quality mesh of the bubble surface is maintained by implementing a hybrid approach of the Lagrangian method and elastic mesh technique. The numerical model agrees well with a modified Rayleigh-Plesset equation for encapsulated spherical bubbles. Numerical analyses of the dynamics of UCAs in an infinite liquid and near a rigid wall are performed in parameter regimes of clinical relevance. The oscillation amplitude and period decrease significantly due to the coating. A bubble jet forms when the amplitude of ultrasound is sufficiently large, as occurs for bubbles without a coating; however, the threshold amplitude required to incite jetting increases due to the coating. When a UCA is near a rigid boundary subject to acoustic forcing, the jet is directed towards the wall if the acoustic wave propagates perpendicular to the boundary. When the acoustic wave propagates parallel to the rigid boundary, the jet direction has components both along the wave direction and towards the boundary that depend mainly on the dimensionless standoff distance of the bubble from the boundary. In all cases, the jet
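
    The spherical benchmark mentioned above can be reproduced in spirit with a bare (uncoated) Rayleigh-Plesset model; Hoff's shell terms and the boundary-integral machinery are not reproduced here, and all physical parameters below are illustrative, not those of any specific contrast agent.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters: water-like liquid, ~2 micron air bubble, 1 MHz drive (SI units)
rho, mu, sigma = 998.0, 1.0e-3, 0.072      # density, viscosity, surface tension
p0, kappa = 101325.0, 1.4                  # ambient pressure, polytropic exponent
R0 = 2.0e-6                                # equilibrium radius (m)
pg0 = p0 + 2.0 * sigma / R0                # equilibrium gas pressure
pa, f = 50.0e3, 1.0e6                      # acoustic amplitude (Pa) and frequency (Hz)

def rayleigh_plesset(t, y):
    """Uncoated Rayleigh-Plesset equation written as a first-order system."""
    R, Rdot = y
    p_gas = pg0 * (R0 / R) ** (3.0 * kappa)
    p_inf = p0 + pa * np.sin(2.0 * np.pi * f * t)
    Rddot = (p_gas - p_inf - 2.0 * sigma / R - 4.0 * mu * Rdot / R) / (rho * R) \
            - 1.5 * Rdot ** 2 / R
    return [Rdot, Rddot]

sol = solve_ivp(rayleigh_plesset, (0.0, 5.0e-6), [R0, 0.0],
                method="RK45", rtol=1e-8, atol=1e-12, max_step=1e-9)
print("max radius (um):", 1e6 * sol.y[0].max())
```

    A shelled agent would add Hoff-type viscoelastic terms to the pressure balance, which is precisely the modification the abstract describes; the uncoated case above is only the baseline against which the coating's damping is usually compared.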

  4. Numerical analysis of wellbore integrity: results from a field study of a natural CO2 reservoir production well

    NASA Astrophysics Data System (ADS)

    Crow, W.; Gasda, S. E.; Williams, D. B.; Celia, M. A.; Carey, J. W.

    2008-12-01

    An important aspect of the risk associated with geological CO2 sequestration is the integrity of existing wellbores that penetrate geological layers targeted for CO2 injection. CO2 leakage may occur through multiple pathways along a wellbore, including through micro-fractures and micro-annuli within the "disturbed zone" surrounding the well casing. The effective permeability of this zone is a key parameter of wellbore integrity required for validation of numerical models. This parameter depends on a number of complex factors, including long-term attack by aggressive fluids, poor well completion and actions related to production of fluids through the wellbore. Recent studies have sought to replicate downhole conditions in the laboratory to identify the mechanisms and rates at which cement deterioration occurs. However, field tests are essential to understanding the in situ leakage properties of the millions of wells that exist in the mature sedimentary basins in North America. In this study, we present results from a field study of a 30-year-old production well from a natural CO2 reservoir. The wellbore was potentially exposed to a 96% CO2 fluid from the time of cement placement, and therefore cement degradation may be a significant factor leading to leakage pathways along this wellbore. A series of downhole tests was performed, including bond logs and extraction of sidewall cores. The cores were analyzed in the laboratory for mineralogical and hydrologic properties. A pressure test was conducted over an 11-ft section of well to determine the extent of hydraulic communication along the exterior of the well casing. Through analysis of this pressure test data, we are able to estimate the effective permeability of the disturbed zone along the exterior of the wellbore over this 11-ft section. We find the estimated range of effective permeability from the field test is consistent with laboratory analysis and bond log data. The cement interfaces with casing and/or formation are

  5. The impact of watershed management on coastal morphology: A case study using an integrated approach and numerical modeling

    NASA Astrophysics Data System (ADS)

    Samaras, Achilleas G.; Koutitas, Christopher G.

    2014-04-01

    Coastal morphology evolves as the combined result of both natural- and human- induced factors that cover a wide range of spatial and temporal scales of effect. Areas in the vicinity of natural stream mouths are of special interest, as the direct connection with the upstream watershed extends the search for drivers of morphological evolution from the coastal area to the inland as well. Although the impact of changes in watersheds on the coastal sediment budget is well established, references that study concurrently the two fields and the quantification of their connection are scarce. In the present work, the impact of land-use changes in a watershed on coastal erosion is studied for a selected site in North Greece. Applications are based on an integrated approach to quantify the impact of watershed management on coastal morphology through numerical modeling. The watershed model SWAT and a shoreline evolution model developed by the authors (PELNCON-M) are used, evaluating with the latter the performance of the three longshore sediment transport rate formulae included in the model formulation. Results document the impact of crop abandonment on coastal erosion (agricultural land decrease from 23.3% to 5.1% is accompanied by the retreat of ~ 35 m in the vicinity of the stream mouth) and show the effect of sediment transport formula selection on the evolution of coastal morphology. Analysis denotes the relative importance of the parameters involved in the dynamics of watershed-coast systems, and - through the detailed description of a case study - is deemed to provide useful insights for researchers and policy-makers involved in their study.

  6. An Automated High-Throughput Metabolic Stability Assay Using an Integrated High-Resolution Accurate Mass Method and Automated Data Analysis Software

    PubMed Central

    Shah, Pranav; Kerns, Edward; Nguyen, Dac-Trung; Obach, R. Scott; Wang, Amy Q.; Zakharov, Alexey; McKew, John; Simeonov, Anton; Hop, Cornelis E. C. A.

    2016-01-01

    Advancement of in silico tools would be enabled by the availability of data for metabolic reaction rates and intrinsic clearance (CLint) of a diverse compound structure data set by specific metabolic enzymes. Our goal is to measure CLint for a large set of compounds with each major human cytochrome P450 (P450) isozyme. To achieve our goal, it is of utmost importance to develop an automated, robust, sensitive, high-throughput metabolic stability assay that can efficiently handle a large volume of compound sets. The substrate depletion method [in vitro half-life (t1/2) method] was chosen to determine CLint. The assay (384-well format) consisted of three parts: 1) a robotic system for incubation and sample cleanup; 2) two different integrated, ultraperformance liquid chromatography/mass spectrometry (UPLC/MS) platforms to determine the percent remaining of parent compound, and 3) an automated data analysis system. The CYP3A4 assay was evaluated using two long t1/2 compounds, carbamazepine and antipyrine (t1/2 > 30 minutes); one moderate t1/2 compound, ketoconazole (10 < t1/2 < 30 minutes); and two short t1/2 compounds, loperamide and buspirone (t½ < 10 minutes). Interday and intraday precision and accuracy of the assay were within acceptable range (∼12%) for the linear range observed. Using this assay, CYP3A4 CLint and t1/2 values for more than 3000 compounds were measured. This high-throughput, automated, and robust assay allows for rapid metabolic stability screening of large compound sets and enables advanced computational modeling for individual human P450 isozymes. PMID:27417180
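
    The substrate-depletion arithmetic behind the in vitro half-life method is simple enough to show directly: fit ln(% remaining) against time, take t1/2 = ln 2 / k, and scale k by the incubation volume per amount of protein. The percent-remaining values, incubation volume and protein amount below are invented for illustration and are not data from the assay described above.

```python
import numpy as np

def clint_from_depletion(t_min, pct_remaining, incubation_uL, protein_mg):
    """In vitro half-life (substrate depletion) method.

    Fits ln(% remaining) vs time to get the depletion rate constant k (1/min),
    then t1/2 = ln2/k and CLint = k * (incubation volume / protein amount).
    """
    k = -np.polyfit(t_min, np.log(pct_remaining), 1)[0]   # 1/min
    t_half = np.log(2.0) / k                              # min
    clint = k * incubation_uL / protein_mg                # uL/min/mg protein
    return t_half, clint

# Invented depletion profile for one compound in a microsomal/CYP incubation
t = np.array([0.0, 5.0, 10.0, 20.0, 30.0])
remaining = np.array([100.0, 70.0, 52.0, 26.0, 13.0])
t_half, clint = clint_from_depletion(t, remaining, incubation_uL=400.0, protein_mg=0.1)
print(f"t1/2 = {t_half:.1f} min, CLint = {clint:.1f} uL/min/mg")
```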

  7. Experimental validation of numerical study on thermoelectric-based heating in an integrated centrifugal microfluidic platform for polymerase chain reaction amplification.

    PubMed

    Amasia, Mary; Kang, Seok-Won; Banerjee, Debjyoti; Madou, Marc

    2013-01-01

    A comprehensive study involving numerical analysis and experimental validation of temperature transients within a microchamber was performed for thermocycling operation in an integrated centrifugal microfluidic platform for polymerase chain reaction (PCR) amplification. Controlled heating and cooling of biological samples are essential processes in many sample preparation and detection steps for micro-total analysis systems. Specifically, the PCR process relies on highly controllable and uniform heating of nucleic acid samples for successful and efficient amplification. In these miniaturized systems, the heating process is often performed more rapidly, making the temperature control more difficult, and adding complexity to the integrated hardware system. To gain further insight into the complex temperature profiles within the PCR microchamber, numerical simulations using computational fluid dynamics and computational heat transfer were performed. The designed integrated centrifugal microfluidics platform utilizes thermoelectrics for ice-valving and thermocycling for PCR amplification. Embedded micro-thermocouples were used to record the static and dynamic thermal responses in the experiments. The data collected was subsequently used for computational validation of the numerical predictions for the system response during thermocycling, and these simulations were found to be in agreement with the experimental data to within ∼97%. When thermal contact resistance values were incorporated in the simulations, the numerical predictions were found to be in agreement with the experimental data to within ∼99.9%. This in-depth numerical modeling and experimental validation of a complex single-sided heating platform provide insights into hardware and system design for multi-layered polymer microfluidic systems. In addition, the biological capability along with the practical feasibility of the integrated system is demonstrated by successfully performing PCR amplification of

  8. Study on the properties of the Integrated Precipitable Water (IPW) maps derived by GPS, SAR interferometry and numerical forecasting models

    NASA Astrophysics Data System (ADS)

    Mateus, Pedro; Nico, Giovanni; Tomé, Ricardo; Catalão, João.; Miranda, Pedro

    2010-05-01

    The knowledge of the spatial distribution of relative changes in atmospheric Integrated Precipitable Water (IPW) density is important for climate studies and numerical weather forecasting. An increase (or decrease) of the IPW density affects the phase of electromagnetic waves. For this reason, this quantity can be measured by techniques such as GPS and space-borne SAR interferometry (InSAR). The aim of this work is to study the isotropic properties of the IPW maps obtained by GPS and InSAR measurements and derived by a Numerical Weather Forecasting Model. The existence of a power law in their phase spectrum is verified. The relationship between the interferometric phase delay and the topographic height of the observed area is also investigated. The Lisbon region, Portugal, was chosen as a study area. This region is monitored by a network of GPS permanent stations covering an area of about squared kilometers. The network consists of 12 GPS stations of which 4 belong to the Instituto Geográfico Português (IGP) and 8 to Instituto Geográfico do Exercito (IGEOE). All stations were installed between 1997 and the beginning of 2009. The GAMIT package was used to process GPS data and to estimate the total zenith delay with a temporal sampling of 15 minutes. A set of 25 SAR interferograms with a 35-day temporal baseline was processed using ASAR-ENVISAT data acquired over the Lisbon region during the period from 2003 to 2005 and from 2008 to 2009. These interferograms give an estimate of the variation of the global atmospheric delay. Terrain deformations related to known geological phenomena in the Lisbon area are negligible at this time scale of 35 days. Furthermore, two interferometric SAR images acquired by ERS-1/2 over the Lisbon region on 20/07/1995 and 21/07/1995, respectively, i.e. with a temporal baseline of just 1 day, were also processed. The Weather Research & Forecasting Model (WRF) was used to generate the three-dimensional fields of temperature

  9. Accurate thermoplasmonic simulation of metallic nanoparticles

    NASA Astrophysics Data System (ADS)

    Yu, Da-Miao; Liu, Yan-Nan; Tian, Fa-Lin; Pan, Xiao-Min; Sheng, Xin-Qing

    2017-01-01

    Thermoplasmonics leads to enhanced heat generation due to the localized surface plasmon resonances. The measurement of heat generation is fundamentally a complicated task, which necessitates the development of theoretical simulation techniques. In this paper, an efficient and accurate numerical scheme is proposed for applications with complex metallic nanostructures. Light absorption and temperature increase are, respectively, obtained by solving the volume integral equation (VIE) and the steady-state heat diffusion equation through the method of moments (MoM). Previously, methods based on surface integral equations (SIEs) were utilized to obtain light absorption. However, computing light absorption from the equivalent current is as expensive as O(N_s N_v), where N_s and N_v, respectively, denote the number of surface and volumetric unknowns. Our approach reduces the cost to O(N_v) by using the VIE. The accuracy, efficiency and capability of the proposed scheme are validated by multiple simulations. The simulations show that our proposed method is more efficient than the approach based on SIEs under comparable accuracy, especially when many incident fields are of interest. The simulations also indicate that the temperature profile can be tuned by several factors, such as the geometric configuration of the array, the beam direction, and the light wavelength.

  10. Integrated numerical modeling of a landslide early warning system in a context of adaptation to future climatic pressures

    NASA Astrophysics Data System (ADS)

    Khabarov, Nikolay; Huggel, Christian; Obersteiner, Michael; Ramírez, Juan Manuel

    2010-05-01

    Mountain regions are typically characterized by rugged terrain which is susceptible to different types of landslides during high-intensity precipitation. Landslides account for billions of dollars of damage and many casualties, and are expected to increase in frequency in the future due to a projected increase of precipitation intensity. Early warning systems (EWS) are thought to be a primary tool for related disaster risk reduction and climate change adaptation to extreme climatic events and hydro-meteorological hazards, including landslides. An EWS for hazards such as landslides consists of different components, including environmental monitoring instruments (e.g. rainfall or flow sensors), physical or empirical process models to support decision-making (warnings, evacuation), data and voice communication, organization and logistics-related procedures, and population response. Considering this broad range, EWS are highly complex systems, and it is therefore difficult to understand the effect of the different components and changing conditions on the overall performance, ultimately being expressed as human lives saved or structural damage reduced. In this contribution we present a further development of our approach to assess a landslide EWS in an integral way, both at the system and component level. We utilize a numerical model using 6-hour rainfall data as basic input. A threshold function based on a rainfall-intensity/duration relation was applied as a decision criterion for evacuation. Damage to infrastructure and human lives was defined as a linear function of landslide magnitude, with the magnitude modelled using a power function of landslide frequency. Correct evacuation was assessed with a 'true' reference rainfall dataset versus a dataset of artificially reduced quality imitating the observation system component. Performance of the EWS using these rainfall datasets was expressed in monetary terms (i.e. damage related to false and correct evacuation). We

  11. Numerical Evaluation of the "Dual-Kernel Counter-flow" Matric Convolution Integral that Arises in Discrete/Continuous (D/C) Control Theory

    NASA Technical Reports Server (NTRS)

    Nixon, Douglas D.

    2009-01-01

    Discrete/Continuous (D/C) control theory is a new generalized theory of discrete-time control that expands the concept of conventional (exact) discrete-time control to create a framework for design and implementation of discretetime control systems that include a continuous-time command function generator so that actuator commands need not be constant between control decisions, but can be more generally defined and implemented as functions that vary with time across sample period. Because the plant/control system construct contains two linear subsystems arranged in tandem, a novel dual-kernel counter-flow convolution integral appears in the formulation. As part of the D/C system design and implementation process, numerical evaluation of that integral over the sample period is required. Three fundamentally different evaluation methods and associated algorithms are derived for the constant-coefficient case. Numerical results are matched against three available examples that have closed-form solutions.

  12. Primordial Black Holes from First Principles (numerics)

    NASA Astrophysics Data System (ADS)

    Bloomfield, Jolyon; Moss, Zander; Lam, Casey; Russell, Megan; Face, Stephen; Guth, Alan

    2017-01-01

    In order to compute accurate number densities and mass spectra for primordial black holes from an inflationary power spectrum, one needs to perform Monte Carlo integration over field configurations. This requires a method of determining whether a black hole will form, and if so, what its mass will be, for each sampled configuration. In order for such an integral to converge within any reasonable time, this requires a highly efficient process for making these determinations. We present a numerical pipeline that is capable of making reasonably accurate predictions for black holes and masses at the rate of a few seconds per sample (including the sampling process), utilizing a fully-nonlinear numerical relativity code in 1+1 dimensions.
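
    Schematically (and far removed from the actual numerical-relativity pipeline), the Monte Carlo step amounts to averaging a per-sample formation decision over sampled configurations and attaching a statistical error; the toy sampler and threshold criterion below are stand-ins for the real configuration statistics and collapse determination.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_configuration():
    """Stand-in for drawing a field configuration from the inflationary statistics."""
    return rng.normal(0.0, 1.0)

def forms_black_hole(amplitude, threshold=2.5):
    """Stand-in for the expensive collapse determination (here a simple threshold)."""
    return abs(amplitude) > threshold

n_samples = 200_000
hits = sum(forms_black_hole(sample_configuration()) for _ in range(n_samples))
p = hits / n_samples
stderr = np.sqrt(p * (1.0 - p) / n_samples)    # binomial standard error of the fraction
print(f"formation fraction = {p:.5f} +/- {stderr:.5f}")
```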

  13. Numerical reconstruction of optical surfaces.

    PubMed

    Nam, Jayoung; Rubinstein, Jacob

    2008-07-01

    There are several problems in optics that involve the reconstruction of surfaces such as wavefronts, reflectors, and lenses. The reconstruction problem often leads to a system of first-order differential equations for the unknown surface. We compare several numerical methods for integrating differential equations of this kind. One class of methods involves a direct integration. It is shown that such a technique often fails in practice. We thus consider one method that provides an approximate direct integration; we show that it is always converging and that it provides a stable, accurate solution even in the presence of measurement noise. In addition, we consider a number of methods that are based on converting the original equation into a minimization problem.

  14. Numerical examination of the extended phase-space volume-preserving integrator by the Nosé-Hoover molecular dynamics equations.

    PubMed

    Queyroy, Séverine; Nakamura, Haruki; Fukuda, Ikuo

    2009-09-01

    This article illustrates practical applications to molecular dynamics simulations of the recently developed numerical integrators [Phys Rev E 2006, 73, 026703] for ordinary differential equations. This method consists of extending any set of ordinary differential equations in order to define a time-invariant function, and then using the techniques of divergence-free solvable decomposition and symmetric composition to obtain volume-preserving integrators in the extended phase space. Here, we have developed the technique further by constructing a multiple extended-variable formalism to make it easier to handle in actual simulations, and by constructing higher-order integrators to obtain higher accuracy. Using these integrators, we perform constant-temperature molecular dynamics simulations of liquid water, liquid argon and a peptide in a liquid water droplet. The temperature control is obtained through an extended version of the Nosé-Hoover equations. Analyzing the effects of the simulation conditions, including time step length, initial values, boundary conditions, and equation parameters, we investigate local accuracy, global accuracy, computational cost, and sensitivity along with the sampling validity. According to the results of these simulations, we show that the volume-preserving integrators developed by the current method are more effective than traditional integrators that lack the volume-preserving property.

  15. Numerical comparison between DHF and RHF methods

    NASA Astrophysics Data System (ADS)

    Kobus, J.; Jaskolski, W.

    1987-10-01

    A detailed numerical comparison of the Dirac-Hartree-Fock method and the relativistic Hartree-Fock (RHF) method of Cowan and Griffith (1976) is presented, considering the total energy, the orbital energies, and the one-electron and two-electron integrals. The RHF method is found to yield accurate values of the relativistic transition energies. Using accurate values of the correlation corrections for p-electron and d-electron systems, the usefulness of the RHF method in obtaining relativistic corrections to the differential term energies is demonstrated. Advantages of the method for positron scattering on heavy systems are also pointed out.

  16. Spectral-collocation variational integrators

    NASA Astrophysics Data System (ADS)

    Li, Yiqun; Wu, Boying; Leok, Melvin

    2017-03-01

    Spectral methods are a popular choice for constructing numerical approximations for smooth problems, as they can achieve geometric rates of convergence and have a relatively small memory footprint. In this paper, we introduce a general framework to convert a spectral-collocation method into a shooting-based variational integrator for Hamiltonian systems. We also compare the proposed spectral-collocation variational integrators to spectral-collocation methods and Galerkin spectral variational integrators in terms of their ability to reproduce accurate trajectories in configuration and phase space, their ability to conserve momentum and energy, as well as the relative computational efficiency of these methods when applied to some classical Hamiltonian systems. In particular, we note that spectrally-accurate variational integrators, such as the Galerkin spectral variational integrators and the spectral-collocation variational integrators, combine the computational efficiency of spectral methods together with the geometric structure-preserving and long-time structural stability properties of symplectic integrators.

  17. An integrated approach to flood hazard assessment on alluvial fans using numerical modeling, field mapping, and remote sensing

    USGS Publications Warehouse

    Pelletier, J.D.; Mayer, L.; Pearthree, P.A.; House, P.K.; Demsey, K.A.; Klawon, J.K.; Vincent, K.R.

    2005-01-01

    Millions of people in the western United States live near the dynamic, distributary channel networks of alluvial fans where flood behavior is complex and poorly constrained. Here we test a new comprehensive approach to alluvial-fan flood hazard assessment that uses four complementary methods: two-dimensional raster-based hydraulic modeling, satellite-image change detection, field-based mapping of recent flood inundation, and surficial geologic mapping. Each of these methods provides spatial detail lacking in the standard method and each provides critical information for a comprehensive assessment. Our numerical model simultaneously solves the continuity equation and Manning's equation (Chow, 1959) using an implicit numerical method. It provides a robust numerical tool for predicting flood flows using the large, high-resolution Digital Elevation Models (DEMs) necessary to resolve the numerous small channels on the typical alluvial fan. Inundation extents and flow depths of historic floods can be reconstructed with the numerical model and validated against field- and satellite-based flood maps. A probabilistic flood hazard map can also be constructed by modeling multiple flood events with a range of specified discharges. This map can be used in conjunction with a surficial geologic map to further refine floodplain delineation on fans. To test the accuracy of the numerical model, we compared model predictions of flood inundation and flow depths against field- and satellite-based flood maps for two recent extreme events on the southern Tortolita and Harquahala piedmonts in Arizona. Model predictions match the field- and satellite-based maps closely. Probabilistic flood hazard maps based on the 10 yr, 100 yr, and maximum floods were also constructed for the study areas using stream gage records and paleoflood deposits. The resulting maps predict spatially complex flood hazards that strongly reflect small-scale topography and are consistent with surficial geology. In
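
    The hydraulic core of such a model couples mass continuity with Manning's relation between depth, slope, roughness and velocity. The toy wide-channel numbers below are illustrative only, and the shallow-flow approximation of taking the hydraulic radius equal to the flow depth is an assumption, not part of the cited model.

```python
def manning_velocity(depth_m, slope, n_manning):
    """Manning's equation (SI units), with hydraulic radius ~ flow depth,
    a common approximation for wide, shallow flow on fan surfaces."""
    return (1.0 / n_manning) * depth_m ** (2.0 / 3.0) * slope ** 0.5

# Illustrative cell on a fan surface
depth, slope, n = 0.30, 0.02, 0.035
v = manning_velocity(depth, slope, n)
print(f"velocity = {v:.2f} m/s, unit discharge = {v * depth:.3f} m^2/s")
```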

  18. Data, models, and views: towards integration of diverse numerical model components and data sets for scientific and public dissemination

    NASA Astrophysics Data System (ADS)

    Hofmeister, Richard; Lemmen, Carsten; Nasermoaddeli, Hassan; Klingbeil, Knut; Wirtz, Kai

    2015-04-01

    Data and models for describing coastal systems span a diversity of disciplines, communities, ecosystems, regions and techniques. Previous attempts at unifying data exchange, coupling interfaces, or metadata information have not been successful. We introduce the new Modular System for Shelves and Coasts (MOSSCO, http://www.mossco.de), a novel coupling framework that enables the integration of a diverse array of models and data from different disciplines relating to coastal research. In the MOSSCO concept, the integrating framework imposes very few restrictions on contributed data or models; in fact, there is no distinction made between data and models. The few requirements are: (1) coupleability in principle, i.e. access to I/O and timing information in submodels, which has recently been referred to as the Basic Model Interface (BMI); (2) open source/open data access and licencing; and (3) communication of metadata, such as spatiotemporal information, naming conventions, and physical units. These requirements suffice to integrate different models and data sets into the MOSSCO infrastructure and subsequently to build a modular integrated modeling tool that can span a diversity of processes and domains. We demonstrate how diverse coastal system constituents were integrated into this modular framework and how we deal with the diverging development of constituent data sets and models at external institutions. Finally, we show results from simulations with the fully coupled system using OGC WebServices in the WiMo geoportal (http://kofserver3.hzg.de/wimo), from where stakeholders can view the simulation results for further dissemination.

  19. Numerical integration of gravitational field for general three-dimensional objects and its application to gravitational study of grand design spiral arm structure

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2016-12-01

    We present a method to integrate the gravitational field for general three-dimensional objects. By adopting the spherical polar coordinates centred at the evaluation point as the integration variables, we numerically compute the volume integral representation of the gravitational potential and of the acceleration vector. The variable transformation completely removes the algebraic singularities of the original integrals. The comparison with exact solutions reveals around 15-digit accuracy of the new method. Meanwhile, six-digit accuracy of the integrated gravitational field is achieved with around 10^6 evaluations of the integrand per evaluation point, which costs at most a few seconds on a PC with an Intel Core i7-4600U CPU running at a 2.10 GHz clock. By using the new method, we show the gravitational field of a grand design spiral arm structure as an example. The computed gravitational field shows not only spiral-shaped details but also a global feature composed of a thick oblate spheroid and a thin disc. The developed method is directly applicable to the electromagnetic field computation by means of Coulomb's law, the Biot-Savart law, and their retarded extensions. Sample FORTRAN 90 programs and test results are electronically available.
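
    The key trick described above, centring spherical coordinates on the evaluation point so that the 1/|x - x'| singularity cancels against the r^2 volume element, can be checked on a uniform sphere, for which the exterior potential is -GM/d. The crude midpoint quadrature and test parameters below are only a convergence illustration, not the authors' scheme or their quadrature rules.

```python
import numpy as np

G = 6.674e-11           # gravitational constant (SI)
rho0, a = 3000.0, 1.0   # uniform test sphere: density (kg/m^3) and radius (m)

def potential(x, nr=200, nth=64, nph=128):
    """Phi(x) = -G * int rho(x + r*nhat) * r * sin(theta) dr dtheta dphi.

    Spherical coordinates are centred on the evaluation point x, so the
    1/|x - x'| singularity cancels against the r^2 volume element; a plain
    midpoint rule in (r, theta, phi) is then enough for a rough check.
    """
    d = np.linalg.norm(x)
    r = ((np.arange(nr) + 0.5) * (d + a) / nr)[:, None, None]
    th = ((np.arange(nth) + 0.5) * np.pi / nth)[None, :, None]
    ph = ((np.arange(nph) + 0.5) * 2.0 * np.pi / nph)[None, None, :]
    dv = ((d + a) / nr) * (np.pi / nth) * (2.0 * np.pi / nph)
    # Source point x' = x + r * nhat(theta, phi)
    xs = x[0] + r * np.sin(th) * np.cos(ph)
    ys = x[1] + r * np.sin(th) * np.sin(ph)
    zs = x[2] + r * np.cos(th)
    rho = np.where(xs**2 + ys**2 + zs**2 <= a**2, rho0, 0.0)
    return -G * np.sum(rho * r * np.sin(th)) * dv

x_eval = np.array([0.0, 0.0, 2.0])              # exterior point at d = 2a
M = rho0 * 4.0 / 3.0 * np.pi * a**3
print(potential(x_eval), -G * M / 2.0)          # should agree to roughly a percent or better
```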

  20. Numerical computation of complex multi-body Navier-Stokes flows with applications for the integrated Space Shuttle launch vehicle

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    1993-01-01

    An enhanced grid system for the Space Shuttle Orbiter was built by integrating CAD definitions from several sources and then generating the surface and volume grids. The new grid system contains geometric components not modeled previously plus significant enhancements on geometry that has been modeled in the old grid system. The new orbiter grids were then integrated with new grids for the rest of the launch vehicle. Enhancements were made to the hyperbolic grid generator HYPGEN and new tools for grid projection, manipulation, and modification, Cartesian box grid and far field grid generation and post-processing of flow solver data were developed.

  1. A numerical method for integrating the kinetic equations of droplet spectra evolution by condensation/evaporation and by coalescence/breakup processes

    NASA Technical Reports Server (NTRS)

    Emukashvily, I. M.

    1982-01-01

    An extension of the method of moments is developed for the numerical integration of the kinetic equations of droplet spectra evolution by condensation/evaporation and by coalescence/breakup processes. The number density function n_k(x,t) in each separate droplet packet between droplet mass grid points (x_k, x_{k+1}) is represented by an expansion in orthogonal polynomials with a given weighting function. In this way droplet number concentrations, liquid water contents and other moments in each droplet packet are conserved and the problem of solving the kinetic equations is replaced by one of solving a set of coupled differential equations for the number density function moments. The method is tested against analytic solutions of the corresponding kinetic equations. Numerical results are obtained for different coalescence/breakup and condensation/evaporation kernels and for different initial droplet spectra. Also droplet mass grid intervals, weighting functions, and time steps are varied.

  2. Solar Radiation and the UV Index: An Application of Numerical Integration, Trigonometric Functions, Online Education and the Modelling Process

    ERIC Educational Resources Information Center

    Downs, Nathan; Parisi, Alfio V.; Galligan, Linda; Turner, Joanna; Amar, Abdurazaq; King, Rachel; Ultra, Filipina; Butler, Harry

    2016-01-01

    A short series of practical classroom mathematics activities employing a large and publicly accessible scientific data set is presented for use by students in years 9 and 10. The activities introduce and build understanding of integral calculus and trigonometric functions through the presentation of practical problem solving that…
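
    A classroom-style version of the computation, numerically integrating an idealized UV irradiance curve over the daylight hours with the trapezoidal rule, might look like the sketch below; the half-sine irradiance model, the peak UV index and the conversion 1 UVI = 0.025 W/m^2 of erythemal irradiance are assumptions for illustration, not the published data set.

```python
import numpy as np

def daily_erythemal_dose(uvi_max, sunrise_h, sunset_h, n=97):
    """Trapezoidal integration of an idealized UV-index curve over one day.

    The UV index is modelled as a half-sine between sunrise and sunset,
    converted to erythemal irradiance (W/m^2), and integrated to a dose in J/m^2.
    """
    t = np.linspace(sunrise_h, sunset_h, n)                       # hours
    uvi = uvi_max * np.sin(np.pi * (t - sunrise_h) / (sunset_h - sunrise_h))
    irradiance = 0.025 * uvi                                      # W/m^2
    t_sec = t * 3600.0
    return np.sum(0.5 * (irradiance[1:] + irradiance[:-1]) * np.diff(t_sec))

print(f"daily dose ~ {daily_erythemal_dose(uvi_max=11.0, sunrise_h=6.0, sunset_h=18.0):.0f} J/m^2")
```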

  3. Numerical study identifying the factors causing the significant underestimation of the specific discharge estimated using the modified integral pumping test method in a laboratory experiment.

    PubMed

    Sun, Kerang

    2015-09-01

    A three-dimensional finite element model is constructed to simulate the experimental conditions presented in a paper published in this journal [Goltz et al., 2009. Validation of two innovative methods to measure contaminant mass flux in groundwater. Journal of Contaminant Hydrology 106 (2009) 51-61] where the modified integral pumping test (MIPT) method was found to significantly underestimate the specific discharge in an artificial aquifer. The numerical model closely replicates the experimental configuration with explicit representation of the pumping well column and skin, allowing for the model to simulate the wellbore flow in the pumping well as an integral part of the porous media flow in the aquifer using the equivalent hydraulic conductivity approach. The equivalent hydraulic conductivity is used to account for head losses due to friction within the wellbore of the pumping well. Applying the MIPT method on the model simulated piezometric heads resulted in a specific discharge that underestimates the true specific discharge in the experimental aquifer by 18.8%, compared with the 57% underestimation of mass flux by the experiment reported by Goltz et al. (2009). Alternative simulation shows that the numerical model is capable of approximately replicating the experiment results when the equivalent hydraulic conductivity is reduced by an order of magnitude, suggesting that the accuracy of the MIPT estimation could be improved by expanding the physical meaning of the equivalent hydraulic conductivity to account for other factors such as orifice losses in addition to frictional losses within the wellbore. Numerical experiments also show that when applying the MIPT method to estimate hydraulic parameters, use of depth-integrated piezometric head instead of the head near the pump intake can reduce the estimation error resulting from well losses, but not the error associated with the well not being fully screened.

  4. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  5. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants, which preserve monotonicity as well as uniform third and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
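
    In practice this class of monotone interpolant is available off the shelf; the comparison below uses SciPy's PCHIP (a monotone piecewise cubic) against an ordinary cubic spline on step-like data where the unconstrained spline overshoots. The data values are arbitrary and the example is not the specific algorithm of the report above.

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Monotone data with a sharp step; an unconstrained cubic spline tends to overshoot here.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.0, 0.1, 5.0, 5.1, 5.1])

xx = np.linspace(0.0, 5.0, 501)
spline = CubicSpline(x, y)(xx)
pchip = PchipInterpolator(x, y)(xx)

print("cubic spline range:", spline.min(), spline.max())   # typically overshoots outside [0, 5.1]
print("PCHIP range:       ", pchip.min(), pchip.max())     # stays within the data range (monotone)
```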

  6. Numerical computation of spherical harmonics of arbitrary degree and order by extending exponent of floating point numbers: III integral

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2014-02-01

    The integrals of the fully normalized associated Legendre function (fnALF) of extremely high degree and order, such as 2^23 = 8 388 608, can be obtained without underflow problems if the point values of fnALF are properly given by using an exponent extension of the floating point numbers (Fukushima, T., 2012a. J. Geod., 86, 271-285; Fukushima, T., 2012c. J. Geod., 86, 1019-1028). A dynamic termination of the exponent extension during the fixed-order, increasing-degree recursions significantly reduces the increase in CPU time caused by the exponent extension. Also, the sectorial integrals are found to be correctly obtained by the forward recursion alone, even in cases where the backward recursion has been claimed to be necessary (Paul, M.K., 1978, Bull. Geod., 52, 177-190; Gerstl, M., 1980, Manuscr. Geod., 5, 181-199).

  7. Integration of bed characteristics, geochemical tracers, current measurements, and numerical modeling for assessing the provenance of beach sand in the San Francisco Bay Coastal System

    USGS Publications Warehouse

    Barnard, Patrick L.; Foxgrover, Amy; Elias, Edwin P.L.; Erikson, Li H.; Hein, James; McGann, Mary; Mizell, Kira; Rosenbauer, Robert J.; Swarzenski, Peter W.; Takesue, Renee K.; Wong, Florence L.; Woodrow, Don

    2013-01-01

    Over 150 million m³ of sand-sized sediment has disappeared from the central region of the San Francisco Bay Coastal System during the last half century. This enormous loss may reflect numerous anthropogenic influences, such as watershed damming, bay-fill development, aggregate mining, and dredging. The reduction in Bay sediment also appears to be linked to a reduction in sediment supply and recent widespread erosion of adjacent beaches, wetlands, and submarine environments. A unique, multi-faceted provenance study was performed to definitively establish the primary sources, sinks, and transport pathways of beach-sized sand in the region, thereby identifying the activities and processes that directly limit supply to the outer coast. This integrative program is based on comprehensive surficial sediment sampling of the San Francisco Bay Coastal System, including the seabed, Bay floor, area beaches, adjacent rock units, and major drainages. Analyses of sample morphometrics and biological composition (e.g., Foraminifera) were then integrated with a suite of tracers including ⁸⁷Sr/⁸⁶Sr and ¹⁴³Nd/¹⁴⁴Nd isotopes, rare earth elements, semi-quantitative X-ray diffraction mineralogy, and heavy minerals, and with process-based numerical modeling, in situ current measurements, and bedform asymmetry to robustly determine the provenance of beach-sized sand in the region.

  8. Performance of heterogeneous computing with graphics processing unit and many integrated core for hartree potential calculations on a numerical grid.

    PubMed

    Choi, Sunghwan; Kwon, Oh-Kyoung; Kim, Jaewook; Kim, Woo Youn

    2016-09-15

    We investigated the performance of heterogeneous computing with graphics processing units (GPUs) and many integrated core (MIC) alongside 20 CPU cores (20×CPU). As a practical example toward large-scale electronic structure calculations using grid-based methods, we evaluated the Hartree potentials of silver nanoparticles of various sizes (3.1, 3.7, 4.9, 6.1, and 6.9 nm) via a direct integral method supported by the sinc basis set. The so-called work-stealing scheduler was used for efficient heterogeneous computing via the balanced dynamic distribution of workloads between all processors on a given architecture, without any prior information on their individual performances. 20×CPU + 1GPU was up to ∼1.5 and ∼3.1 times faster than 1GPU and 20×CPU, respectively. 20×CPU + 2GPU was ∼4.3 times faster than 20×CPU. The performance enhancement by CPU + MIC was considerably lower than expected because of the large initialization overhead of MIC, although its theoretical performance is similar to that of CPU + GPU.

  9. Towards an integrated numerical simulator for crack-seal vein microstructure: Coupling phase-field with the Discrete Element Method

    NASA Astrophysics Data System (ADS)

    Virgo, Simon; Ankit, Kumar; Nestler, Britta; Urai, Janos L.

    2016-04-01

    Crack-seal veins form in a complex interplay of coupled thermal, hydraulic, mechanical and chemical processes. Their formation and cyclic growth involves brittle fracturing and dilatancy, phases of increased fluid flow, and the growth of crystals that fill the voids and reestablish the mechanical strength. Existing numerical models of vein formation focus on selected aspects of the coupled process. Until today, no model exists that is able to use a realistic representation of the fracturing AND sealing processes simultaneously. To address this challenge, we propose the bidirectional coupling of two numerical methods that have proven themselves very powerful for modelling the fundamental processes acting in crack-seal systems: phase-field and the Discrete Element Method (DEM). The phase-field method was recently successfully extended to model the precipitation of quartz crystals from an aqueous solution and applied to model the sealing of a vein over multiple opening events (Ankit et al., 2013; Ankit et al., 2015a; Ankit et al., 2015b). The advantage over former, purely kinematic approaches is that in phase-field, the crystal growth is modeled based on thermodynamic and kinetic principles. Different driving forces for microstructure evolution, such as chemical bulk free energy, interfacial energy, elastic strain energy, and different transport processes, such as mass diffusion and advection, can be coupled and the effect on the evolution process can be studied in 3D. The Discrete Element Method has already been used in several studies to model the fracturing of rocks and the incremental growth of veins by repeated fracturing (Virgo et al., 2013; Virgo et al., 2014). Materials in DEM are represented by volumes of packed spherical particles, and the response of the material to stress is modeled by interaction of the particles with their nearest neighbours. For rocks, in 3D, the method provides a realistic brittle failure behaviour. Exchange routines are being developed that

  10. Predictive Modeling of Chemical Hazard by Integrating Numerical Descriptors of Chemical Structures and Short-term Toxicity Assay Data

    PubMed Central

    Rusyn, Ivan; Sedykh, Alexander; Guyton, Kathryn Z.; Tropsha, Alexander

    2012-01-01

    Quantitative structure-activity relationship (QSAR) models are widely used for in silico prediction of in vivo toxicity of drug candidates or environmental chemicals, adding value to candidate selection in drug development or in a search for less hazardous and more sustainable alternatives for chemicals in commerce. The development of traditional QSAR models is enabled by numerical descriptors representing the inherent chemical properties that can be easily defined for any number of molecules; however, traditional QSAR models often have limited predictive power due to the lack of data and complexity of in vivo endpoints. Although it has been indeed difficult to obtain experimentally derived toxicity data on a large number of chemicals in the past, the results of quantitative in vitro screening of thousands of environmental chemicals in hundreds of experimental systems are now available and continue to accumulate. In addition, publicly accessible toxicogenomics data collected on hundreds of chemicals provide another dimension of molecular information that is potentially useful for predictive toxicity modeling. These new characteristics of molecular bioactivity arising from short-term biological assays, i.e., in vitro screening and/or in vivo toxicogenomics data can now be exploited in combination with chemical structural information to generate hybrid QSAR–like quantitative models to predict human toxicity and carcinogenicity. Using several case studies, we illustrate the benefits of a hybrid modeling approach, namely improvements in the accuracy of models, enhanced interpretation of the most predictive features, and expanded applicability domain for wider chemical space coverage. PMID:22387746

  11. Numerical assessment of accurate measurements of laminar flame speed

    NASA Astrophysics Data System (ADS)

    Goulier, Joules; Bizon, Katarzyna; Chaumeix, Nabiha; Meynet, Nicolas; Continillo, Gaetano

    2016-12-01

    In combustion, the laminar flame speed constitutes an important parameter that reflects the chemistry of oxidation for a given fuel, along with its transport and thermal properties. Laminar flame speeds are used (i) in turbulence models used in CFD codes, and (ii) to validate detailed or reduced mechanisms, often derived from studies in ideal reactors under diluted conditions, as in jet-stirred reactors and shock tubes. End-users of such mechanisms need an assessment of their capability to predict the correct heat release of combustion under realistic conditions. In this view, the laminar flame speed constitutes a very convenient parameter, and it is therefore very important to have a good knowledge of the experimental errors involved in its determination. Stationary configurations (Bunsen burners, counter-flow flames, heat flux burners) or moving flames (tubes, spherical vessels, soap bubbles) can be used. The spherically expanding flame configuration has recently become popular, since it can be used at high pressures and temperatures. With this method, the flame speed is not measured directly, but derived from the recording of the flame radius. The method used to process the radius history has an impact on the estimated flame speed. The aim of this work is to propose a way to derive the laminar flame speed from experimental recordings of expanding flames, and to assess the error magnitude.
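
    A hedged sketch of the classical linear-extrapolation processing of an expanding-flame radius history is given below; the radius record and the density ratio are synthetic assumptions, and the paper itself is concerned with more careful processing and error assessment than this simple fit.

```python
# Sketch of the classical linear-extrapolation processing of a spherically
# expanding flame: differentiate the radius history, regress the stretched
# speed against stretch, and extrapolate to zero stretch. Radius data and the
# density ratio are synthetic/assumed, not measurements from the paper.
import numpy as np

t = np.linspace(0.005, 0.025, 50)        # s (assumed recording window)
R = 0.002 + 1.5 * t + 8.0 * t**2         # m, synthetic radius history

Sb = np.gradient(R, t)                   # stretched flame speed dR/dt
K = 2.0 * Sb / R                         # stretch rate (1/s)

# Linear model Sb = Sb0 - Lb * K, fitted by least squares
A = np.vstack([np.ones_like(K), -K]).T
(Sb0, Lb), *_ = np.linalg.lstsq(A, Sb, rcond=None)

density_ratio = 0.14                     # rho_burned / rho_unburned (assumed)
Su0 = Sb0 * density_ratio                # unstretched laminar flame speed
print(f"Sb0 = {Sb0:.3f} m/s, Lb = {Lb*1e3:.2f} mm, Su0 = {Su0:.3f} m/s")
```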

  12. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  13. Accurate detection of coronary artery disease by integrated analysis of the ST-segment depression/heart rate patterns during the exercise and recovery phases of the exercise electrocardiography test.

    PubMed

    Lehtinen, R; Sievänen, H; Viik, J; Turjanmaa, V; Niemelä, K; Malmivuo, J

    1996-11-01

    In this comparative cross-sectional study, we evaluated whether a novel computerized diagnostic variable based on ST-segment depression/heart rate (ST/HR) analysis during both the exercise and postexercise recovery phases of the exercise electrocardiography (ECG) test can detect coronary artery disease more accurately than methods using either the exercise or the recovery phase alone. The study population comprised 347 clinical patients referred for a routine bicycle exercise ECG test at Tampere University Hospital, Finland. Of these, 127 had angiographically proven coronary artery disease, whereas 13 had no coronary artery disease according to angiography, 18 had no perfusion defect according to technetium-99m sestamibi single-photon emission computed tomography, and 189 were clinically normal with respect to cardiac diseases. For each patient, the maximum values of the ST/HR hysteresis, ST/HR index, end-exercise ST depression, and recovery ST depression were determined from the Mason-Likar modification of the standard 12-lead exercise electrocardiogram (aVL, aVR, and V1 excluded). The diagnostic performance of these continuous diagnostic variables was compared by means of receiver-operating characteristic analysis. The area under the receiver-operating characteristic curve of the ST/HR hysteresis was 89%, which was significantly larger than that of the end-exercise ST depression (76%, p ≤ 0.0001), recovery ST depression (84%, p = 0.0063), or ST/HR index (83%, p = 0.0023), indicating superior diagnostic performance of the ST/HR hysteresis independent of the partition value selection. In conclusion, computerized analysis of the HR-adjusted ST depression pattern during the exercise phase, integrated with the HR-adjusted ST depression pattern during the recovery phase after exercise, can significantly improve the diagnostic performance and clinical utility of the exercise ECG test for the detection of coronary artery disease.

  14. Numerical calculations of two dimensional, unsteady transonic flows with circulation

    NASA Technical Reports Server (NTRS)

    Beam, R. M.; Warming, R. F.

    1974-01-01

    The feasibility of obtaining two-dimensional, unsteady transonic aerodynamic data by numerically integrating the Euler equations is investigated. An explicit, third-order-accurate, noncentered, finite-difference scheme is used to compute unsteady flows about airfoils. Solutions for lifting and nonlifting airfoils are presented and compared with subsonic linear theory. The applicability and efficiency of the numerical indicial function method are outlined. Numerically computed subsonic and transonic oscillatory aerodynamic coefficients are presented and compared with those obtained from subsonic linear theory and transonic wind-tunnel data.

  15. Use of integrated analogue and numerical modelling to predict tridimensional fracture intensity in fault-related-folds.

    NASA Astrophysics Data System (ADS)

    Pizzati, Mattia; Cavozzi, Cristian; Magistroni, Corrado; Storti, Fabrizio

    2016-04-01

    Predicting fracture density patterns with low uncertainty is a fundamental issue for constraining fluid flow pathways in thrust-related anticlines in the frontal parts of thrust-and-fold belts and accretionary prisms, which can also provide plays for hydrocarbon exploration and development. Among the drivers that concur to determine the distribution of fractures in fold-and-thrust belts, the complex kinematic pathways of folded structures play a key role. In areas with scarce and unreliable subsurface information, analogue modelling can provide effective support for developing and validating reliable hypotheses on structural architectures and their evolution. In this contribution, we propose a working method that combines analogue and numerical modelling. We deformed a sand-silicone multilayer to produce a non-cylindrical thrust-related anticline at the wedge toe, which was our test geological structure at the reservoir scale. We cut 60 serial cross-sections through the central part of the deformed model to analyse fault and fold geometry using dedicated software (3D Move). The cross-sections were also used to reconstruct the 3D geometry of the reference surfaces that compose the mechanical stratigraphy, using the software GoCad. From the 3D model of the experimental anticline, 3D Move was used to calculate the cumulative stress and strain undergone by the deformed reference layers at the end of the deformation and also in incremental steps of fold growth. Based on these model outputs, it was also possible to predict the orientation of three main fracture sets (joints and conjugate shear fractures) and their occurrence and density on the model surfaces. The next step was the upscaling of the fracture network to the entire digital model volume, to create DFNs (discrete fracture networks).

  16. Graphical arterial blood gas visualization tool supports rapid and accurate data interpretation.

    PubMed

    Doig, Alexa K; Albert, Robert W; Syroid, Noah D; Moon, Shaun; Agutter, Jim A

    2011-04-01

    A visualization tool that integrates numeric information from an arterial blood gas report with novel graphics was designed for the purpose of promoting rapid and accurate interpretation of acid-base data. A study compared data interpretation performance when arterial blood gas results were presented in a traditional numerical list versus the graphical visualization tool. Critical-care nurses (n = 15) and nursing students (n = 15) were significantly more accurate identifying acid-base states and assessing trends in acid-base data when using the graphical visualization tool. Critical-care nurses and nursing students using traditional numerical data had an average accuracy of 69% and 74%, respectively. Using the visualization tool, average accuracy improved to 83% for critical-care nurses and 93% for nursing students. Analysis of response times demonstrated that the visualization tool might help nurses overcome the "speed/accuracy trade-off" during high-stress situations when rapid decisions must be rendered. Perceived mental workload was significantly reduced for nursing students when they used the graphical visualization tool. In this study, the effects of implementing the graphical visualization were greater for nursing students than for critical-care nurses, which may indicate that the experienced nurses needed more training and use of the new technology prior to testing to show similar gains. Results of the objective and subjective evaluations support the integration of this graphical visualization tool into clinical environments that require accurate and timely interpretation of arterial blood gas data.

  17. Calculating Exchange Times in a Scottish Fjord Using a Two-dimensional, Laterally-integrated Numerical Model

    NASA Astrophysics Data System (ADS)

    Gillibrand, P. A.

    2001-10-01

    In order to assess the potential impact of pollutants, particularly soluble wastes discharged by the mariculture industry, on the fjordic sea loch environment in Scotland, simple management models have been developed which estimate steady-state concentrations based on the quantities of effluent released and the residence time of such material within a loch. These models make various simplifications about the hydrodynamic characteristics of Scottish sea lochs, the most important of which is the concept of an exchange time, which parametrizes the rate at which pollutants are removed from the system. Exchange times for individual lochs are calculated using the tidal prism method, which has some well-known shortcomings. In this paper, a two-dimensional laterally-integrated circulation model is used to investigate the exchange characteristics of Loch Fyne and its sub-basins. By simulating the transport of a passive, conservative tracer, the turnover times for the loch, two sub-basins and various depth layers are calculated. By varying the starting time of the tracer simulations, the variability in the exchange times is examined. The results from the circulation model are compared with the estimates given by the tidal prism method. The results show that the tidal prism method consistently underestimates the exchange times, although the predicted times tend to lie within the range of the simulated times. Incorporating a simple return-flow factor into the tidal prism estimate leads to significant improvements in the comparison.
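
    The tidal prism estimate discussed above, including the simple return-flow correction, amounts to a one-line formula; the sketch below spells it out with assumed (not Loch Fyne) volumes.

```python
# Minimal sketch of the tidal prism flushing-time estimate and the effect of a
# simple return-flow factor. Volumes and tidal prism below are illustrative
# assumptions, not values from the paper.

def tidal_prism_exchange_time(volume_m3, prism_m3, tidal_period_h=12.42, return_factor=0.0):
    """Exchange (flushing) time in days; a return factor b in [0, 1) means only
    (1 - b) of the tidal prism is 'new' water on each tide."""
    effective_prism = (1.0 - return_factor) * prism_m3
    return volume_m3 * tidal_period_h / effective_prism / 24.0

V, P = 5.0e9, 2.0e8                                          # basin volume and prism (assumed)
print(tidal_prism_exchange_time(V, P))                       # classical estimate
print(tidal_prism_exchange_time(V, P, return_factor=0.5))    # with 50 % return flow
```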

  18. Numerical solution of a diffusion problem by exponentially fitted finite difference methods.

    PubMed

    D'Ambrosio, Raffaele; Paternoster, Beatrice

    2014-01-01

    This paper is focused on the accurate and efficient solution of partial differential equations modelling a diffusion problem by means of exponentially fitted finite difference numerical methods. After constructing and analysing special-purpose finite differences for the approximation of second-order partial derivatives, we employed them in the numerical solution of a diffusion equation with mixed boundary conditions. Numerical experiments reveal that a special-purpose integration, both in space and in time, is more accurate and efficient than that obtained by employing a general-purpose solver.
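
    The basic exponential-fitting idea can be illustrated with a three-point second-derivative stencil whose weight is rescaled so that it becomes exact for exponentials; this is only the underlying principle, not the special-purpose formulas constructed in the paper.

```python
# Hedged illustration of exponential fitting: the classical central-difference
# weight 1/h^2 is replaced so that the stencil is exact for f(x) = exp(+/- lam*x).
import numpy as np

def fitted_second_derivative(f, x, h, lam):
    """Three-point second derivative, exact for f(x) = exp(+/- lam*x)."""
    w = lam**2 / (2.0 * (np.cosh(lam * h) - 1.0))   # tends to 1/h**2 as lam*h -> 0
    return w * (f(x - h) - 2.0 * f(x) + f(x + h))

lam, h, x0 = 5.0, 0.1, 0.3
f = lambda x: np.exp(lam * x)
print(fitted_second_derivative(f, x0, h, lam), lam**2 * f(x0))   # matches exactly
print((f(x0 - h) - 2 * f(x0) + f(x0 + h)) / h**2)                # classical stencil, less accurate
```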

  19. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE ...

    EPA Pesticide Factsheets

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33% to 63%, with a mean of about 50%. Treatment of two of the soils with P significantly reduced the bioavailability of Pb. The bioaccessibility of the Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Ad

  20. Important Nearby Galaxies without Accurate Distances

    NASA Astrophysics Data System (ADS)

    McQuinn, Kristen

    2014-10-01

    The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis on which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous distance estimates resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.

  1. Integration

    ERIC Educational Resources Information Center

    Kalyn, Brenda

    2006-01-01

    Integrated learning is an exciting adventure for both teachers and students. It is not uncommon to observe the integration of academic subjects such as math, science, and language arts. However, educators need to recognize that movement experiences in physical education also can be linked to academic curricula and, may even lead the…

  2. Integrated Water Flow Model (IWFM), A Tool For Numerically Simulating Linked Groundwater, Surface Water And Land-Surface Hydrologic Processes

    NASA Astrophysics Data System (ADS)

    Dogrul, E. C.; Brush, C. F.; Kadir, T. N.

    2006-12-01

    The Integrated Water Flow Model (IWFM) is a comprehensive input-driven application for simulating groundwater flow, surface water flow and land-surface hydrologic processes, and interactions between these processes, developed by the California Department of Water Resources (DWR). IWFM couples a 3-D finite element groundwater flow process and 1-D land surface, lake, stream flow and vertical unsaturated-zone flow processes which are solved simultaneously at each time step. The groundwater flow system is simulated as a multilayer aquifer system with a mixture of confined and unconfined aquifers separated by semiconfining layers. The groundwater flow process can simulate changing aquifer conditions (confined to unconfined and vice versa), subsidence, tile drains, injection wells and pumping wells. The land surface process calculates elemental water budgets for agricultural, urban, riparian and native vegetation classes. Crop water demands are dynamically calculated using distributed soil properties, land use and crop data, and precipitation and evapotranspiration rates. The crop mix can also be automatically modified as a function of pumping lift using logit functions. Surface water diversions and groundwater pumping can each be specified, or can be automatically adjusted at run time to balance water supply with water demand. The land-surface process also routes runoff to streams and deep percolation to the unsaturated zone. Surface water networks are specified as a series of stream nodes (coincident with groundwater nodes) with specified bed elevation, conductance and stage-flow relationships. Stream nodes are linked to form stream reaches. Stream inflows at the model boundary, surface water diversion locations, and one or more surface water deliveries per location are specified. IWFM routes stream flows through the network, calculating groundwater-surface water interactions, accumulating inflows from runoff, and allocating available stream flows to meet specified or

  3. The Cenozoic fold-and-thrust belt of Eastern Sardinia: Evidences from the integration of field data with numerically balanced geological cross section

    NASA Astrophysics Data System (ADS)

    Arragoni, S.; Maggi, M.; Cianfarra, P.; Salvini, F.

    2016-06-01

    Newly collected structural data in Eastern Sardinia (Italy) integrated with numerical techniques led to the reconstruction of a 2-D admissible and balanced model revealing the presence of a widespread Cenozoic fold-and-thrust belt. The model was achieved with the FORC software, obtaining a 3-D (2-D + time) numerical reconstruction of the continuous evolution of the structure through time. The Mesozoic carbonate units of Eastern Sardinia and their basement present a fold-and-thrust tectonic setting, with a westward direction of tectonic transport (referred to the present-day coordinates). The tectonic style of the upper levels is thin skinned, with flat sectors prevailing over ramps and younger-on-older thrusts. Three regional tectonic units are present, bounded by two regional thrusts. Strike-slip faults overprint the fold-and-thrust belt and developed during the Sardinia-Corsica Block rotation along the strike of the preexisting fault ramps, not affecting the numerical section balancing. This fold-and-thrust belt represents the southward prosecution of the Alpine Corsica collisional chain and the missing link between the Alpine Chain and the Calabria-Peloritani Block. Relative ages relate its evolution to the meso-Alpine event (Eocene-Oligocene times), prior to the opening of the Tyrrhenian Sea (Tortonian). Results fill a gap of information about the geodynamic evolution of the European margin in Central Mediterranean, between Corsica and the Calabria-Peloritani Block, and imply the presence of remnants of this double-verging belt, missing in the Southern Tyrrhenian basin, within the Southern Apennine chain. The used methodology proved effective for constraining balanced cross sections also for areas lacking exposures of the large-scale structures, as the case of Eastern Sardinia.

  4. Assessment of vulnerability in karst aquifers using a quantitative integrated numerical model: catchment characterization and high resolution monitoring - Application to semi-arid regions- Lebanon.

    NASA Astrophysics Data System (ADS)

    Doummar, Joanna; Aoun, Michel; Andari, Fouad

    2016-04-01

    Karst aquifers are highly heterogeneous and characterized by a duality of recharge (concentrated and fast versus diffuse and slow) and a duality of flow, which directly influences groundwater flow and spring responses. Given this heterogeneity in flow and infiltration, karst aquifers do not always obey standard hydraulic laws, and the assessment of their vulnerability therefore proves challenging. Studies have shown that the vulnerability of aquifers is highly governed by recharge to groundwater. On the other hand, specific parameters appear to play a major role in the spatial and temporal distribution of infiltration on a karst system, thus greatly influencing the discharge rates observed at a karst spring and, consequently, the vulnerability of the spring. This heterogeneity can only be depicted using an integrated numerical model to quantify recharge spatially and to assess the spatial and temporal vulnerability of a catchment to contamination. In the framework of a three-year PEER NSF/USAID-funded project, the vulnerability of a karst catchment in Lebanon is assessed quantitatively using a numerical approach. The aim of the project is also to refine actual evapotranspiration rates and the spatial recharge distribution in a semi-arid environment. For this purpose, a monitoring network has been installed since July 2014 on two pilot karst catchments (drained by Qachqouch Spring and Assal Spring) to collect high-resolution data to be used in an integrated catchment numerical model built with MIKE SHE (DHI), including climate, unsaturated-zone, and saturated-zone components. Catchment characterization essential for the model included geological mapping and surveys of karst features (e.g., dolines), as they contribute to fast flow. Tracer experiments were performed under different flow conditions (snowmelt and low flow) to delineate the catchment area and reveal groundwater velocities and the response to snowmelt events. An assessment of spring response after precipitation events allowed the estimation of the

  5. Numerical solutions of nonlinear wave equations

    SciTech Connect

    Kouri, D.J.; Zhang, D.S.; Wei, G.W.; Konshak, T.; Hoffman, D.K.

    1999-01-01

    Accurate, stable numerical solutions of the (nonlinear) sine-Gordon equation are obtained with particular consideration of initial conditions that are exponentially close to the phase space homoclinic manifolds. Earlier local, grid-based numerical studies have encountered difficulties, including numerically induced chaos for such initial conditions. The present results are obtained using the recently reported distributed approximating functional method for calculating spatial derivatives to high accuracy and a simple, explicit method for the time evolution. The numerical solutions are chaos-free for the same conditions employed in previous work that encountered chaos. Moreover, stable results that are free of homoclinic-orbit crossing are obtained even when initial conditions are within 10^-7 of the phase space separatrix value π. It also is found that the present approach yields extremely accurate solutions for the Korteweg-de Vries and nonlinear Schrödinger equations. Our results support Ablowitz and co-workers' conjecture that ensuring high accuracy of spatial derivatives is more important than the use of symplectic time integration schemes for solving solitary wave equations.

  6. Numerical modeling of turbulent supersonic reacting coaxial jets

    NASA Technical Reports Server (NTRS)

    Eklund, Dean R.; Hassan, H. A.; Drummond, J. Philip

    1989-01-01

    The paper considers the mixing and subsequent combustion within turbulent reacting shear layers. A computer program was developed to solve the axisymmetric Reynolds averaged Navier-Stokes equations. The numerical method integrates the Reynolds averaged Navier-Stokes equations using a finite volume approach while advancing the solution forward in time using a Runge-Kutta scheme. Three separate flowfields are investigated and it is found that no single turbulence model considered could accurately predict the degree of mixing for all three cases.

  7. Observation and Modeling of Heat, Water and Carbon Dioxide Fluxes upon The Paddy Field for Development of The Numerical Integrate Agro-Ecosystem Simulator

    NASA Astrophysics Data System (ADS)

    Kim, W.; Komori, D.; Yokozawa, M.; Kanae, S.; Oki, T.

    2006-12-01

    The spatiotemporal fluctuation of water resources under land use and climate changes has a detrimental influence on the potential area and the feasible period for paddy cultivation. To understand this syndrome, an agroecohydrological model that considers both the characteristics of the cultivation method and the changes in land use in the surrounding area and upstream is indispensable. Consequently, the development of the Numerical Integrate Agro-Ecosystem Simulator (NIAES), which is based on a land surface model (LSM) and a real-time monitoring and simulation system (RTMASS), has been explored. As a first step, several candidate LSMs were tested, validated, and modified for the NIAES model with micrometeorological flux tower data (June 2005 to May 2006) at Sukhothai in Thailand. The simulated trends of heat, water, and carbon dioxide (CO2) fluxes are valid during the cultivation period of the paddy (LAI > 2), but disagree with measurements during the watering and early growth periods (LAI <= 2). The disagreements suggest that paddy water should be considered as an important parameter and that the soil water conductivity should also be treated as a temporal variable in the LSMs to explain the land surface processes of heat, water, and CO2. During the non-cultivation period, the clear estimation of parameters for the substitute grass type and the precise simulation of dewfall at dawn were key points for understanding the characteristics of those fluxes in the Indochina peninsula.

  8. A new numerical approach to solve Thomas-Fermi model of an atom using bio-inspired heuristics integrated with sequential quadratic programming.

    PubMed

    Raja, Muhammad Asif Zahoor; Zameer, Aneela; Khan, Aziz Ullah; Wazwaz, Abdul Majid

    2016-01-01

    In this study, a novel bio-inspired computing approach is developed to analyze the dynamics of the nonlinear singular Thomas-Fermi equation (TFE), arising in potential and charge density models of an atom, by exploiting the strength of a finite difference scheme (FDS) for discretization and optimization through genetic algorithms (GAs) hybridized with sequential quadratic programming (SQP). The FDS procedures are used to transform the TFE differential equations into a system of nonlinear equations. A fitness function is constructed based on the residual error of the constituent equations in the mean square sense and is formulated as a minimization problem. Optimization of the parameters of the system is carried out with GAs, used as a tool for viable global search, integrated with the SQP algorithm for rapid refinement of the results. The design scheme is applied to solve the TFE for five different scenarios by taking various step sizes and different input intervals. Comparison of the proposed results with state-of-the-art numerical and analytical solutions reveals the worth of our scheme in terms of accuracy and convergence. The reliability and effectiveness of the proposed scheme are validated by consistently obtaining optimal values of statistical performance indices calculated for a sufficiently large number of independent runs.

  9. Kinetics of batch anaerobic co-digestion of poultry litter and wheat straw including a novel strategy of estimation of endogenous decay and yield coefficients using numerical integration.

    PubMed

    Shen, Jiacheng; Zhu, Jun

    2016-10-01

    The kinetics of anaerobic co-digestion of poultry litter and wheat straw has not been widely reported in the literature. Although endogenous decay and yield coefficients are two basic parameters for the design of anaerobic digesters, they are currently estimated only from continuous experiments. In this study, numerical integration was employed to develop a novel strategy to estimate endogenous decay and yield coefficients using initial and final liquid data combined with methane volumes produced over time in batch experiments. To verify this method, the kinetics of batch anaerobic co-digestion of poultry litter and wheat straw at different TS and VS levels was investigated, with the corresponding endogenous decay and (non-observed) yield coefficients in the exponential periods determined to be between 0.74 × 10^-3 and 6.1 × 10^-3 d^-1, and between 0.0259 and 0.108 g VSS (g VS)^-1, respectively. A general Gompertz model developed earlier for bio-products could be used to simulate the methane volume profile in the co-digestion. The same model parameters obtained from the methane model, combined with the corresponding yield coefficients, could also be used to describe the VSS generation and VS destruction.
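
    A hedged sketch of fitting the modified Gompertz model commonly used for cumulative methane curves is shown below; the data points are synthetic placeholders and the parameter names follow the usual convention (P, Rm, lag) rather than the paper's notation.

```python
# Sketch of fitting the (modified) Gompertz model often used for cumulative
# methane production, M(t) = P * exp(-exp(Rm*e/P * (lam - t) + 1)).
# Data points are synthetic placeholders, not measurements from the study.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, P, Rm, lam):
    return P * np.exp(-np.exp(Rm * np.e / P * (lam - t) + 1.0))

t_days = np.array([0, 2, 4, 6, 8, 10, 15, 20, 30, 40], dtype=float)
methane = np.array([0, 5, 18, 45, 80, 115, 175, 205, 228, 235], dtype=float)  # mL (synthetic)

popt, _ = curve_fit(gompertz, t_days, methane, p0=[240.0, 15.0, 3.0])
P, Rm, lam = popt
print(f"P = {P:.1f} mL, Rm = {Rm:.1f} mL/d, lag = {lam:.1f} d")
```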

  10. Self-Adaptive Filon's Integration Method and Its Application to Computing Synthetic Seismograms

    NASA Astrophysics Data System (ADS)

    Zhang, Hai-Ming; Chen, Xiao-Fei

    2001-03-01

    Based on the principle of the self-adaptive Simpson integration method, and by incorporating the "fifth-order" Filon's integration algorithm [Bull. Seism. Soc. Am. 73 (1983) 913], we have proposed a simple and efficient numerical integration method, i.e., the self-adaptive Filon's integration method (SAFIM), for computing synthetic seismograms at large epicentral distances. With numerical examples, we have demonstrated that the SAFIM is not only accurate but also very efficient. This new integration method is expected to be very useful in seismology, as well as in computing similar oscillatory integrals in other branches of physics.
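
    The self-adaptive principle referred to above is recursive interval bisection with a local error estimate; the sketch below shows it for plain Simpson quadrature, without the oscillatory Filon weights that SAFIM adds.

```python
# Minimal sketch of the self-adaptive (recursive) Simpson principle on which
# SAFIM builds; the Filon treatment of oscillatory integrands is not reproduced.
import math

def adaptive_simpson(f, a, b, tol=1e-8):
    def simpson(fa, fm, fb, a, b):
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol):
        m, lm, rm = 0.5 * (a + b), 0.25 * (3 * a + b), 0.25 * (a + 3 * b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        if abs(left + right - whole) < 15.0 * tol:            # standard error estimate
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, m, fa, flm, fm, left, 0.5 * tol) +
                recurse(m, b, fm, frm, fb, right, 0.5 * tol))

    fa, fb, fm = f(a), f(b), f(0.5 * (a + b))
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, a, b), tol)

print(adaptive_simpson(math.sin, 0.0, math.pi))               # ~2.0
```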

  11. An accurate moving boundary formulation in cut-cell methods

    NASA Astrophysics Data System (ADS)

    Schneiders, Lennart; Hartmann, Daniel; Meinke, Matthias; Schröder, Wolfgang

    2013-02-01

    A cut-cell method for Cartesian meshes to simulate viscous compressible flows with moving boundaries is presented. We focus on eliminating unphysical oscillations occurring in Cartesian grid methods extended to moving-boundary problems. In these methods, cells either lie completely in the fluid or solid region or are intersected by the boundary. For the latter cells, the time dependent volume fraction lying in the fluid region can be so small that explicit time-integration schemes become unstable and a special treatment of these cells is necessary. When the boundary moves, a fluid cell may become a cut cell or a solid cell may become a small cell at the next time level. This causes an abrupt change in the discretization operator and a suddenly modified truncation error of the numerical scheme. This temporally discontinuous alteration is shown to act like an unphysical source term, which deteriorates the numerical solution, i.e., it generates unphysical oscillations in the hydrodynamic forces exerted on the moving boundary. We develop an accurate moving boundary formulation based on the varying discretization operators yielding a cut-cell method which avoids these discontinuities. Results for canonical two- and three-dimensional test cases evidence the accuracy and robustness of the newly developed scheme.

  12. An equivalent domain integral for analysis of two-dimensional mixed mode problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Shivakumar, K. N.

    1989-01-01

    An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies subjected to mixed mode loading is presented. The total and product integrals consist of the sum of an area or domain integral and line integrals on the crack faces. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented. The procedure that uses the symmetric and antisymmetric components of the stress and displacement fields to calculate the individual modes gave accurate values of the integrals for all the problems analyzed.

  13. Magnitude knowledge: the common core of numerical development.

    PubMed

    Siegler, Robert S

    2016-05-01

    The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: (1) representing increasingly precisely the magnitudes of non-symbolic numbers, (2) connecting small symbolic numbers to their non-symbolic referents, (3) extending understanding from smaller to larger whole numbers, and (4) accurately representing the magnitudes of rational numbers. The present review identifies substantial commonalities, as well as differences, in these four aspects of numerical development. With both whole and rational numbers, numerical magnitude knowledge is concurrently correlated with, longitudinally predictive of, and causally related to multiple aspects of mathematical understanding, including arithmetic and overall math achievement. Moreover, interventions focused on increasing numerical magnitude knowledge often generalize to other aspects of mathematics. The cognitive processes of association and analogy seem to play especially large roles in this development. Thus, acquisition of numerical magnitude knowledge can be seen as the common core of numerical development.

  14. Integration of Electric Resistivity Profile and Infiltrometer Measurements to Calibrate a Numerical Model of Vertical Flow in Fractured and Karstic Limestone.

    NASA Astrophysics Data System (ADS)

    Caputo, M. C.; de Carlo, L.; Masciopinto, C.; Nimmo, J. R.

    2007-12-01

    Karstic and fractured aquifers are among the most important drinking water resources. At the same time, they are particularly vulnerable to contamination. A detailed scientific knowledge of the behavior of these aquifers is essential for the development of sustainable groundwater management concepts. Due to their special characteristics of extreme anisotropy and heterogeneity, research aimed at a better understanding of flow, solute transport, and biological processes in these hydrogeologic systems is an important scientific challenge. This study integrates a geophysical technique with an infiltrometer test to better calibrate a mathematical model that quantifies the vertical flow in karstic and fractured limestone overlying the deep aquifer of Alta Murgia (Southern Italy). Knowledge of the rate of unsaturated zone percolation is needed to investigate the vertical migration of pollutants and the vulnerability of the aquifer. Sludge waste deposits in the study area have caused soil-subsoil contamination with toxics. The experimental test consisted of infiltrometer flow measurements, more commonly utilized for unconsolidated granular porous media, during which subsoil electric resistivity data were collected. A ring infiltrometer 2 m in diameter and 0.3 m high was sealed to the ground with gypsum. This large diameter yielded infiltration data representative of the anisotropic and heterogeneous rock, which could not be sampled adequately with a small ring. The subsurface resistivity was measured using a Wenner-Schlumberger electrode array. Vertical movement of water in a fracture plane under unsaturated conditions has been investigated by means of a numerical model. The finite difference method was used to solve the flow equations. An internal iteration method was used at every time step to evaluate the nodal value of the pressure head, in agreement with the mass-balance equation and the characteristic functional relationships of the coefficients.

  15. Investigation of Geomorphic and Seismic Effects on the 1959 Madison Canyon, Montana, Landslide Using an Integrated Field, Engineering Geomorphology Mapping, and Numerical Modelling Approach

    NASA Astrophysics Data System (ADS)

    Wolter, A.; Gischig, V.; Stead, D.; Clague, J. J.

    2016-06-01

    We present an integrated approach to investigate the seismically triggered Madison Canyon landslide (volume = 20 Mm³), which killed 26 people in Montana, USA, in 1959. We created engineering geomorphological maps and conducted field surveys, long-range terrestrial digital photogrammetry, and preliminary 2D numerical modelling with the objective of determining the conditioning factors, mechanisms, movement behaviour, and evolution of the failure. We emphasise the importance of both endogenic (i.e. seismic) and exogenic (i.e. geomorphic) processes in conditioning the slope for failure and hypothesise a sequence of events based on the morphology of the deposit and seismic modelling. A section of the slope was slowly deforming before a magnitude-7.5 earthquake with an epicentre 30 km away triggered the catastrophic failure in August 1959. The failed rock mass rapidly fragmented as it descended the slope towards Madison River. Part of the mass remained relatively intact as it moved on a layer of pulverised debris. The main slide was followed by several debris slides, slumps, and rockfalls. The slide debris was extensively modified soon after the disaster by the US Army Corps of Engineers to provide a stable outflow channel from newly formed Earthquake Lake. Our modelling and observations show that the landslide occurred as a result of long-term damage of the slope induced by fluvial undercutting, erosion, weathering, and past seismicity, and due to the short-term triggering effect of the 1959 earthquake. Static models suggest the slope was stable prior to the 1959 earthquake; failure would have required a significant reduction in material strength. Preliminary dynamic models indicate that repeated seismic loading was a critical process for catastrophic failure. Although the ridge geometry and existing tension cracks in the initiation zone amplified ground motions, the most important factors in initiating failure were pre-existing discontinuities and seismically induced

  16. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.

  17. An equivalent domain integral method in the two-dimensional analysis of mixed mode crack problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Shivakumar, K. N.

    1990-01-01

    An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies is presented. The details of the method and its implementation are presented for isoparametric elements. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented.

  18. Application of boundary integral method to elastic analysis of V-notched beams

    NASA Technical Reports Server (NTRS)

    Rzasnicki, W.; Mendelson, A.; Albers, L. U.

    1973-01-01

    A semidirect boundary integral method, using Airy's stress function and its derivatives in Green's boundary integral formula, is used to obtain an accurate numerical solution for elastic stress and strain fields in V-notched beams in pure bending. The proper choice of nodal spacing on the boundary is shown to be necessary to achieve an accurate stress field in the vicinity of the tip of the notch. Excellent agreement is obtained with the results of the collocation method of solution.

  19. Numerical estimation of densities

    NASA Astrophysics Data System (ADS)

    Ascasibar, Y.; Binney, J.

    2005-01-01

    We present a novel technique, dubbed FIESTAS, to estimate the underlying density field from a discrete set of sample points in an arbitrary multidimensional space. FIESTAS assigns a volume to each point by means of a binary tree. Density is then computed by integrating over an adaptive kernel. As a first test, we construct several Monte Carlo realizations of a Hernquist profile and recover the particle density in both real and phase space. At a given point, Poisson noise causes the unsmoothed estimates to fluctuate by a factor of ~2 regardless of the number of particles. This spread can be reduced to about 0.1 dex (~26 per cent) by our smoothing procedure. The density range over which the estimates are unbiased widens as the particle number increases. Our tests show that real-space densities obtained with an SPH kernel are significantly more biased than those yielded by FIESTAS. In phase space, about 10 times more particles are required in order to achieve a similar accuracy. As a second application we have estimated phase-space densities in a dark matter halo from a cosmological simulation. We confirm the results of Arad, Dekel & Klypin that the highest values of f are all associated with substructure rather than the main halo, and that the volume function v(f) ~ f^-2.5 over about four orders of magnitude in f. We show that a modified version of the toy model proposed by Arad et al. explains this result and suggests that the departures of v(f) from power-law form are not mere numerical artefacts. We conclude that our algorithm accurately measures the phase-space density up to the limit where discreteness effects render the simulation itself unreliable. Computationally, FIESTAS is orders of magnitude faster than the method based on Delaunay tessellation that Arad et al. employed, making it practicable to recover smoothed density estimates for sets of 10^9 points in six dimensions.
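
    For illustration, a simple k-nearest-neighbour density estimate in the spirit of assigning each sample point a local volume is sketched below; it is not the FIESTAS algorithm (binary-tree volumes plus adaptive-kernel smoothing) and uses a synthetic Gaussian sample.

```python
# Hedged sketch: k-nearest-neighbour density estimate as a stand-in for the
# general point-wise volume idea; not the FIESTAS binary-tree/adaptive-kernel scheme.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.standard_normal((10000, 3))        # synthetic 3-D sample

k = 32
tree = cKDTree(points)
dist, _ = tree.query(points, k=k + 1)           # k+1: the first neighbour is the point itself
r_k = dist[:, -1]                               # distance to the k-th neighbour
volume = 4.0 / 3.0 * np.pi * r_k**3             # local volume around each point
density = k / (len(points) * volume)            # normalised density estimate
print(density[:5])
```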

  20. Benchmark values for molecular two-electron integrals arising from the Dirac equation.

    PubMed

    Bağcı, A; Hoggan, P E

    2015-02-01

    The two-center two-electron Coulomb and hybrid integrals arising in relativistic and nonrelativistic ab initio calculations on molecules are evaluated. Compact, arbitrarily accurate expressions are obtained. They are expressed through molecular auxiliary functions and evaluated with the numerical Global-adaptive method for arbitrary values of parameters in the noninteger Slater-type orbitals. Highly accurate benchmark values are presented for these integrals. The convergence properties of new molecular auxiliary functions are investigated. The comparison for two-center two-electron integrals is made with results obtained from single center expansions by translation of the wave function to a single center with integer principal quantum numbers and results obtained from the Cuba numerical integration algorithm, respectively. The procedures discussed in this work are capable of yielding highly accurate two-center two-electron integrals for all ranges of orbital parameters.

  1. Benchmark values for molecular two-electron integrals arising from the Dirac equation

    NASA Astrophysics Data System (ADS)

    Baǧcı, A.; Hoggan, P. E.

    2015-02-01

    The two-center two-electron Coulomb and hybrid integrals arising in relativistic and nonrelativistic ab initio calculations on molecules are evaluated. Compact, arbitrarily accurate expressions are obtained. They are expressed through molecular auxiliary functions and evaluated with the numerical Global-adaptive method for arbitrary values of parameters in the noninteger Slater-type orbitals. Highly accurate benchmark values are presented for these integrals. The convergence properties of new molecular auxiliary functions are investigated. The comparison for two-center two-electron integrals is made with results obtained from single center expansions by translation of the wave function to a single center with integer principal quantum numbers and results obtained from the Cuba numerical integration algorithm, respectively. The procedures discussed in this work are capable of yielding highly accurate two-center two-electron integrals for all ranges of orbital parameters.

  2. Studies regarding the quality of numerical weather forecasts of the WRF model integrated at high-resolutions for the Romanian territory

    SciTech Connect

    Iriza, Amalia; Dumitrache, Rodica C.; Lupascu, Aurelia; Stefan, Sabina

    2016-01-01

    Our paper aims to evaluate the quality of high-resolution weather forecasts from the Weather Research and Forecasting (WRF) numerical weather prediction model. The lateral boundary conditions were obtained from the numerical output of the Consortium for Small-scale Modeling (COSMO) model at 7 km horizontal resolution. Furthermore, the WRF model was run for January and July 2013 at two horizontal resolutions (3 and 1 km). The numerical forecasts of the WRF model were evaluated using different statistical scores for 2 m temperature and 10 m wind speed. Our results showed a tendency of the WRF model to overestimate the values of the analyzed parameters in comparison to observations.

  3. Determining the Numerical Stability of Quantum Chemistry Algorithms.

    PubMed

    Knizia, Gerald; Li, Wenbin; Simon, Sven; Werner, Hans-Joachim

    2011-08-09

    We present a simple, broadly applicable method for determining the numerical properties of quantum chemistry algorithms. The method deliberately introduces random numerical noise into computations, of the same order of magnitude as the floating point precision. Accordingly, repeated runs of an algorithm give slightly different results, which can be analyzed statistically to obtain precise estimates of its numerical stability. This noise is produced by automatic code injection into regular compiler output, so that no substantial programming effort is required, only a recompilation of the affected program sections. The method is applied to investigate: (i) the numerical stability of the three-center Obara-Saika integral evaluation scheme for high angular momenta, (ii) whether coupled cluster perturbative triples can be evaluated with single precision arithmetic, (iii) how to implement the density fitting approximation in Møller-Plesset perturbation theory (MP2) most accurately, and (iv) which parts of density fitted MP2 can be safely evaluated with single precision arithmetic. In the integral case, we find a numerical instability in an equation that is used in almost all integral programs. Based on the results of (ii) and (iv), we conjecture that single precision arithmetic can be applied whenever a calculation is done in an orthogonal basis set and excessively long linear sums are avoided.
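
    The noise-injection idea can be mimicked at a high level by perturbing the inputs of a numerical routine at the scale of the machine epsilon and examining the spread of the outputs; the sketch below does this in Python, whereas the paper injects the noise by compiler-level instrumentation of the algorithm itself.

```python
# Hedged, simplified probe of numerical stability: perturb inputs by random
# relative noise of order the machine epsilon and look at the spread of results.
# The paper's method instruments compiled code instead; this is only analogous.
import numpy as np

def stability_probe(func, x, n_runs=50, eps=np.finfo(np.float64).eps):
    rng = np.random.default_rng(1)
    results = []
    for _ in range(n_runs):
        noisy = x * (1.0 + eps * rng.uniform(-1.0, 1.0, size=x.shape))
        results.append(func(noisy))
    results = np.array(results)
    return results.mean(), results.std()    # the spread indicates how many digits are reproducible

# Example: summation of widely scaled numbers (assumed, illustrative test case)
x = np.concatenate([np.full(10**6, 1e-8), [1e8]])
mean, std = stability_probe(lambda v: float(np.sum(v)), x)
print(f"mean = {mean!r}, std = {std:.3e}")
```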

  4. Numerical evaluation of the incomplete airy functions and their application to high frequency scattering and diffraction

    NASA Technical Reports Server (NTRS)

    Constantinides, E. D.; Marhefka, R. J.

    1992-01-01

    The incomplete Airy integrals serve as canonical functions for the uniform ray optical solutions to several high frequency scattering and diffraction problems that involve a class of integrals characterized by two stationary points that are arbitrarily close to one another or to an integration endpoint. Integrals of such analytical properties describe transition region phenomena associated with composite shadow boundaries. An efficient and accurate method for computing the incomplete Airy functions would make the solutions to such problems useful for engineering purposes. Here, a convergent series solution form for the incomplete Airy functions is derived. Asymptotic expansions involving several terms were also developed and serve as large argument approximations. The combination of the series solution form with the asymptotic formulae provides for an efficient and accurate computation of the incomplete Airy functions. Validation of accuracy is accomplished using direct numerical integration data.
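
    As a purely illustrative reference computation (the exact definition and normalization of the incomplete Airy functions vary by author; the finite-limit form below is an assumption made for this sketch), such integrals can be evaluated by direct numerical integration of their real and imaginary parts:

    ```python
    # Illustrative only: direct numerical evaluation of an incomplete Airy-type
    # integral over a finite interval, of the kind used as reference data when
    # validating series and asymptotic formulas.  The limits and normalization
    # are assumptions for this sketch.
    import numpy as np
    from scipy.integrate import quad

    def incomplete_airy(xi, a, b):
        """Integral of exp(i*(xi*t + t**3/3)) dt from t=a to t=b."""
        phase = lambda t: xi * t + t**3 / 3.0
        re, _ = quad(lambda t: np.cos(phase(t)), a, b, limit=200)
        im, _ = quad(lambda t: np.sin(phase(t)), a, b, limit=200)
        return complex(re, im)

    if __name__ == "__main__":
        print(incomplete_airy(xi=1.5, a=0.0, b=4.0))
    ```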

  5. A RANS/DES Numerical Procedure for Axisymmetric Flows with and without Strong Rotation

    SciTech Connect

    Andrade, Andrew Jacob

    2007-01-01

    A RANS/DES numerical procedure with an extended Lax-Wendroff control-volume scheme and turbulence model is described for the accurate simulation of internal/external axisymmetric flow with and without strong rotation. This new procedure is an extension, from Cartesian to cylindrical coordinates, of (1) a second-order accurate multi-grid, control-volume integration scheme, and (2) a k-ω turbulence model. This paper outlines both the axisymmetric corrections to these numerical schemes and the development of techniques pertaining to numerical dissipation, multi-block connectivity, parallelization, etc. Furthermore, analytical and experimental case studies are presented to demonstrate accuracy and computational efficiency. Notes are also made on the numerical stability of highly rotational flows.

  6. GO2OGS 1.0: a versatile workflow to integrate complex geological information with fault data into numerical simulation models

    NASA Astrophysics Data System (ADS)

    Fischer, T.; Naumov, D.; Sattler, S.; Kolditz, O.; Walther, M.

    2015-11-01

    We offer a versatile workflow to convert geological models built with the Paradigm™ GOCAD© (Geological Object Computer Aided Design) software into the open-source VTU (Visualization Toolkit unstructured grid) format for usage in numerical simulation models. Tackling relevant scientific questions or engineering tasks often involves multidisciplinary approaches. Conversion workflows are needed as a way of communication between the diverse tools of the various disciplines. Our approach offers an open-source, platform-independent, robust, and comprehensible method that is potentially useful for a multitude of environmental studies. With two application examples in the Thuringian Syncline, we show how a heterogeneous geological GOCAD model including multiple layers and faults can be used for numerical groundwater flow modeling, in our case employing the OpenGeoSys open-source numerical toolbox for groundwater flow simulations. The presented workflow offers the chance to incorporate increasingly detailed data, utilizing the growing availability of computational power to simulate numerical models.

  7. A new approach to compute accurate velocity of meteors

    NASA Astrophysics Data System (ADS)

    Egal, Auriane; Gural, Peter; Vaubaillon, Jeremie; Colas, Francois; Thuillot, William

    2016-10-01

    The CABERNET project was designed to push the limits of meteoroid orbit measurements by improving the determination of the meteors' velocities. Indeed, despite the development of camera networks dedicated to the observation of meteors, there is still an important discrepancy between the measured orbits of meteoroids and the theoretical results. The gap between the observed and theoretical semi-major axes of the orbits is especially significant; an accurate determination of the orbits of meteoroids therefore largely depends on the computation of the pre-atmospheric velocities. It is thus imperative to find out how to increase the precision of the velocity measurements. In this work, we perform an analysis of different methods currently used to compute the velocities and trajectories of meteors. They are based on the intersecting planes method developed by Ceplecha (1987), the least squares method of Borovicka (1990), and the multi-parameter fitting (MPF) method published by Gural (2012). In order to objectively compare the performances of these techniques, we have simulated realistic meteors ('fakeors') reproducing the different measurement errors of many camera networks. Some fakeors are built following the propagation models studied by Gural (2012), and others are created by numerical integration using the Borovicka et al. (2007) model. Different optimization techniques have also been investigated in order to pick the most suitable one for solving the MPF, and the influence of the geometry of the trajectory on the result is also presented. We present here the results of an improved implementation of the multi-parameter fitting that allows an accurate orbit computation of meteors with CABERNET. The comparison of the different velocity computations seems to show that, while the MPF is by far the best method for solving the trajectory and the velocity of a meteor, the ill-conditioning of the cost functions used can lead to large estimation errors for noisy data.
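
    A toy analogue of the multi-parameter fitting idea is sketched below: a simple deceleration model of along-track position is fitted to noisy synthetic timings by nonlinear least squares. Gural (2012) fits richer propagation models to all cameras simultaneously; the model form and numbers here are invented for illustration:

    ```python
    # Toy analogue of multi-parameter fitting: fit a simple deceleration model
    # of along-track position to noisy timing data with nonlinear least squares.
    # The model and all numerical values are invented for illustration.
    import numpy as np
    from scipy.optimize import least_squares

    def model(params, t):
        s0, v0, a = params                 # start position, initial speed, deceleration
        return s0 + v0 * t - 0.5 * a * t**2

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 0.5, 40)          # seconds of visible flight
    true = (0.0, 35.0, 8.0)                # km, km/s, km/s^2 (illustrative values)
    s_obs = model(true, t) + 0.02 * rng.standard_normal(t.size)  # noisy positions

    fit = least_squares(lambda p: model(p, t) - s_obs, x0=(0.0, 30.0, 0.0))
    print("estimated v0 =", fit.x[1], "km/s")
    ```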

  8. Hindi Numerals.

    ERIC Educational Resources Information Center

    Bright, William

    In most languages encountered by linguists, the numerals, considered as a paradigmatic set, constitute a morpho-syntactic problem of only moderate complexity. The Indo-Aryan language family of North India, however, presents a curious contrast. The relatively regular numeral system of Sanskrit, as it has developed historically into the modern…

  9. Progress in fast, accurate multi-scale climate simulations

    DOE PAGES

    Collins, W. D.; Johansen, H.; Evans, K. J.; ...

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  10. Progress in fast, accurate multi-scale climate simulations

    SciTech Connect

    Collins, W. D.; Johansen, H.; Evans, K. J.; Woodward, C. S.; Caldwell, P. M.

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  11. Progress in Fast, Accurate Multi-scale Climate Simulations

    SciTech Connect

    Collins, William D; Johansen, Hans; Evans, Katherine J; Woodward, Carol S.; Caldwell, Peter

    2015-01-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that these approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  12. Studies regarding the quality of numerical weather forecasts of the WRF model integrated at high-resolutions for the Romanian territory

    DOE PAGES

    Iriza, Amalia; Dumitrache, Rodica C.; Lupascu, Aurelia; ...

    2016-01-01

    Our paper aims to evaluate the quality of high-resolution weather forecasts from the Weather Research and Forecasting (WRF) numerical weather prediction model. The lateral and boundary conditions were obtained from the numerical output of the Consortium for Small-scale Modeling (COSMO) model at 7 km horizontal resolution. Furthermore, the WRF model was run for January and July 2013 at two horizontal resolutions (3 and 1 km). The numerical forecasts of the WRF model were evaluated using different statistical scores for 2 m temperature and 10 m wind speed. Our results showed a tendency of the WRF model to overestimate the values of the analyzed parameters in comparison to observations.

  13. Highly uniform parallel microfabrication using a large numerical aperture system

    NASA Astrophysics Data System (ADS)

    Zhang, Zi-Yu; Zhang, Chen-Chu; Hu, Yan-Lei; Wang, Chao-Wei; Li, Jia-Wen; Su, Ya-Hui; Chu, Jia-Ru; Wu, Dong

    2016-07-01

    In this letter, we report an improved algorithm to produce accurate phase patterns for generating highly uniform diffraction-limited multifocal arrays in a large numerical aperture objective system. It is shown that, based on the original diffraction integral, the uniformity of the diffraction-limited focal arrays can be improved from ˜75% to >97%, owing to the critical consideration of the aperture function and apodization effect associated with a large numerical aperture objective. The experimental results, e.g., 3 × 3 arrays of squares and triangles and seven microlens arrays with high uniformity, further verify the advantage of the improved algorithm. This algorithm enables the laser parallel processing technology to realize uniform microstructures and functional devices in the microfabrication system with a large numerical aperture objective.

  14. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one intermediate state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
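
    The median-based monotonicity constraint mentioned above can be illustrated with a short sketch. The limiter below is a standard construction written with the median function and is not claimed to be the exact constraint of the paper:

    ```python
    # Hedged sketch of a median-based monotonicity-limited slope for a
    # piecewise-linear reconstruction.  This is a standard limiter expressed
    # with the median function, not necessarily the paper's exact constraint.
    import numpy as np

    def median3(a, b, c):
        """Median of three values (elementwise)."""
        return np.maximum(np.minimum(a, b), np.minimum(np.maximum(a, b), c))

    def limited_slopes(u):
        """Monotonicity-limited cell slopes for cell averages u (1-D array)."""
        d_minus = u[1:-1] - u[:-2]          # backward differences
        d_plus = u[2:] - u[1:-1]            # forward differences
        central = 0.5 * (d_minus + d_plus)  # unlimited central slope
        # Clip the central slope into the monotonicity-preserving range
        # [0, 2*d_minus] and [0, 2*d_plus] using the median function.
        s = median3(central, np.zeros_like(central), 2.0 * d_minus)
        s = median3(s, np.zeros_like(s), 2.0 * d_plus)
        slopes = np.zeros_like(u)
        slopes[1:-1] = s                    # first/last cells left at zero slope
        return slopes

    if __name__ == "__main__":
        u = np.array([0.0, 0.0, 0.2, 1.0, 1.0, 0.9, 0.3, 0.0])
        print(limited_slopes(u))
    ```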

  15. High order accurate finite difference schemes based on symmetry preservation

    NASA Astrophysics Data System (ADS)

    Ozbenli, Ersin; Vedula, Prakash

    2016-11-01

    A new algorithm for development of high order accurate finite difference schemes for numerical solution of partial differential equations using Lie symmetries is presented. Considering applicable symmetry groups (such as those relevant to space/time translations, Galilean transformation, scaling, rotation and projection) of a partial differential equation, invariant numerical schemes are constructed based on the notions of moving frames and modified equations. Several strategies for construction of invariant numerical schemes with a desired order of accuracy are analyzed. Performance of the proposed algorithm is demonstrated using analysis of one-dimensional partial differential equations, such as linear advection-diffusion equations, the inviscid Burgers equation, and the viscous Burgers equation, as our test cases. Through numerical simulations based on these examples, the expected improvement in accuracy of invariant numerical schemes (up to fourth order) is demonstrated. Advantages due to implementation and enhanced computational efficiency inherent in our proposed algorithm are presented. Extension of the basic framework to multidimensional partial differential equations is also discussed.

  16. a Numerical Method for Scattering from Acoustically Soft and Hard Thin Bodies in Two Dimensions

    NASA Astrophysics Data System (ADS)

    YANG, S. A.

    2002-03-01

    This paper presents a numerical method for predicting the acoustic scattering from two-dimensional (2-D) thin bodies. Both the Dirichlet and Neumann problems are considered. Applying the thin-body formulation leads to the boundary integral equations involving weakly singular and hypersingular kernels. Completely regularizing these kinds of singular kernels is thus the main concern of this paper. The basic subtraction-addition technique is adopted. The purpose of incorporating a parametric representation of the boundary surface with the integral equations is two-fold. The first is to facilitate the numerical implementation for arbitrarily shaped bodies. The second one is to facilitate the expansion of the unknown function into a series of Chebyshev polynomials. Some of the resultant integrals are evaluated by using the Gauss-Chebyshev integration rules after moving the series coefficients to the outside of the integral sign; others are evaluated exactly, including the modified hypersingular integral. The numerical implementation basically includes only two parts, one for evaluating the ordinary integrals and the other for solving a system of algebraic equations. Thus, the current method is highly efficient and accurate because these two solution procedures are easy and straightforward. Numerical calculations consist of the acoustic scattering by flat and curved plates. Comparisons with analytical solutions for flat plates are made.
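
    For reference, the Gauss-Chebyshev rule used above integrates f(x)/sqrt(1 - x^2) over (-1, 1) exactly for polynomial f up to degree 2n - 1; a minimal check (illustrative only, unrelated to the scattering kernels of the paper) is:

    ```python
    # Minimal illustration of the Gauss-Chebyshev rule mentioned above.
    import numpy as np

    n = 8
    nodes, weights = np.polynomial.chebyshev.chebgauss(n)   # x_k, w_k = pi/n

    # Example: integral of x^2 / sqrt(1 - x^2) over (-1, 1) equals pi/2.
    approx = np.sum(weights * nodes**2)
    print(approx, np.pi / 2)
    ```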

  17. A Workshop on the Integration of Numerical and Symbolic Computing Methods Held in Saratoga Springs, New York on July 9-11, 1990

    DTIC Science & Technology

    1991-04-01

    The project summary describes a workshop fostering dialogue among researchers in symbolic methods and numerical computation, and their applications in certain disciplines of artificial intelligence. (The remainder of the record consists of fragmentary form text and contact information for Purdue University and the Massachusetts Institute of Technology.)

  18. Toward Accurate and Quantitative Comparative Metagenomics

    PubMed Central

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  19. Obtaining accurate translations from expressed sequence tags.

    PubMed

    Wasmuth, James; Blaxter, Mark

    2009-01-01

    The genomes of an increasing number of species are being investigated through the generation of expressed sequence tags (ESTs). However, ESTs are prone to sequencing errors and typically define incomplete transcripts, making downstream annotation difficult. Annotation would be greatly improved with robust polypeptide translations. Many current solutions for EST translation require a large number of full-length gene sequences for training purposes, a resource that is not available for the majority of EST projects. As part of our ongoing EST programs investigating these "neglected" genomes, we have developed a polypeptide prediction pipeline, prot4EST. It incorporates freely available software to produce final translations that are more accurate than those derived from any single method. We describe how this integrated approach goes a long way to overcoming the deficit in training data.

  20. Magnetic ranging tool accurately guides replacement well

    SciTech Connect

    Lane, J.B.; Wesson, J.P.

    1992-12-21

    This paper reports on magnetic ranging surveys and directional drilling technology which accurately guided a replacement well bore to intersect a leaking gas storage well with casing damage. The second well bore was then used to pump cement into the original leaking casing shoe. The repair well bore kicked off from the surface hole, bypassed casing damage in the middle of the well, and intersected the damaged well near the casing shoe. The repair well was subsequently completed in the gas storage zone near the original well bore, salvaging the valuable bottom hole location in the reservoir. This method would prevent the loss of storage gas, and it would prevent a potential underground blowout that could permanently damage the integrity of the storage field.

  1. Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2009-01-01

    Recent advances in numerical relativity have fueled an explosion of progress in understanding the predictions of Einstein's theory of gravity, General Relativity, for the strong field dynamics, the gravitational radiation wave forms, and consequently the state of the remnant produced from the merger of compact binary objects. I will review recent results from the field, focusing on mergers of two black holes.

  2. Linearized Implicit Numerical Method for Burgers' Equation

    NASA Astrophysics Data System (ADS)

    Mukundan, Vijitha; Awasthi, Ashish

    2016-12-01

    In this work, a novel numerical scheme based on the method of lines (MOL) is proposed to solve the nonlinear time-dependent Burgers' equation. The Burgers' equation is semi-discretized in the spatial direction using MOL to yield a system of nonlinear ordinary differential equations in time. The resulting system of nonlinear differential equations is integrated by an implicit finite difference method. We have not used the Cole-Hopf transformation, which gives less accurate solutions for very small values of the kinematic viscosity. Also, we have not considered nonlinear solvers, which are computationally costlier and take more running time. In the proposed scheme the nonlinearity is tackled by a Taylor series, and the resulting fully discretized scheme is easy and practical to use. The proposed method is unconditionally stable in the linear sense. Furthermore, the efficiency of the proposed scheme is demonstrated using three test problems.
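
    A minimal method-of-lines sketch for the viscous Burgers' equation is given below, semi-discretized in space and integrated with a library implicit (BDF) solver. It illustrates the MOL idea only; the paper's scheme instead linearizes the nonlinearity with a Taylor expansion inside its own implicit update, and the grid and viscosity values here are invented:

    ```python
    # Method-of-lines sketch for the viscous Burgers equation: semi-discretize
    # in space, integrate the resulting ODE system with an implicit BDF solver.
    # Illustrative only; not the linearized implicit scheme of the paper.
    import numpy as np
    from scipy.integrate import solve_ivp

    nu = 0.01                                  # kinematic viscosity (invented)
    N = 200
    x = np.linspace(0.0, 1.0, N)
    dx = x[1] - x[0]
    u0 = np.sin(np.pi * x)                     # initial condition, u(0)=u(1)=0

    def rhs(t, u):
        dudt = np.zeros_like(u)
        ux = (u[2:] - u[:-2]) / (2.0 * dx)             # central first derivative
        uxx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2 # second derivative
        dudt[1:-1] = -u[1:-1] * ux + nu * uxx          # interior nodes
        return dudt                                     # boundary nodes stay fixed

    sol = solve_ivp(rhs, (0.0, 0.5), u0, method="BDF", t_eval=[0.5])
    print(sol.y[:, -1].max())
    ```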

  3. Numerical modeling and environmental isotope methods in integrated mine-water management: a case study from the Witwatersrand basin, South Africa

    NASA Astrophysics Data System (ADS)

    Mengistu, Haile; Tessema, Abera; Abiye, Tamiru; Demlie, Molla; Lin, Haili

    2015-05-01

    Improved groundwater flow conceptualization was achieved using environmental stable isotope (ESI) and hydrochemical information to complete a numerical groundwater flow model with reasonable certainty. The study aimed to assess the source of excess water at a pumping shaft located near the town of Stilfontein, North West Province, South Africa. The results indicate that the water intercepted at Margaret Shaft comes largely from seepage of a nearby mine tailings dam (Dam 5) and from the upper dolomite aquifer. If pumping at the shaft continues at the current rate and Dam 5 is decommissioned, neighbouring shallow farm boreholes would dry up within approximately 10 years. Stable isotope data of shaft water indicate that up to 50 % of the pumped water from Margaret Shaft is recirculated, mainly from Dam 5. The results are supplemented by tritium data, demonstrating that recent recharge is taking place through open fractures as well as man-made underground workings, whereas hydrochemical data of fissure water samples from roughly 950 m below ground level exhibit mine-water signatures. Pumping at the shaft, which captures shallow groundwater as well as seepage from surface dams, is a highly recommended option for preventing flooding of downstream mines. The results of this research highlight the importance of additional methods (ESI and hydrochemical analyses) to improve flow conceptualization and numerical modelling.

  4. Fast and accurate implementation of Fourier spectral approximations of nonlocal diffusion operators and its applications

    NASA Astrophysics Data System (ADS)

    Du, Qiang; Yang, Jiang

    2017-03-01

    This work is concerned with the Fourier spectral approximation of various integral differential equations associated with some linear nonlocal diffusion and peridynamic operators under periodic boundary conditions. For radially symmetric kernels, the nonlocal operators under consideration are diagonalizable in the Fourier space so that the main computational challenge is on the accurate and fast evaluation of their eigenvalues or Fourier symbols consisting of possibly singular and highly oscillatory integrals. For a large class of fractional power-like kernels, we propose a new approach based on reformulating the Fourier symbols both as coefficients of a series expansion and solutions of some simple ODE models. We then propose a hybrid algorithm that utilizes both truncated series expansions and high order Runge-Kutta ODE solvers to provide fast evaluation of Fourier symbols in both one and higher dimensional spaces. It is shown that this hybrid algorithm is robust, efficient and accurate. As applications, we combine this hybrid spectral discretization in the spatial variables and the fourth-order exponential time differencing Runge-Kutta for temporal discretization to offer high order approximations of some nonlocal gradient dynamics including nonlocal Allen-Cahn equations, nonlocal Cahn-Hilliard equations, and nonlocal phase-field crystal models. Numerical results show the accuracy and effectiveness of the fully discrete scheme and illustrate some interesting phenomena associated with the nonlocal models.
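
    The central observation that such operators are diagonal in Fourier space can be sketched briefly. The fractional-Laplacian-type symbol used below is only a stand-in (the paper evaluates more general symbols via series expansions and ODE solves), and the grid and density are invented:

    ```python
    # Hedged sketch: a nonlocal operator diagonal in Fourier space is applied
    # by multiplying the Fourier coefficients of the density by its symbol.
    # The symbol -|k|^alpha is a stand-in, not the paper's general kernels.
    import numpy as np

    N, L, alpha = 256, 2.0 * np.pi, 1.5
    x = np.linspace(0.0, L, N, endpoint=False)
    k = np.fft.fftfreq(N, d=L / N) * 2.0 * np.pi      # angular wavenumbers
    symbol = -np.abs(k) ** alpha                      # Fourier symbol of the operator

    u = np.exp(np.cos(x))                             # smooth periodic density
    Lu = np.real(np.fft.ifft(symbol * np.fft.fft(u))) # operator applied to u
    print(Lu[:4])
    ```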

  5. Accurate and efficient computation of nonlocal potentials based on Gaussian-sum approximation

    NASA Astrophysics Data System (ADS)

    Exl, Lukas; Mauser, Norbert J.; Zhang, Yong

    2016-12-01

    We introduce an accurate and efficient method for the numerical evaluation of nonlocal potentials, including the 3D/2D Coulomb, 2D Poisson and 3D dipole-dipole potentials. Our method is based on a Gaussian-sum approximation of the singular convolution kernel combined with a Taylor expansion of the density. Starting from the convolution formulation of the nonlocal potential, for smooth and fast decaying densities, we make full use of the Fourier pseudospectral (plane wave) approximation of the density and a separable Gaussian-sum approximation of the kernel in an interval where the singularity (the origin) is excluded. The potential is separated into a regular integral and a near-field singular correction integral. The first is computed with the Fourier pseudospectral method, while the latter is well resolved utilizing a low-order Taylor expansion of the density. Both parts are accelerated by fast Fourier transforms (FFT). The method is accurate (14-16 digits), efficient (O(N log N) complexity), low in storage, easily adaptable to other kernels, applicable for anisotropic densities and highly parallelizable.

  6. The numerical analysis of a turbulent compressible jet

    NASA Astrophysics Data System (ADS)

    Debonis, James Raymond

    2000-10-01

    A numerical method to simulate high Reynolds number jet flows was formulated and applied to gain a better understanding of the flow physics. Large-eddy simulation was chosen as the most promising approach to model the turbulent structures due to its compromise between accuracy and computational expense. The filtered Navier-Stokes equations were developed including a total energy form of the energy equation. Sub-grid scale models for the momentum and energy equations were adapted from compressible forms of Smagorinsky's original model. The effect of using disparate temporal and spatial accuracy in a numerical scheme was discovered through one-dimensional model problems and a new uniformly fourth-order accurate numerical method was developed. Results from two and three dimensional validation exercises show that the code accurately reproduces both viscous and inviscid flows. Numerous axisymmetric jet simulations were performed to investigate the effect of grid resolution, numerical scheme, exit boundary conditions and sub-grid scale modeling on the solution and the results were used to guide the three-dimensional calculations. Three-dimensional calculations of a Mach 1.4 jet showed that this LES simulation accurately captures the physics of the turbulent flow. The agreement with experimental data is relatively good and much better than results in the current literature. Turbulent intensities indicate that the turbulent structures at this level of modeling are not isotropic and this information could lend itself to the development of improved sub-grid scale models for LES and turbulence models for RANS simulations. A two point correlation technique was used to quantify the turbulent structures. Two point space correlations were used to obtain a measure of the integral length scale, which proved to be approximately ½Dj. Two point space-time correlations were used to obtain the convection velocity for the turbulent structures. This velocity ranged from 0.57 to 0.71 Uj.
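
    The two-point correlation analysis mentioned above can be illustrated with synthetic data (the signal, spacing, and window length below are invented); the integral length scale is estimated by integrating the correlation up to its first zero crossing:

    ```python
    # Illustrative two-point space correlation and integral length scale from a
    # synthetic 1-D "velocity" signal; all values are invented.
    import numpy as np

    rng = np.random.default_rng(2)
    N, dx = 4096, 0.01
    # Synthetic signal with a finite correlation length (moving average).
    u = np.convolve(rng.standard_normal(N), np.ones(40) / 40.0, mode="same")
    u -= u.mean()

    def correlation(u, max_lag):
        n = len(u)
        var = np.mean(u * u)
        return np.array([np.mean(u[:n - r] * u[r:]) / var for r in range(max_lag)])

    R = correlation(u, 400)
    zero = np.argmax(R <= 0.0) if np.any(R <= 0.0) else len(R)
    integral_scale = np.sum(R[:zero]) * dx        # rectangle-rule integration
    print("integral length scale ~", integral_scale)
    ```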

  7. Accurate modelling of unsteady flows in collapsible tubes.

    PubMed

    Marchandise, Emilie; Flaud, Patrice

    2010-01-01

    The context of this paper is the development of a general and efficient numerical haemodynamic tool to help clinicians and researchers in understanding physiological flow phenomena. We propose an accurate one-dimensional Runge-Kutta discontinuous Galerkin (RK-DG) method coupled with lumped parameter models for the boundary conditions. The suggested model has already been successfully applied to haemodynamics in arteries and is now extended to the flow in collapsible tubes such as veins. The main difference with cardiovascular simulations is that the flow may become supercritical and elastic jumps may appear, with the numerical consequence that the scheme may not remain monotone if no limiting procedure is introduced. We show that our second-order RK-DG method equipped with an approximate Roe's Riemann solver and a slope-limiting procedure allows us to capture elastic jumps accurately. Moreover, this paper demonstrates that the complex physics associated with such flows is more accurately modelled than with traditional methods such as finite difference methods or finite volumes. We present various benchmark problems that show the flexibility and applicability of the numerical method. Our solutions are compared with analytical solutions when they are available and with solutions obtained using other numerical methods. Finally, to illustrate the clinical interest, we study the emptying process in a calf vein squeezed by contracting skeletal muscle in a normal and pathological subject. We compare our results with experimental simulations and discuss the sensitivity to parameters of our model.

  8. Accurately measuring dynamic coefficient of friction in ultraform finishing

    NASA Astrophysics Data System (ADS)

    Briggs, Dennis; Echaves, Samantha; Pidgeon, Brendan; Travis, Nathan; Ellis, Jonathan D.

    2013-09-01

    UltraForm Finishing (UFF) is a deterministic sub-aperture computer numerically controlled grinding and polishing platform designed by OptiPro Systems. UFF is used to grind and polish a variety of optics from simple spherical to fully freeform, and numerous materials from glasses to optical ceramics. The UFF system consists of an abrasive belt around a compliant wheel that rotates and contacts the part to remove material. This work aims to accurately measure the dynamic coefficient of friction (μ), how it changes as a function of belt wear, and how this ultimately affects material removal rates. The coefficient of friction has been examined in terms of contact mechanics and Preston's equation to determine accurate material removal rates. By accurately predicting changes in μ, polishing iterations can be more accurately predicted, reducing the total number of iterations required to meet specifications. We have established an experimental apparatus that can accurately measure μ by measuring triaxial forces during translating loading conditions or while manufacturing the removal spots used to calculate material removal rates. Using this system, we will demonstrate μ measurements for UFF belts during different states of their lifecycle and assess the material removal function from spot diagrams as a function of wear. Ultimately, we will use this system for qualifying belt-wheel-material combinations to develop a spot-morphing model to better predict instantaneous material removal functions.
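
    Preston's equation referred to above states that the removal rate is proportional to the contact pressure times the relative surface speed. A hedged sketch, with an invented Preston coefficient and process values, is:

    ```python
    # Hedged sketch of a Preston-type material removal estimate; the Preston
    # coefficient and process values below are invented for illustration.
    def preston_removal(k_p, pressure, velocity, dwell_time):
        """Removed depth = K_p * P * V * t (Preston's equation)."""
        return k_p * pressure * velocity * dwell_time

    # Example numbers (illustrative only): K_p in mm^2/N, P in N/mm^2,
    # V in mm/s, t in s -> removed depth in mm.
    depth = preston_removal(k_p=1.0e-7, pressure=0.05, velocity=2000.0, dwell_time=10.0)
    print(f"removed depth ~ {depth:.2e} mm")
    ```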

  9. Accurate and rapid micromixer for integrated microfluidic devices

    DOEpatents

    Van Dam, R. Michael; Liu, Kan; Shen, Kwang -Fu Clifton; Tseng, Hsian -Rong

    2015-09-22

    The invention may provide a microfluidic mixer having a droplet generator and a droplet mixer in selective fluid connection with the droplet generator. The droplet generator comprises first and second fluid chambers that are structured to be filled with respective first and second fluids that can each be held in isolation for a selectable period of time. The first and second fluid chambers are further structured to be reconfigured into a single combined chamber to allow the first and second fluids in the first and second fluid chambers to come into fluid contact with each other in the combined chamber for a selectable period of time prior to being brought into the droplet mixer.

  10. Numerical Optimization

    DTIC Science & Technology

    1992-12-01

    We consider a new method for the numerical solution both of nonlinear systems of equations and of complementarity problems. (The remainder of the record consists of fragmentary references, including "An Inexact Continuous Method for the Solution of Large Systems of Equations and Complementarity Problems", Matematica, Serie VII, Volume 9, Roma (1989), 521-543, and "A Quadratically Convergent Method for Linear Programming" by Stefano Herzel, Dipartimento di Matematica G. Castelnuovo, Roma.)

  11. Accurate D-bar Reconstructions of Conductivity Images Based on a Method of Moment with Sinc Basis

    PubMed Central

    Abbasi, Mahdi

    2014-01-01

    The planar D-bar integral equation is one of the inverse scattering solution methods for complex problems, including the inverse conductivity problem considered in applications such as electrical impedance tomography (EIT). Recently, two different methodologies have been considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves a high computational burden and the other suffers from a low convergence rate (CR). In this paper, a novel high-speed moment method based on the sinc basis is introduced to solve the two-dimensional D-bar integral equation. In this method, all functions within the D-bar integral equation are first expanded using the sinc basis functions. Then, the orthogonality properties of their products dissolve the integral operator of the D-bar equation and yield a discrete convolution equation. That is, the new moment method leads to the equation solution without direct computation of the D-bar integral. The resulting discrete convolution equation may be adapted to a suitable structure and solved using the fast Fourier transform. This allows us to reduce the computational complexity to as low as O(N² log N). Simulation results on solving D-bar equations arising in the EIT problem show that the proposed method is accurate with an ultra-linear CR. PMID:24696808
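
    The benefit of the discrete convolution structure can be illustrated with a generic sketch: a periodic equation of the form x + k * x (circular convolution) is diagonal in Fourier space and can be solved by pointwise division after FFTs at O(N log N) cost. The kernel and right-hand side below are invented and are not the D-bar kernel of the EIT problem:

    ```python
    # Generic illustration of solving a discrete (circular) convolution equation
    # x + conv(k, x) = b via FFTs; kernel and data are invented.
    import numpy as np

    N = 256
    rng = np.random.default_rng(3)
    k = np.exp(-np.linspace(0.0, 8.0, N))     # illustrative decaying kernel
    b = rng.standard_normal(N)

    K = np.fft.fft(k)
    X = np.fft.fft(b) / (1.0 + K)             # pointwise division in Fourier space
    x = np.real(np.fft.ifft(X))

    # Residual check: x + circular_conv(k, x) should reproduce b.
    residual = x + np.real(np.fft.ifft(np.fft.fft(k) * np.fft.fft(x))) - b
    print(np.max(np.abs(residual)))
    ```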

  12. An integrated strategy for rapid and accurate determination of free and cell-bound microcystins and related peptides in natural blooms by liquid chromatography-electrospray-high resolution mass spectrometry and matrix-assisted laser desorption/ionization time-of-flight/time-of-flight mass spectrometry using both positive and negative ionization modes.

    PubMed

    Flores, Cintia; Caixach, Josep

    2015-08-14

    An integrated high resolution mass spectrometry (HRMS) strategy has been developed for rapid and accurate determination of free and cell-bound microcystins (MCs) and related peptides in water blooms. The natural samples (water and algae) were filtered for independent analysis of aqueous and sestonic fractions. These fractions were analyzed by MALDI-TOF/TOF-MS and ESI-Orbitrap-HCD-MS. MALDI, ESI, and the study of fragmentation sequences have provided crucial structural information. The potential of combining positive and negative ionization modes, full scan and fragmentation acquisition modes (TOF/TOF and HCD) by HRMS, and high resolution and accurate mass was investigated in order to allow unequivocal determination of MCs. In addition, reliable quantitation was possible by HRMS. This combination helped to decrease the probability of false positives and negatives, as an alternative to commonly used LC-ESI-MS/MS methods. The analysis was non-targeted and therefore covered the possibility of analyzing all MC analogs concurrently without any pre-selection of target MCs. Furthermore, archived data were subjected to retrospective "post-targeted" analysis, and a screening of the samples for other potential toxins and related peptides, such as anabaenopeptins, was performed. Finally, the MS protocol and identification tools suggested were applied to the analysis of characteristic water blooms from Spanish reservoirs.

  13. Tool for the Integrated Dynamic Numerical Propulsion System Simulation (NPSS)/Turbine Engine Closed-Loop Transient Analysis (TTECTrA) User's Guide

    NASA Technical Reports Server (NTRS)

    Chin, Jeffrey C.; Csank, Jeffrey T.

    2016-01-01

    The Tool for Turbine Engine Closed-Loop Transient Analysis (TTECTrA ver2) is a control design tool that enables preliminary estimation of transient performance for models without requiring a full nonlinear controller to be designed. The program is compatible with subsonic engine models implemented in the MATLAB/Simulink (The Mathworks, Inc.) environment and the Numerical Propulsion System Simulation (NPSS) framework. At a specified flight condition, TTECTrA will design a closed-loop controller meeting user-defined requirements in a semi- or fully automated fashion. Multiple specifications may be provided, in which case TTECTrA will design one controller for each, producing a collection of controllers in a single run. Each resulting controller contains a setpoint map, a schedule of setpoint controller gains, and limiters, all contributing to transient characteristics. The goal of the program is to provide steady-state engine designers with more immediate feedback on the transient engine performance earlier in the design cycle.

  14. Accurate ab Initio Spin Densities.

    PubMed

    Boguslawski, Katharina; Marti, Konrad H; Legeza, Ors; Reiher, Markus

    2012-06-12

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys.2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput.2011, 7, 2740].

  15. Accurate free and forced rotational motions of rigid Venus

    NASA Astrophysics Data System (ADS)

    Cottereau, L.; Souchay, J.; Aljbaae, S.

    2010-06-01

    Context. The precise and accurate modelling of a terrestrial planet like Venus is an exciting and challenging topic, all the more interesting because it can be compared with that of Earth, for which such a modelling has already been achieved at the milli-arcsecond level. Aims: We aim to complete a previous study by determining the polhody at the milli-arcsecond level, i.e. the torque-free motion of the angular momentum axis of a rigid Venus in a body-fixed frame, as well as the nutation of its third axis of figure in space, which is fundamental from an observational point of view. Methods: We use the same theoretical framework as Kinoshita (1977, Celest. Mech., 15, 277) did to determine the precession-nutation motion of a rigid Earth. It is based on a representation of the rotation of a rigid Venus, with the help of Andoyer variables and a set of canonical equations in Hamiltonian formalism. Results: In the first part we computed the polhody and showed that this motion is highly elliptical, with a very long period of 525 cy compared with 430 d for the Earth. This is due to the very small dynamical flattening of Venus in comparison with our planet. In the second part we precisely computed the Oppolzer terms, which allow us to represent the motion in space of the third Venus figure axis with respect to the Venus angular momentum axis under the influence of the solar gravitational torque. We determined the corresponding tables of the nutation coefficients of the third figure axis both in longitude and in obliquity due to the Sun, which are of the same order of amplitude as for the Earth. We showed that the nutation coefficients for the third figure axis are significantly different from those of the angular momentum axis, in contrast to the case of the Earth. Our analytical results have been validated by a numerical integration, which revealed the indirect planetary effects.

  16. Fast and spectrally accurate Ewald summation for 2-periodic electrostatic systems

    NASA Astrophysics Data System (ADS)

    Lindbo, Dag; Tornberg, Anna-Karin

    2012-04-01

    A new method for Ewald summation in planar/slablike geometry, i.e., systems where periodicity applies in two dimensions and the last dimension is "free" (2P), is presented. We employ a spectral representation in terms of both Fourier series and integrals. This allows us to concisely derive both the 2P Ewald sum and a fast particle mesh Ewald (PME)-type method suitable for large-scale computations. The primary results are: (i) close and illuminating connections between the 2P problem and the standard Ewald sum and associated fast methods for full periodicity; (ii) a fast, O(N log N), and spectrally accurate PME-type method for the 2P k-space Ewald sum that uses vastly less memory than traditional PME methods; (iii) errors that decouple, such that parameter selection is simplified. We give analytical and numerical results to support this.

  17. Accurate adjoint design sensitivities for nano metal optics.

    PubMed

    Hansen, Paul; Hesselink, Lambertus

    2015-09-07

    We present a method for obtaining accurate numerical design sensitivities for metal-optical nanostructures. Adjoint design sensitivity analysis, long used in fluid mechanics and mechanical engineering for both optimization and structural analysis, is beginning to be used for nano-optics design, but it fails for sharp-cornered metal structures because the numerical error in electromagnetic simulations of metal structures is highest at sharp corners. These locations feature strong field enhancement and contribute strongly to design sensitivities. By using high-accuracy FEM calculations and rounding sharp features to a finite radius of curvature we obtain highly-accurate design sensitivities for 3D metal devices. To provide a bridge to the existing literature on adjoint methods in other fields, we derive the sensitivity equations for Maxwell's equations in the PDE framework widely used in fluid mechanics.

  18. Improving the Non-Hydrostatic Numerical Dust Model by Integrating Soil Moisture and Greenness Vegetation Fraction Data with Different Spatiotemporal Resolutions

    PubMed Central

    Yu, Manzhu; Yang, Chaowei

    2016-01-01

    Dust storms are devastating natural disasters that cost billions of dollars and many human lives every year. Using the Non-Hydrostatic Mesoscale Dust Model (NMM-dust), this research studies how different spatiotemporal resolutions of two input parameters (soil moisture and greenness vegetation fraction) impact the sensitivity and accuracy of a dust model. Experiments are conducted by simulating dust concentration during July 1–7, 2014, for the target area covering part of Arizona and California (31, 37, -118, -112), with a resolution of ~ 3 km. Using ground-based and satellite observations, this research validates the temporal evolution and spatial distribution of dust storm output from the NMM-dust, and quantifies model error using measurements of four evaluation metrics (mean bias error, root mean square error, correlation coefficient and fractional gross error). Results showed that the default configuration of NMM-dust (with a low spatiotemporal resolution of both input parameters) generates an overestimation of Aerosol Optical Depth (AOD). Although it is able to qualitatively reproduce the temporal trend of the dust event, the default configuration of NMM-dust cannot fully capture its actual spatial distribution. Adjusting the spatiotemporal resolution of soil moisture and vegetation cover datasets showed that the model is sensitive to both parameters. Increasing the spatiotemporal resolution of soil moisture effectively reduces model’s overestimation of AOD, while increasing the spatiotemporal resolution of vegetation cover changes the spatial distribution of reproduced dust storm. The adjustment of both parameters enables NMM-dust to capture the spatial distribution of dust storms, as well as reproducing more accurate dust concentration. PMID:27936136

  19. Improving the Non-Hydrostatic Numerical Dust Model by Integrating Soil Moisture and Greenness Vegetation Fraction Data with Different Spatiotemporal Resolutions.

    PubMed

    Yu, Manzhu; Yang, Chaowei

    2016-01-01

    Dust storms are devastating natural disasters that cost billions of dollars and many human lives every year. Using the Non-Hydrostatic Mesoscale Dust Model (NMM-dust), this research studies how different spatiotemporal resolutions of two input parameters (soil moisture and greenness vegetation fraction) impact the sensitivity and accuracy of a dust model. Experiments are conducted by simulating dust concentration during July 1-7, 2014, for the target area covering part of Arizona and California (31, 37, -118, -112), with a resolution of ~ 3 km. Using ground-based and satellite observations, this research validates the temporal evolution and spatial distribution of dust storm output from the NMM-dust, and quantifies model error using measurements of four evaluation metrics (mean bias error, root mean square error, correlation coefficient and fractional gross error). Results showed that the default configuration of NMM-dust (with a low spatiotemporal resolution of both input parameters) generates an overestimation of Aerosol Optical Depth (AOD). Although it is able to qualitatively reproduce the temporal trend of the dust event, the default configuration of NMM-dust cannot fully capture its actual spatial distribution. Adjusting the spatiotemporal resolution of soil moisture and vegetation cover datasets showed that the model is sensitive to both parameters. Increasing the spatiotemporal resolution of soil moisture effectively reduces model's overestimation of AOD, while increasing the spatiotemporal resolution of vegetation cover changes the spatial distribution of reproduced dust storm. The adjustment of both parameters enables NMM-dust to capture the spatial distribution of dust storms, as well as reproducing more accurate dust concentration.

  20. Accurate Scientific Visualization in Research and Physics Teaching

    NASA Astrophysics Data System (ADS)

    Wendler, Tim

    2011-10-01

    Accurate visualization is key in the expression and comprehension of physical principles. Many 3D animation software packages come with built-in numerical methods for a variety of fundamental classical systems. Scripting languages give access to low-level computational functionality, thereby revealing a virtual physics laboratory for teaching and research. Specific examples will be presented: Galilean relativistic hair, energy conservation in complex systems, scattering from a central force, and energy transfer in bi-molecular reactions.

  1. AN INTEGRAL EQUATION REPRESENTATION OF WIDE-BAND ELECTROMAGNETIC SCATTERING BY THIN SHEETS

    EPA Science Inventory

    An efficient, accurate numerical modeling scheme has been developed, based on the integral equation solution to compute electromagnetic (EM) responses of thin sheets over a wide frequency band. The thin-sheet approach is useful for simulating the EM response of a fracture system ...

  2. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being carried out to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  3. Accurate torque-speed performance prediction for brushless dc motors

    NASA Astrophysics Data System (ADS)

    Gipper, Patrick D.

    Desirable characteristics of the brushless dc motor (BLDCM) have resulted in their application for electrohydrostatic (EH) and electromechanical (EM) actuation systems. But to effectively apply the BLDCM requires accurate prediction of performance. The minimum necessary performance characteristics are motor torque versus speed, peak and average supply current and efficiency. BLDCM nonlinear simulation software specifically adapted for torque-speed prediction is presented. The capability of the software to quickly and accurately predict performance has been verified on fractional to integral HP motor sizes, and is presented. Additionally, the capability of torque-speed prediction with commutation angle advance is demonstrated.

  4. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  5. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  6. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  7. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  8. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  9. Dynamical Approach Study of Spurious Numerics in Nonlinear Computations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Mansour, Nagi (Technical Monitor)

    2002-01-01

    The last two decades have been an era when computation is ahead of analysis and when very large scale practical computations are increasingly used in poorly understood multiscale complex nonlinear physical problems and non-traditional fields. Ensuring a higher level of confidence in the predictability and reliability (PAR) of these numerical simulations could play a major role in furthering the design, understanding, affordability and safety of our next generation air and space transportation systems, and systems for planetary and atmospheric sciences, and in understanding the evolution and origin of life. The need to guarantee PAR becomes acute when computations offer the ONLY way of solving these types of data limited problems. Employing theory from nonlinear dynamical systems, some building blocks to ensure a higher level of confidence in PAR of numerical simulations have been revealed by the author and world expert collaborators in relevant fields. Five building blocks with supporting numerical examples were discussed. The next step is to utilize knowledge gained by including nonlinear dynamics, bifurcation and chaos theories as an integral part of the numerical process. The third step is to design integrated criteria for reliable and accurate algorithms that cater to the different multiscale nonlinear physics. This includes but is not limited to the construction of appropriate adaptive spatial and temporal discretizations that are suitable for the underlying governing equations. In addition, a multiresolution wavelets approach for adaptive numerical dissipation/filter controls for high speed turbulence, acoustics and combustion simulations will be sought. These steps are corner stones for guarding against spurious numerical solutions that are solutions of the discretized counterparts but are not solutions of the underlying governing equations.

  10. On the accurate simulation of tsunami wave propagation

    NASA Astrophysics Data System (ADS)

    Castro, C. E.; Käser, M.; Toro, E. F.

    2009-04-01

    A very important part of any tsunami early warning system is the numerical simulation of the wave propagation in the open sea and close to geometrically complex coastlines respecting bathymetric variations. Here we are interested in improving the numerical tools available to accurately simulate tsunami wave propagation on a Mediterranean basin scale. To this end, we need to accomplish some targets, such as high-order numerical simulation in space and time, preservation of steady-state conditions to avoid spurious oscillations, and description of complex geometries due to bathymetry and coastlines. We use the Arbitrary accuracy DERivatives Riemann problem method together with the Finite Volume method (ADER-FV) over non-structured triangular meshes. The novelty of this method is the improvement of the ADER-FV scheme, introducing the well-balanced property when geometrical sources are considered for unstructured meshes and arbitrary high-order accuracy. In previous work by Castro and Toro [1], the authors mention that ADER-FV schemes approach asymptotically the well-balanced condition, which was true for the test case mentioned in [1]. However, new evidence [2] shows that for real-scale problems such as the Mediterranean basin, and considering realistic bathymetry such as ETOPO-2 [3], this asymptotic behavior is not enough. Under these realistic conditions the standard ADER-FV scheme fails to accurately describe the propagation of gravity waves without being contaminated with spurious oscillations, also known as numerical waves. The main problem here is that at the discrete level, i.e. from a numerical point of view, the numerical scheme does not correctly balance the influence of the fluxes and the sources. Numerical schemes that retain this balance are said to satisfy the well-balanced property or the exact C-property. This unbalance diminishes as we refine the spatial discretization or increase the order of the numerical method; however, the computational cost increases considerably this way.

  11. A stable high-order perturbation of surfaces method for numerical simulation of diffraction problems in triply layered media

    NASA Astrophysics Data System (ADS)

    Hong, Youngjoon; Nicholls, David P.

    2017-02-01

    The accurate numerical simulation of linear waves interacting with periodic layered media is a crucial capability in engineering applications. In this contribution we study the stable and high-order accurate numerical simulation of the interaction of linear, time-harmonic waves with a periodic, triply layered medium with irregular interfaces. In contrast with volumetric approaches, High-Order Perturbation of Surfaces (HOPS) algorithms are inexpensive interfacial methods which rapidly and recursively estimate scattering returns by perturbation of the interface shape. In comparison with Boundary Integral/Element Methods, the stable HOPS algorithm we describe here does not require specialized quadrature rules, periodization strategies, or the solution of dense non-symmetric positive definite linear systems. In addition, the algorithm is provably stable as opposed to other classical HOPS approaches. With numerical experiments we show the remarkable efficiency, fidelity, and accuracy one can achieve with an implementation of this algorithm.

  12. Numerical solution methods for viscoelastic orthotropic materials

    NASA Technical Reports Server (NTRS)

    Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.

    1988-01-01

    Numerical solution methods for viscoelastic orthotropic materials, specifically fiber reinforced composite materials, are examined. The methods include classical lamination theory using time increments, direct solution of the Volterra integral, Zienkiewicz's linear Prony series method, and a new method called the Nonlinear Differential Equation Method (NDEM) which uses a nonlinear Prony series. The criteria used for comparison of the various methods include the stability of the solution technique, time step size stability, computer solution time length, and computer memory storage. The Volterra integral method allowed the implementation of higher order solution techniques but had difficulty with singular and weakly singular compliance functions. The Zienkiewicz solution technique, which requires the viscoelastic response to be modeled by a Prony series, works well for linear viscoelastic isotropic materials and small time steps. The new method, NDEM, uses a modified Prony series which allows nonlinear stress effects to be included and can be used with orthotropic nonlinear viscoelastic materials. The NDEM technique is shown to be accurate and stable for both linear and nonlinear conditions with minimal computer time.

  13. Numerical quadratures for approximate computation of ERBS

    NASA Astrophysics Data System (ADS)

    Zanaty, Peter

    2013-12-01

    In the ground-laying paper [3] on expo-rational B-splines (ERBS), the default numerical method for approximate computation of the integral with C∞-smooth integrand in the definition of ERBS is Romberg integration. In the present work, a variety of alternative numerical quadrature methods for computation of ERBS and other integrals with smooth integrands are studied, and their performance is compared on several benchmark examples.
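
    As a hedged illustration of the kind of comparison described above (not code from the paper), the sketch below contrasts Romberg integration with Gauss-Legendre quadrature on a generic smooth integrand; the integrand and the quadrature sizes are placeholders chosen for the example.

```python
# Sketch (not from the paper): Romberg integration vs. Gauss-Legendre
# quadrature for a smooth integrand; the integrand is a placeholder.
import numpy as np
from numpy.polynomial.legendre import leggauss

def romberg(f, a, b, levels=8):
    """Romberg table: repeated trapezoidal refinement plus Richardson extrapolation."""
    R = np.zeros((levels, levels))
    R[0, 0] = 0.5 * (b - a) * (f(a) + f(b))
    for k in range(1, levels):
        n = 2 ** k
        x = a + (b - a) * np.arange(1, n, 2) / n          # only the new midpoints
        R[k, 0] = 0.5 * R[k - 1, 0] + (b - a) / n * np.sum(f(x))
        for j in range(1, k + 1):
            R[k, j] = R[k, j - 1] + (R[k, j - 1] - R[k - 1, j - 1]) / (4 ** j - 1)
    return R[levels - 1, levels - 1]

def gauss_legendre(f, a, b, n=10):
    """n-point Gauss-Legendre quadrature on [a, b]."""
    x, w = leggauss(n)
    return 0.5 * (b - a) * np.sum(w * f(0.5 * (b - a) * x + 0.5 * (a + b)))

f = lambda t: np.exp(np.cos(3.0 * t))                     # smooth test integrand
exact = gauss_legendre(f, 0.0, 1.0, n=60)                 # high-order reference value
print("Romberg error       :", abs(romberg(f, 0.0, 1.0) - exact))
print("10-point Gauss error:", abs(gauss_legendre(f, 0.0, 1.0) - exact))
```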

  14. Accurate pressure gradient calculations in hydrostatic atmospheric models

    NASA Technical Reports Server (NTRS)

    Carroll, John J.; Mendez-Nunez, Luis R.; Tanrikulu, Saffet

    1987-01-01

    A method for the accurate calculation of the horizontal pressure gradient acceleration in hydrostatic atmospheric models is presented which is especially useful in situations where the isothermal surfaces are not parallel to the vertical coordinate surfaces. The present method is shown to be exact if the potential temperature lapse rate is constant between the vertical pressure integration limits. The technique is applied to both the integration of the hydrostatic equation and the computation of the slope correction term in the horizontal pressure gradient. A fixed vertical grid and a dynamic grid defined by the significant levels in the vertical temperature distribution are employed.

  15. Painfree and accurate Bayesian estimation of psychometric functions for (potentially) overdispersed data.

    PubMed

    Schütt, Heiko H; Harmeling, Stefan; Macke, Jakob H; Wichmann, Felix A

    2016-05-01

    The psychometric function describes how an experimental variable, such as stimulus strength, influences the behaviour of an observer. Estimation of psychometric functions from experimental data plays a central role in fields such as psychophysics, experimental psychology and in the behavioural neurosciences. Experimental data may exhibit substantial overdispersion, which may result from non-stationarity in the behaviour of observers. Here we extend the standard binomial model which is typically used for psychometric function estimation to a beta-binomial model. We show that the use of the beta-binomial model makes it possible to determine accurate credible intervals even in data which exhibit substantial overdispersion. This goes beyond classical measures for overdispersion (goodness-of-fit), which can detect overdispersion but provide no method for correct inference on overdispersed data. We use Bayesian inference methods for estimating the posterior distribution of the parameters of the psychometric function. Unlike previous Bayesian psychometric inference methods, our software implementation, psignifit 4, performs numerical integration of the posterior within automatically determined bounds. This avoids the use of Markov chain Monte Carlo (MCMC) methods typically requiring expert knowledge. Extensive numerical tests show the validity of the approach and we discuss implications of overdispersion for experimental design. A comprehensive MATLAB toolbox implementing the method is freely available; a python implementation providing the basic capabilities is also available.
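
    The sketch below is a minimal illustration of the beta-binomial idea with grid-based numerical integration of the posterior; it is not psignifit 4, and the data, the logistic parameterization, the fixed width and the fixed overdispersion parameter are all assumptions made for the example.

```python
# Minimal sketch (not psignifit 4): beta-binomial likelihood for overdispersed
# 2AFC data, with the posterior over a single threshold parameter integrated
# on a fixed grid instead of being sampled by MCMC.
import numpy as np
from scipy.stats import betabinom

x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])     # stimulus levels (assumed)
n = np.array([40, 40, 40, 40, 40])          # trials per level
k = np.array([22, 25, 31, 36, 39])          # correct responses per level

def psi(x, m, w):
    """Logistic psychometric function with threshold m, width w, guess rate 0.5."""
    return 0.5 + 0.5 / (1.0 + np.exp(-(np.log(x) - np.log(m)) / w))

nu, w = 20.0, 0.5                           # fixed overdispersion and width (assumed)
m_grid = np.linspace(0.2, 6.0, 400)         # grid over which the posterior is integrated

logpost = np.array([betabinom.logpmf(k, n, psi(x, m, w) * nu,
                                     (1.0 - psi(x, m, w)) * nu).sum()
                    for m in m_grid])       # flat prior on the grid
post = np.exp(logpost - logpost.max())
post /= np.trapz(post, m_grid)              # normalise by numerical integration

print("posterior mean threshold:", np.trapz(m_grid * post, m_grid))
```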

  16. A new class of accurate, mesh-free hydrodynamic simulation methods

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2015-06-01

    We present two new Lagrangian methods for hydrodynamics, in a systematic comparison with moving-mesh, smoothed particle hydrodynamics (SPH), and stationary (non-moving) grid methods. The new methods are designed to simultaneously capture advantages of both SPH and grid-based/adaptive mesh refinement (AMR) schemes. They are based on a kernel discretization of the volume coupled to a high-order matrix gradient estimator and a Riemann solver acting over the volume `overlap'. We implement and test a parallel, second-order version of the method with self-gravity and cosmological integration, in the code GIZMO; this maintains exact mass, energy and momentum conservation; exhibits superior angular momentum conservation compared to all other methods we study; does not require `artificial diffusion' terms; and allows the fluid elements to move with the flow, so resolution is automatically adaptive. We consider a large suite of test problems, and find that on all problems the new methods appear competitive with moving-mesh schemes, with some advantages (particularly in angular momentum conservation), at the cost of enhanced noise. The new methods have many advantages versus SPH: proper convergence, good capturing of fluid-mixing instabilities, dramatically reduced `particle noise' and numerical viscosity, more accurate sub-sonic flow evolution, and sharp shock-capturing. Advantages versus non-moving meshes include: automatic adaptivity, dramatically reduced advection errors and numerical overmixing, velocity-independent errors, accurate coupling to gravity, good angular momentum conservation and elimination of `grid alignment' effects. We can, for example, follow hundreds of orbits of gaseous discs, while AMR and SPH methods break down in a few orbits. However, fixed meshes minimize `grid noise'. These differences are important for a range of astrophysical problems.

  17. Feedback Integrators

    NASA Astrophysics Data System (ADS)

    Chang, Dong Eui; Jiménez, Fernando; Perlmutter, Matthew

    2016-12-01

    A new method is proposed to numerically integrate a dynamical system on a manifold such that the trajectory stably remains on the manifold and preserves the first integrals of the system. The idea is that given an initial point in the manifold we extend the dynamics from the manifold to its ambient Euclidean space and then modify the dynamics outside the intersection of the manifold and the level sets of the first integrals containing the initial point such that the intersection becomes a unique local attractor of the resultant dynamics. While the modified dynamics theoretically produces the same trajectory as the original dynamics, it yields a numerical trajectory that stably remains on the manifold and preserves the first integrals. The big merit of our method is that the modified dynamics can be integrated with any ordinary numerical integrator such as Euler or Runge-Kutta. We illustrate this method by applying it to three famous problems: the free rigid body, the Kepler problem and a perturbed Kepler problem with rotational symmetry. We also carry out simulation studies to demonstrate the excellence of our method and make comparisons with the standard projection method, a splitting method and Störmer-Verlet schemes.
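
    A minimal sketch of the underlying idea follows; the quadratic feedback term and the gain are assumptions for illustration, not the paper's exact construction. The constrained dynamics (rotation on the unit circle) are extended to the plane and modified so that the circle becomes attracting, after which an ordinary off-the-shelf Runge-Kutta solver is used.

```python
# Sketch of the basic idea (assumed feedback term, not the paper's exact
# construction): extend rotation on the unit circle to the plane, add a term
# that makes |x| = 1 attracting, and integrate with an ordinary solver.
import numpy as np
from scipy.integrate import solve_ivp

omega, gain = 2.0, 10.0                               # rotation rate and feedback gain

def rhs(t, x):
    rotate = omega * np.array([-x[1], x[0]])          # original dynamics on the circle
    feedback = -gain * (x @ x - 1.0) * x              # pulls the trajectory back onto |x| = 1
    return rotate + feedback

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0], rtol=1e-6, atol=1e-9)
drift = np.abs((sol.y ** 2).sum(axis=0) - 1.0).max()
print("max drift of the first integral |x|^2:", drift)
```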

  18. Accurate computation of Zernike moments in polar coordinates.

    PubMed

    Xin, Yongqing; Pawlak, Miroslaw; Liao, Simon

    2007-02-01

    An algorithm for high-precision numerical computation of Zernike moments is presented. The algorithm, based on the introduced polar pixel tiling scheme, does not exhibit the geometric error and numerical integration error which are inherent in conventional methods based on Cartesian coordinates. This yields a dramatic improvement of the Zernike moments accuracy in terms of their reconstruction and invariance properties. The introduced image tiling requires an interpolation algorithm which turns out to be of the second order importance compared to the discretization error. Various comparisons are made between the accuracy of the proposed method and that of commonly used techniques. The results reveal the great advantage of our approach.
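
    The sketch below illustrates the general principle of evaluating a Zernike moment by numerical integration directly in polar coordinates, so the unit disk is represented without Cartesian geometric error; it is not the paper's polar pixel-tiling algorithm, and the quadrature sizes are arbitrary choices for the example.

```python
# Sketch (not the paper's polar pixel-tiling scheme): a Zernike moment computed
# by numerical integration directly in polar coordinates over the unit disk.
import numpy as np
from math import factorial
from numpy.polynomial.legendre import leggauss

def radial(n, m, rho):
    """Zernike radial polynomial R_nm(rho)."""
    m = abs(m)
    out = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s)))
        out = out + c * rho ** (n - 2 * s)
    return out

def zernike_moment(f, n, m, nr=32, nt=64):
    """A_nm = (n+1)/pi * integral of conj(V_nm) * f * rho over the unit disk."""
    x, w = leggauss(nr)
    rho, wr = 0.5 * (x + 1.0), 0.5 * w                # radial nodes/weights mapped to [0, 1]
    theta = 2.0 * np.pi * np.arange(nt) / nt          # uniform angles (exact for the trig part)
    R, T = np.meshgrid(rho, theta, indexing="ij")
    vals = f(R, T) * np.conj(radial(n, m, R) * np.exp(1j * m * T)) * R
    return (n + 1) / np.pi * (2.0 * np.pi / nt) * np.sum(wr[:, None] * vals)

# test image: a pure Zernike mode V_42; its (4, 2) moment should be 1, others ~0
f = lambda rho, theta: radial(4, 2, rho) * np.exp(2j * theta)
print("A_42 =", zernike_moment(f, 4, 2))
print("A_20 =", zernike_moment(f, 2, 0))
```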

  19. Accurate calculation and instability of supersonic wake flows

    NASA Technical Reports Server (NTRS)

    Papageorgiou, Demetrius T.

    1990-01-01

    This study is concerned with the computation and linear stability of a class of laminar compressible wake flows. The emphasis is on correct basic flow profiles that satisfy the steady equations of motion, and to this end the unperturbed state is obtained through numerical integration of the compressible boundary-layer equations. The linear stability of the flow is examined via the Rayleigh equation that describes evolution of inviscid disturbances. Analytical results are given for short- and long-wavelength disturbances and some numerical results of the general eigenvalue problem are also reported.

  20. Hydroforming Of Patchwork Blanks — Numerical Modeling And Experimental Validation

    NASA Astrophysics Data System (ADS)

    Lamprecht, Klaus; Merklein, Marion; Geiger, Manfred

    2005-08-01

    In comparison to the commonly applied technology of tailored blanks the concept of patchwork blanks offers a number of additional advantages. Potential application areas for patchwork blanks in automotive industry are e.g. local reinforcements of automotive closures, structural reinforcements of rails and pillars as well as shock towers. But even if there is a significant application potential for patchwork blanks in automobile production, industrial realization of this innovative technique is decelerated due to a lack of knowledge regarding the forming behavior and the numerical modeling of patchwork blanks. Especially for the numerical simulation of hydroforming processes, where one part of the forming tool is replaced by a fluid under pressure, advanced modeling techniques are required to ensure an accurate prediction of the blanks' forming behavior. The objective of this contribution is to provide an appropriate model for the numerical simulation of patchwork blanks' forming processes. Therefore, different finite element modeling techniques for patchwork blanks are presented. In addition to basic shell element models a combined finite element model consisting of shell and solid elements is defined. Special emphasis is placed on the modeling of the weld seam. For this purpose the local mechanical properties of the weld metal, which have been determined by means of Martens-hardness measurements and uniaxial tensile tests, are integrated in the finite element models. The results obtained from the numerical simulations are compared to experimental data from a hydraulic bulge test. In this context the focus is laid on laser- and spot-welded patchwork blanks.

  1. Some self starting integrators for x' = f(x, t) [Runge-Kutta method and orbital position estimation]

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1974-01-01

    The integration of the vector differential equation x' = f(x, t) from time t_i to t_(i+1) is discussed, where only the values of x_i are available for the integration. No previous values of x or x' are used. Using an orbit integration problem, comparisons are made between Taylor series integrators and various types and orders of Runge-Kutta integrators. A fourth order Runge-Kutta type integrator for orbital work is presented, and approximate (there may be no exact) fifth order Runge-Kutta integrators are discussed. Also discussed and compared is a self-starting integrator using ∂f/∂x. A numerical method for controlling the accuracy of integration is given, and the special equations for accurately integrating accelerometer data are shown.
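
    For reference, a standard fourth-order Runge-Kutta step of the self-starting type discussed above is sketched below on a simple two-body orbit problem; this is a generic textbook integrator, not the report's specific orbital scheme.

```python
# A standard fourth-order Runge-Kutta step for x' = f(x, t): self-starting,
# since only the current state x_i is used (no back values). The two-body test
# problem is a generic example, not the report's orbital integrator.
import numpy as np

def rk4_step(f, x, t, h):
    k1 = f(x, t)
    k2 = f(x + 0.5 * h * k1, t + 0.5 * h)
    k3 = f(x + 0.5 * h * k2, t + 0.5 * h)
    k4 = f(x + h * k3, t + h)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def two_body(x, t, mu=1.0):
    r, v = x[:2], x[2:]
    return np.concatenate([v, -mu * r / np.linalg.norm(r) ** 3])

x = np.array([1.0, 0.0, 0.0, 1.0])        # circular orbit of period 2*pi
n_steps = 1000
h = 2.0 * np.pi / n_steps
for i in range(n_steps):
    x = rk4_step(two_body, x, i * h, h)

print("position error after one orbit:", np.linalg.norm(x[:2] - [1.0, 0.0]))
```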

  2. Electric field calculations in brain stimulation based on finite elements: an optimized processing pipeline for the generation and usage of accurate individual head models.

    PubMed

    Windhoff, Mirko; Opitz, Alexander; Thielscher, Axel

    2013-04-01

    The need for realistic electric field calculations in human noninvasive brain stimulation is undisputed to more accurately determine the affected brain areas. However, using numerical techniques such as the finite element method (FEM) is methodologically complex, starting with the creation of accurate head models to the integration of the models in the numerical calculations. These problems substantially limit a more widespread application of numerical methods in brain stimulation up to now. We introduce an optimized processing pipeline allowing for the automatic generation of individualized high-quality head models from magnetic resonance images and their usage in subsequent field calculations based on the FEM. The pipeline starts by extracting the borders between skin, skull, cerebrospinal fluid, gray and white matter. The quality of the resulting surfaces is subsequently improved, allowing for the creation of tetrahedral volume head meshes that can finally be used in the numerical calculations. The pipeline integrates and extends established (and mainly free) software for neuroimaging, computer graphics, and FEM calculations into one easy-to-use solution. We demonstrate the successful usage of the pipeline in six subjects, including field calculations for transcranial magnetic stimulation and transcranial direct current stimulation. The quality of the head volume meshes is validated both in terms of capturing the underlying anatomy and of the well-shapedness of the mesh elements. The latter is crucial to guarantee the numerical robustness of the FEM calculations. The pipeline will be released as open-source, allowing for the first time to perform realistic field calculations at an acceptable methodological complexity and moderate costs.

  3. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.

    PubMed

    Fuchs, Franz G; Hjelmervik, Jon M

    2016-02-01

    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results.

  4. Unconditionally stable, second-order accurate schemes for solid state phase transformations driven by mechano-chemical spinodal decomposition

    SciTech Connect

    Sagiyama, Koki; Rudraraju, Shiva; Garikipati, Krishna

    2016-09-13

    Here, we consider solid state phase transformations that are caused by free energy densities with domains of non-convexity in strain-composition space; we refer to the non-convex domains as mechano-chemical spinodals. The non-convexity with respect to composition and strain causes segregation into phases with different crystal structures. We work on an existing model that couples the classical Cahn-Hilliard model with Toupin’s theory of gradient elasticity at finite strains. Both systems are represented by fourth-order, nonlinear, partial differential equations. The goal of this work is to develop unconditionally stable, second-order accurate time-integration schemes, motivated by the need to carry out large scale computations of dynamically evolving microstructures in three dimensions. We also introduce reduced formulations naturally derived from these proposed schemes for faster computations that are still second-order accurate. Although our method is developed and analyzed here for a specific class of mechano-chemical problems, one can readily apply the same method to develop unconditionally stable, second-order accurate schemes for any problems for which free energy density functions are multivariate polynomials of solution components and component gradients. Apart from an analysis and construction of methods, we present a suite of numerical results that demonstrate the schemes in action.

  5. A Time-Accurate Upwind Unstructured Finite Volume Method for Compressible Flow with Cure of Pathological Behaviors

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Jorgenson, Philip C. E.

    2007-01-01

    A time-accurate, upwind, finite volume method for computing compressible flows on unstructured grids is presented. The method is second order accurate in space and time and yields high resolution in the presence of discontinuities. For efficiency, the Roe approximate Riemann solver with an entropy correction is employed. In the basic Euler/Navier-Stokes scheme, many concepts of high order upwind schemes are adopted: the surface flux integrals are carefully treated, a Cauchy-Kowalewski time-stepping scheme is used in the time-marching stage, and a multidimensional limiter is applied in the reconstruction stage. However even with these up-to-date improvements, the basic upwind scheme is still plagued by the so-called "pathological behaviors," e.g., the carbuncle phenomenon, the expansion shock, etc. A solution to these limitations is presented which uses a very simple dissipation model while still preserving second order accuracy. This scheme is referred to as the enhanced time-accurate upwind (ETAU) scheme in this paper. The unstructured grid capability renders flexibility for use in complex geometry; and the present ETAU Euler/Navier-Stokes scheme is capable of handling a broad spectrum of flow regimes from high supersonic to subsonic at very low Mach number, appropriate for both CFD (computational fluid dynamics) and CAA (computational aeroacoustics). Numerous examples are included to demonstrate the robustness of the methods.

  6. Unconditionally stable, second-order accurate schemes for solid state phase transformations driven by mechano-chemical spinodal decomposition

    DOE PAGES

    Sagiyama, Koki; Rudraraju, Shiva; Garikipati, Krishna

    2016-09-13

    Here, we consider solid state phase transformations that are caused by free energy densities with domains of non-convexity in strain-composition space; we refer to the non-convex domains as mechano-chemical spinodals. The non-convexity with respect to composition and strain causes segregation into phases with different crystal structures. We work on an existing model that couples the classical Cahn-Hilliard model with Toupin’s theory of gradient elasticity at finite strains. Both systems are represented by fourth-order, nonlinear, partial differential equations. The goal of this work is to develop unconditionally stable, second-order accurate time-integration schemes, motivated by the need to carry out large scale computations of dynamically evolving microstructures in three dimensions. We also introduce reduced formulations naturally derived from these proposed schemes for faster computations that are still second-order accurate. Although our method is developed and analyzed here for a specific class of mechano-chemical problems, one can readily apply the same method to develop unconditionally stable, second-order accurate schemes for any problems for which free energy density functions are multivariate polynomials of solution components and component gradients. Apart from an analysis and construction of methods, we present a suite of numerical results that demonstrate the schemes in action.

  7. Mass conservative, positive definite integrator for atmospheric chemical dynamics

    NASA Astrophysics Data System (ADS)

    Nguyen, Khoi; Caboussat, Alexandre; Dabdub, Donald

    2009-12-01

    Air quality models compute the transformation of species in the atmosphere undergoing chemical and physical changes. The numerical algorithms used to predict these transformations should obey mass conservation and positive definiteness properties. Among all physical phenomena, the chemical kinetics solver provides the greatest challenge to attain these two properties. In general, most chemical kinetics solvers are mass conservative but not positive definite. In this article, a new numerical algorithm for the computation of chemical kinetics is presented. The integrator is called Split Single Reaction Integrator (SSRI). It is both mass conservative and positive definite. It solves each chemical reaction exactly and uses operator splitting techniques (symmetric split) to combine them into the entire system. The method can be used within a host integrator to fix the negative concentrations while preserving the mass, or it can be used as a standalone integrator that guarantees positive definiteness and mass conservation. Numerical results show that the new integrator, used as a standalone integrator, is second order accurate and stable under large fixed time steps when other conventional integrators are unstable.
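
    A toy sketch of the splitting idea follows: two first-order reactions, each advanced by its exact solution and combined with a symmetric (Strang) split. The reaction system and rate constants are assumptions made for illustration, not the SSRI code; the point is that mass is conserved and concentrations stay non-negative even at a large fixed time step.

```python
# Toy sketch of the splitting idea (not the SSRI code itself): two first-order
# reactions, each advanced by its exact solution, combined with a symmetric
# (Strang) split. Mass is conserved and concentrations remain non-negative by
# construction, even with a large fixed time step.
import numpy as np

k_ab, k_ba = 5.0, 2.0                          # rates of A -> B and B -> A (assumed values)

def react_exact(c, k, i, j, dt):
    """Exact solution over dt of the single reaction species i -> species j."""
    out = c.copy()
    transferred = c[i] * (1.0 - np.exp(-k * dt))
    out[i] -= transferred
    out[j] += transferred
    return out

def strang_step(c, dt):
    c = react_exact(c, k_ab, 0, 1, 0.5 * dt)   # half step of A -> B
    c = react_exact(c, k_ba, 1, 0, dt)         # full step of B -> A
    return react_exact(c, k_ab, 0, 1, 0.5 * dt)

c = np.array([1.0, 0.0])                       # initial concentrations [A, B]
dt = 0.5                                       # large step: explicit Euler would be unstable here
for _ in range(40):
    c = strang_step(c, dt)

# The split is second-order in dt, so this coarse step carries splitting error,
# but the solution stays bounded, positive, and exactly mass conserving.
print("concentrations:", c, " total mass:", c.sum())
```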

  8. Implementation of equivalent domain integral method in the two-dimensional analysis of mixed mode problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Shivakumar, K. N.

    1989-01-01

    An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies is presented. The details of the method and its implementation are presented for isoparametric elements. The total and product integrals consist of the sum of an area (domain) integral and line integrals on the crack faces. The line integrals vanish only when the crack faces are traction free and the loading is either pure mode I or pure mode II or a combination of both with only the square-root singular term in the stress field. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented. The procedure that uses the symmetric and antisymmetric components of the stress and displacement fields to calculate the individual modes gave accurate values of the integrals for all problems analyzed. The EDI method when applied to a problem of an interface crack in two different materials showed that the mode I and mode II components are domain dependent while the total integral is not. This behavior is caused by the presence of the oscillatory part of the singularity in bimaterial crack problems. The EDI method, thus, shows behavior similar to the virtual crack closure method for bimaterial problems.

  9. Integrated Urban Dispersion Modeling Capability

    SciTech Connect

    Kosovic, B; Chan, S T

    2003-11-03

    Numerical simulations represent a unique predictive tool for developing a detailed understanding of three-dimensional flow fields and associated concentration distributions from releases in complex urban settings (Britter and Hanna 2003). The accurate and timely prediction of the atmospheric dispersion of hazardous materials in densely populated urban areas is a critical homeland and national security need for emergency preparedness, risk assessment, and vulnerability studies. The main challenges in high-fidelity numerical modeling of urban dispersion are the accurate prediction of peak concentrations, spatial extent and temporal evolution of harmful levels of hazardous materials, and the incorporation of detailed structural geometries. Current computational tools do not include all the necessary elements to accurately represent hazardous release events in complex urban settings embedded in high-resolution terrain. Nor do they possess the computational efficiency required for many emergency response and event reconstruction applications. We are developing a new integrated urban dispersion modeling capability, able to efficiently predict dispersion in diverse urban environments for a wide range of atmospheric conditions, temporal and spatial scales, and release event scenarios. This new computational fluid dynamics capability includes adaptive mesh refinement and it can simultaneously resolve individual buildings and high-resolution terrain (including important vegetative and land-use features), treat complex building and structural geometries (e.g., stadiums, arenas, subways, airplane interiors), and cope with the full range of atmospheric conditions (e.g. stability). We are developing approaches for seamless coupling with mesoscale numerical weather prediction models to provide realistic forcing of the urban-scale model, which is critical to its performance in real-world conditions.

  10. NUMERICAL SOLUTION FOR THE POTENTIAL AND DENSITY PROFILE OF A THERMAL EQUILIBRIUM SHEET BEAM

    SciTech Connect

    Lund, S M; Bazouin, G

    2011-03-29

    In a recent paper, S. M. Lund, A. Friedman, and G. Bazouin, Sheet beam model for intense space-charge: with application to Debye screening and the distribution of particle oscillation frequencies in a thermal equilibrium beam, in press, Phys. Rev. Special Topics - Accel. and Beams (2011), a 1D sheet beam model was extensively analyzed. In this complementary paper, we present details of a numerical procedure developed to construct the self-consistent electrostatic potential and density profile of a thermal equilibrium sheet beam distribution. This procedure effectively circumvents pathologies which can prevent use of standard numerical integration techniques when space-charge intensity is high. The procedure employs transformations and is straightforward to implement with standard numerical methods and produces accurate solutions which can be applied to thermal equilibria with arbitrarily strong space-charge intensity up to the applied focusing limit.

  11. NUMERICAL SOLUTION FOR THE POTENTIAL AND DENSITY PROFILE OF A THERMAL EQUILIBRIUM SHEET BEAM

    SciTech Connect

    Lund, Steven M.; Bazouin, Guillaume

    2011-04-01

    In a recent paper, S. M. Lund, A. Friedman, and G. Bazouin, Sheet beam model for intense space-charge: with application to Debye screening and the distribution of particle oscillation frequencies in a thermal equilibrium beam, in press, Phys. Rev. Special Topics - Accel. and Beams (2011), a 1D sheet beam model was extensively analyzed. In this complementary paper, we present details of a numerical procedure developed to construct the self-consistent electrostatic potential and density profile of a thermal equilibrium sheet beam distribution. This procedure effectively circumvents pathologies which can prevent use of standard numerical integration techniques when space-charge intensity is high. The procedure employs transformations and is straightforward to implement with standard numerical methods and produces accurate solutions which can be applied to thermal equilibria with arbitrarily strong space-charge intensity up to the applied focusing limit.

  12. Numerical modeling of the radiative transfer in a turbid medium using the synthetic iteration.

    PubMed

    Budak, Vladimir P; Kaloshin, Gennady A; Shagalov, Oleg V; Zheltov, Victor S

    2015-07-27

    In this paper we propose a fast but accurate algorithm for numerical modeling of light fields in a turbid medium slab. Numerical solution of the radiative transfer equation (RTE) requires its discretization, based on elimination of the anisotropic part of the solution and replacement of the scattering integral by a finite sum. The regular part of the solution is determined numerically. A good choice of the method for eliminating the anisotropic part of the solution determines the rapid convergence of the algorithm in the mean-square metric. The method of synthetic iterations can be used to improve the convergence in the uniform metric. A significant increase in the solution accuracy with the use of synthetic iterations allows applying the two-stream approximation for determining the regular part. This approach permits generalizing the proposed method to an arbitrary 3D geometry of the medium.

  13. Numerical Simulation of a High Mach Number Jet Flow

    NASA Technical Reports Server (NTRS)

    Hayder, M. Ehtesham; Turkel, Eli; Mankbadi, Reda R.

    1993-01-01

    The recent efforts to develop accurate numerical schemes for transition and turbulent flows are motivated, among other factors, by the need for accurate prediction of flow noise. The success of developing the high speed civil transport plane (HSCT) is contingent upon our understanding and suppression of the jet exhaust noise. The radiated sound can be directly obtained by solving the full (time-dependent) compressible Navier-Stokes equations. However, this requires computational storage that is beyond currently available machines. This difficulty can be overcome by limiting the solution domain to the near field where the jet is nonlinear and then using an acoustic analogy (e.g., Lighthill) to relate the far-field noise to the near-field sources. The latter requires obtaining the time-dependent flow field. The other difficulty in aeroacoustics computations is that at high Reynolds numbers the turbulent flow has a large range of scales. Direct numerical simulations (DNS) cannot obtain all the scales of motion at the high Reynolds numbers of technological interest. However, it is believed that the large scale structure is more efficient than the small-scale structure in radiating noise. Thus, one can model the small scales and calculate the acoustically active scales. The large scale structure in the noise-producing initial region of the jet can be viewed as wavelike in nature; the net radiated sound is the net cancellation after integration over space. As such, aeroacoustics computations are highly sensitive to errors in computing the sound sources. It is therefore essential to use a high-order numerical scheme to predict the flow field. The present paper presents the first step in an ongoing effort to predict jet noise. The emphasis here is on accurate prediction of the unsteady flow field. We solve the full time-dependent Navier-Stokes equations by a high order finite difference method. Time accurate spatial simulations of both plane and axisymmetric jets are presented. Jet Mach

  14. Mean kernels to improve gravimetric geoid determination based on modified Stokes's integration

    NASA Astrophysics Data System (ADS)

    Hirt, C.

    2011-11-01

    Gravimetric geoid computation is often based on modified Stokes's integration, where Stokes's integral is evaluated with some stochastic or deterministic kernel modification. Accurate numerical evaluation of Stokes's integral requires the modified kernel to be integrated across the area of each discretised grid cell (mean kernel). Evaluating the modified kernel at the center of the cell (point kernel) is an approximation, which may result in larger numerical integration errors near the computation point, where the modified kernel exhibits a strongly nonlinear behavior. The present study deals with the computation of whole-of-the-cell mean values of modified kernels, exemplified here with the Featherstone-Evans-Olliver (1998) kernel modification [Featherstone, W.E., Evans, J.D., Olliver, J.G., 1998. A Meissl-modified Vaníček and Kleusberg kernel to reduce the truncation error in gravimetric geoid computations. Journal of Geodesy 72(3), 154-160]. We investigate two approaches (analytical and numerical integration), which are capable of providing accurate mean kernels. The analytical integration approach is based on kernel weighting factors which are used for the conversion of point to mean kernels. For the efficient numerical integration, Gauss-Legendre quadrature is applied. The comparison of mean kernels from both approaches shows a satisfactory mutual agreement at the level of 10 -4 and better, which is considered to be sufficient for practical geoid computation requirements. Closed-loop tests based on the EGM2008 geopotential model demonstrate that using mean instead of point kernels reduces numerical integration errors by ˜65%. The use of mean kernels is recommended in remove-compute-restore geoid determination with the Featherstone-Evans-Olliver (1998) kernel or any other kernel modification under the condition that the kernel changes rapidly across the cells in the neighborhood of the computation point.
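
    The point-kernel versus mean-kernel distinction can be illustrated with a simple surrogate kernel; the sketch below evaluates whole-of-the-cell means with Gauss-Legendre quadrature and compares them with cell-centre values. The 1/psi kernel and the cell size are assumptions for illustration, not the Featherstone-Evans-Olliver modification.

```python
# Sketch of the point- vs. mean-kernel comparison on a surrogate kernel
# K(psi) = 1/psi (not the Featherstone-Evans-Olliver modification): the
# whole-of-the-cell mean is computed with Gauss-Legendre quadrature.
import numpy as np
from numpy.polynomial.legendre import leggauss

K = lambda psi: 1.0 / psi                     # strongly nonlinear near the computation point

def cell_mean(a, b, n=8):
    """Mean value of K over the cell [a, b] from n-point Gauss-Legendre quadrature."""
    x, w = leggauss(n)
    psi = 0.5 * (b - a) * x + 0.5 * (a + b)
    return 0.5 * np.sum(w * K(psi))           # the (b-a)/2 factor cancels the 1/(b-a) mean

dpsi = 0.05                                   # cell size in spherical distance (assumed)
for a in (0.05, 0.10, 0.15):                  # cells closest to the computation point
    b = a + dpsi
    point = K(0.5 * (a + b))                  # point kernel at the cell centre
    mean = cell_mean(a, b)                    # whole-of-the-cell mean kernel
    print(f"cell [{a:.2f}, {b:.2f}]: point={point:.4f}  mean={mean:.4f}  "
          f"difference={(point - mean) / mean:+.2%}")
```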

  15. A line integration method for the treatment of 3D domain integrals and accelerated by the fast multipole method in the BEM

    NASA Astrophysics Data System (ADS)

    Wang, Qiao; Zhou, Wei; Cheng, Yonggang; Ma, Gang; Chang, Xiaolin

    2017-04-01

    A line integration method (LIM) is proposed to calculate the domain integrals for 3D problems. In the proposed method, the domain integrals are transformed into boundary integrals and only line integrals on straight lines are needed to be computed. A background cell structure is applied to further simplify the line integrals and improve the accuracy. The method creates elements only on the boundary, and the integral lines are created from the boundary elements. The procedure is quite suitable for the boundary element method, and we have applied it to 3D situations. Directly applying the method is time-consuming since the complexity of the computational time is O( NM), where N and M are the numbers of nodes and lines, respectively. To overcome this problem, the fast multipole method is used with the LIM for large-scale computation. The numerical results show that the proposed method is efficient and accurate.

  16. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.

  17. Surface integral formulations for the design of plasmonic nanostructures.

    PubMed

    Forestiere, Carlo; Iadarola, Giovanni; Rubinacci, Guglielmo; Tamburrino, Antonello; Dal Negro, Luca; Miano, Giovanni

    2012-11-01

    Numerical formulations based on surface integral equations (SIEs) provide an accurate and efficient framework for the solution of the electromagnetic scattering problem by three-dimensional plasmonic nanostructures in the frequency domain. In this paper, we present a unified description of SIE formulations with both singular and nonsingular kernel and we study their accuracy in solving the scattering problem by metallic nanoparticles with spherical and nonspherical shape. In fact, the accuracy of the numerical solution, especially in the near zone, is of great importance in the analysis and design of plasmonic nanostructures, whose operation critically depends on the manipulation of electromagnetic hot spots. Four formulation types are considered: the N-combined region integral equations, the T-combined region integral equations, the combined field integral equations and the null field integral equations. A detailed comparison between their numerical solutions obtained for several nanoparticle shapes is performed by examining convergence rate and accuracy in both the far and near zone of the scatterer as a function of the number of degrees of freedom. A rigorous analysis of SIE formulations and their limitations can have a high impact on the engineering of numerous nano-scale optical devices such as plasmon-enhanced light emitters, biosensors, photodetectors, and nanoantennas.

  18. Numerical Simulation of Multicomponent Chromatography Using Spreadsheets.

    ERIC Educational Resources Information Center

    Frey, Douglas D.

    1990-01-01

    Illustrated is the use of spreadsheet programs for implementing finite difference numerical simulations of chromatography as an instructional tool in a separations course. Discussed are differential equations, discretization and integration, spreadsheet development, computer requirements, and typical simulation results. (CW)

  19. Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping

    NASA Astrophysics Data System (ADS)

    Rehak, M.; Skaloud, J.

    2015-08-01

    In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enable accurate corridor mapping. The design of the platform is based on widely available model components to which we integrate an open-source autopilot, a customized mass-market camera and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.

  20. Groundtruth approach to accurate quantitation of fluorescence microarrays

    SciTech Connect

    Mascio-Kegelmeyer, L; Tomascik-Cheeseman, L; Burnett, M S; van Hummelen, P; Wyrobek, A J

    2000-12-01

    To more accurately measure fluorescent signals from microarrays, we calibrated our acquisition and analysis systems by using groundtruth samples comprised of known quantities of red and green gene-specific DNA probes hybridized to cDNA targets. We imaged the slides with a full-field, white light CCD imager and analyzed them with our custom analysis software. Here we compare, for multiple genes, results obtained with and without preprocessing (alignment, color crosstalk compensation, dark field subtraction, and integration time). We also evaluate the accuracy of various image processing and analysis techniques (background subtraction, segmentation, quantitation and normalization). This methodology calibrates and validates our system for accurate quantitative measurement of microarrays. Specifically, we show that preprocessing the images produces results significantly closer to the known ground-truth for these samples.

  1. Numerical Propulsion System Simulation

    NASA Technical Reports Server (NTRS)

    Naiman, Cynthia

    2006-01-01

    The NASA Glenn Research Center, in partnership with the aerospace industry, other government agencies, and academia, is leading the effort to develop an advanced multidisciplinary analysis environment for aerospace propulsion systems called the Numerical Propulsion System Simulation (NPSS). NPSS is a framework for performing analysis of complex systems. The initial development of NPSS focused on the analysis and design of airbreathing aircraft engines, but the resulting NPSS framework may be applied to any system, for example: aerospace, rockets, hypersonics, power and propulsion, fuel cells, ground based power, and even human system modeling. NPSS provides increased flexibility for the user, which reduces the total development time and cost. It is currently being extended to support the NASA Aeronautics Research Mission Directorate Fundamental Aeronautics Program and the Advanced Virtual Engine Test Cell (AVETeC). NPSS focuses on the integration of multiple disciplines such as aerodynamics, structure, and heat transfer with numerical zooming on component codes. Zooming is the coupling of analyses at various levels of detail. NPSS development includes capabilities to facilitate collaborative engineering. The NPSS will provide improved tools to develop custom components and to use capability for zooming to higher fidelity codes, coupling to multidiscipline codes, transmitting secure data, and distributing simulations across different platforms. These powerful capabilities extend NPSS from a zero-dimensional simulation tool to a multi-fidelity, multidiscipline system-level simulation tool for the full development life cycle.

  2. Series expansions for the incomplete Lipschitz-Hankel integral Ye0(a, z)

    NASA Astrophysics Data System (ADS)

    Mechaik, Mehdi M.; Dvorak, Steven L.

    1996-03-01

    Three series expansions are derived for the incomplete Lipschitz-Hankel integral Ye0(a, z) for complex-valued a and z. Two novel expansions are obtained by using contour integration techniques to evaluate the inverse Laplace transform representation for Ye0(a, z). A third expansion is obtained by replacing the Neumann function by its Neumann series representation and integrating the resulting terms. An algorithm is outlined which chooses the most efficient expansion for given values of a and z. Comparisons of numerical results for these series expansions with those obtained by using numerical integration routines show that the expansions are very efficient and yield accurate results even for values of a and z for which numerical integration fails to converge. The integral representations for Ye0(a, z) obtained in this paper are combined with previously obtained integral representations for Je0(a, z) to derive integral representations for He0(1)(a, z) and He0(2)(a, z). Recurrence relations can be used to efficiently compute higher-order incomplete Lipschitz-Hankel integrals and to find integral representations and series expansions for these special functions and many other related functions.
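
    As a brute-force cross-check of the kind the series expansions are compared against, the sketch below evaluates the integral by adaptive quadrature, taking Ye0(a, z) = ∫_0^z e^(-a t) Y_0(t) dt as the definition (stated here as an assumption) and restricting a and z to real values.

```python
# Brute-force adaptive quadrature of Ye0(a, z), assuming the definition
# Ye0(a, z) = integral_0^z exp(-a*t) * Y0(t) dt, for real a and z only.
# This is a reference check, not one of the paper's series expansions.
import numpy as np
from scipy.integrate import quad
from scipy.special import y0

def Ye0(a, z):
    val, err = quad(lambda t: np.exp(-a * t) * y0(t), 0.0, z, limit=200)
    return val, err

for a, z in [(0.5, 2.0), (1.0, 5.0), (2.0, 10.0)]:
    val, err = Ye0(a, z)
    print(f"Ye0({a}, {z}) = {val:.10f}  (quad error estimate {err:.1e})")
```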

  3. Selecting MODFLOW cell sizes for accurate flow fields.

    PubMed

    Haitjema, H; Kelson, V; de Lange, W

    2001-01-01

    Contaminant transport models often use a velocity field derived from a MODFLOW flow field. Consequently, the accuracy of MODFLOW in representing a ground water flow field determines in part the accuracy of the transport predictions, particularly when advective transport is dominant. We compared MODFLOW ground water flow rates and MODPATH particle traces (advective transport) for a variety of conceptual models and different grid spacings to exact or approximate analytic solutions. All of our numerical experiments concerned flow in a single confined or semiconfined aquifer. While MODFLOW appeared robust in terms of both local and global water balance, we found that ground water flow rates, particle traces, and associated ground water travel times are accurate only when sufficiently small cells are used. For instance, a minimum of four or five cells are required to accurately model total ground water inflow in tributaries or other narrow surface water bodies that end inside the model domain. Also, about 50 cells are needed to represent zones of differing transmissivities, or an incorrect flow field and (locally) inaccurate ground water travel times may result. Finally, to adequately represent leakage through aquitards or through the bottom of surface water bodies it was found that the maximum allowable cell dimensions should not exceed a characteristic leakage length lambda, which is defined as the square root of the aquifer transmissivity times the resistance of the aquitard or stream bottom. In some cases a cell size of one-tenth of lambda is necessary to obtain accurate results.
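
    A worked example of the cell-size guideline quoted above is given below; the transmissivity and resistance values are illustrative, not from the paper.

```python
# Worked example of the guideline above: lambda = sqrt(transmissivity * resistance),
# with cells no larger than lambda (down to ~lambda/10 where leakage must be
# represented accurately). The numbers are illustrative, not from the paper.
import math

T = 500.0     # aquifer transmissivity, m^2/day (assumed)
c = 200.0     # resistance of the aquitard or stream bottom, days (assumed)

lam = math.sqrt(T * c)
print(f"characteristic leakage length lambda = {lam:.0f} m")
print(f"maximum cell size   ~ {lam:.0f} m")
print(f"conservative choice ~ {lam / 10:.0f} m")
```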

  4. Accurate thermoelastic tensor and acoustic velocities of NaCl

    NASA Astrophysics Data System (ADS)

    Marcondes, Michel L.; Shukla, Gaurav; da Silveira, Pedro; Wentzcovitch, Renata M.

    2015-12-01

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  5. Accurate thermoelastic tensor and acoustic velocities of NaCl

    SciTech Connect

    Marcondes, Michel L.; Shukla, Gaurav; Silveira, Pedro da; Wentzcovitch, Renata M.

    2015-12-15

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  6. Dynamical correction of control laws for marine ships' accurate steering

    NASA Astrophysics Data System (ADS)

    Veremey, Evgeny I.

    2014-06-01

    The objective of this work is the analytical synthesis problem for the design of marine vehicle autopilots. Despite numerous known methods of solution, the problem is complicated by the extensive set of dynamical conditions, requirements and restrictions that must be satisfied by an appropriate choice of the steering control law. The aim of this paper is to simplify the synthesis procedure while providing accurate steering with desirable dynamics of the control system. The approach proposed here is based on the use of a special unified multipurpose control law structure that allows decoupling the synthesis into simpler particular optimization problems. In particular, this structure includes a dynamical corrector to support the desirable features of the vehicle's motion under the action of sea wave disturbances. As a result, a specialized new method for the corrector design is proposed to provide accurate steering or a trade-off between accurate steering and economical steering of the ship. This method guarantees a certain flexibility of the control law with respect to the actual sailing environment; its corresponding tuning can be realized in real time onboard.

  7. Structure and Construction of Numeric Databases.

    ERIC Educational Resources Information Center

    Soergel, Dagobert

    1982-01-01

    This discussion of the general principles of structure and construction of numeric databases introduces the concept of data points, their relationship to each other, and their storage in a nonredundant way. The collection of numeric data and their integration into the database structure are explained. Eight references are cited. (EJS)

  8. Personalized numerical observer

    NASA Astrophysics Data System (ADS)

    Brankov, Jovan G.; Pretorius, P. Hendrik

    2010-02-01

    It is widely accepted that medical image quality should be assessed using task-based criteria, such as human-observer (HO) performance in a lesion-detection (scoring) task. HO studies are too time consuming and costly to be used for image quality assessment during development of either reconstruction methods or imaging systems. Therefore, a numerical observer (NO), a HO surrogate, is highly desirable. In the past, we have proposed and successfully tested a NO based on a supervised-learning approach (namely a support vector machine) for cardiac gated SPECT image quality assessment. In the supervised-learning approach, the goal is to identify the relationship between measured image features and HO myocardium defect likelihood scores. Thus far we have treated multiple HO readers by simply averaging or pooling their respective scores. Due to observer variability, this may be suboptimal and less accurate. Therefore, in this work, we set our goal to predict individual observer scores independently, in the hope of better capturing the relevant lesion-detection mechanisms of the human observers. This is even more important as there are many ways to obtain equivalent observer performance (measured by the area under the receiver operating characteristic curve), and simply predicting some joint (average or pooled) score alone is not likely to succeed.
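
    A minimal sketch of the per-reader supervised-learning idea follows; the synthetic features, scores and the choice of support vector regression are assumptions made for illustration, not the authors' SPECT features or model.

```python
# Minimal sketch of the per-reader supervised-learning idea: fit one regression
# model per human reader so each observer's scores are predicted separately.
# Features, scores, and the SVR surrogate are synthetic/assumed, not the
# authors' SPECT data or model.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_images, n_features, n_readers = 200, 6, 3

X = rng.normal(size=(n_images, n_features))               # measured image features
latent = X @ rng.normal(size=n_features)                  # latent defect evidence
scores = latent[:, None] + 0.5 * rng.normal(size=(n_images, n_readers))

train, test = slice(0, 150), slice(150, None)
for r in range(n_readers):                                # one numerical observer per reader
    model = SVR(kernel="rbf", C=10.0).fit(X[train], scores[train, r])
    rho = np.corrcoef(model.predict(X[test]), scores[test, r])[0, 1]
    print(f"reader {r}: correlation with held-out scores = {rho:.2f}")
```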

  9. Numerical Relativity and Astrophysics

    NASA Astrophysics Data System (ADS)

    Lehner, Luis; Pretorius, Frans

    2014-08-01

    Throughout the Universe many powerful events are driven by strong gravitational effects that require general relativity to fully describe them. These include compact binary mergers, black hole accretion, and stellar collapse, where velocities can approach the speed of light and extreme gravitational fields (Φ_Newt/c^2 ≃ 1) mediate the interactions. Many of these processes trigger emission across a broad range of the electromagnetic spectrum. Compact binaries further source strong gravitational wave emission that could directly be detected in the near future. This feat will open up a gravitational wave window into our Universe and revolutionize our understanding of it. Describing these phenomena requires general relativity, and, where dynamical effects strongly modify gravitational fields, the full Einstein equations coupled to matter sources. Numerical relativity is a field within general relativity concerned with studying such scenarios that cannot be accurately modeled via perturbative or analytical calculations. In this review, we examine results obtained within this discipline, with a focus on its impact in astrophysics.

  10. Toward an accurate and efficient semiclassical surface hopping procedure for nonadiabatic problems.

    PubMed

    Herman, Michael F

    2005-10-20

    The derivation of a semiclassical surface hopping procedure from a formally exact solution of the Schrodinger equation is discussed. The fact that the derivation proceeds from an exact solution guarantees that all phase terms are completely and accurately included. Numerical evidence shows the method to be highly accurate. A Monte Carlo implementation of this method is considered, and recent work to significantly improve the statistical accuracy of the Monte Carlo approach is discussed.

  11. Numerical Studies and Equipment Development for Single Point Incremental Forming

    NASA Astrophysics Data System (ADS)

    Marabuto, S. R.; Sena, J. I. V.; Afonso, D.; Martins, M. A. B. E.; Coelho, R. M.; Ferreira, J. A. F.; Valente, R. A. F.; de Sousa, R. J. Alves

    2011-05-01

    This paper summarizes the achievements obtained so far in the context of a research project carried out at the University of Aveiro, Portugal, on both the numerical and experimental viewpoints concerning Single Point Incremental Forming (SPIF). On the experimental side, the general guidelines on the development of a new SPIF machine are detailed. The innovative features are related to the choice of a six-degrees-of-freedom, parallel kinematics machine with a high payload, to broaden the range of materials that can be tested and to allow for higher flexibility in tool-path generation. On the numerical side, preliminary results on the simulation of SPIF processes resorting to an innovative solid-shell finite element are presented. The final target is an accurate and fast simulation of SPIF processes by means of numerical methods. Accuracy is obtained through the use of a finite element accounting for three-dimensional stress and strain fields. The developed formulation allows for an unlimited number of integration points through its thickness direction, which promotes accuracy without loss of CPU efficiency. Preliminary results and designs are shown and discussions over the obtained solutions are provided in order to further improve the research framework.

  12. Numerical Methods of Computational Electromagnetics for Complex Inhomogeneous Systems

    SciTech Connect

    Cai, Wei

    2014-05-15

    Understanding electromagnetic phenomena is key in many scientific investigations and engineering designs, such as solar cell design, the study of biological ion channels for diseases, and the creation of clean fusion energy, among other things. The objectives of the project are to develop high order numerical methods to simulate evanescent electromagnetic waves occurring in plasmon solar cells and biological ion-channels, where local field enhancement within random media in the former and long range electrostatic interactions in the latter pose major challenges for accurate and efficient numerical computations. We have accomplished these objectives by developing high order numerical methods for solving Maxwell equations, such as high order finite element bases for discontinuous Galerkin methods, a well-conditioned Nedelec edge element method, divergence free finite element bases for MHD, and fast integral equation methods for layered media. These methods can be used to model the complex local field enhancement in plasmon solar cells. On the other hand, to treat long range electrostatic interaction in ion channels, we have developed an image charge based method for a hybrid model combining atomistic electrostatics and continuum Poisson-Boltzmann electrostatics. Such a hybrid model will speed up the molecular dynamics simulation of transport in biological ion-channels.

  13. Numerical simulation and analysis of accurate blood oxygenation measurement by using optical resolution photoacoustic microscopy

    NASA Astrophysics Data System (ADS)

    Yu, Tianhao; Li, Qian; Li, Lin; Zhou, Chuanqing

    2016-10-01

    The accuracy of the photoacoustic signal is crucial for measuring oxygen saturation in functional photoacoustic imaging, and it is influenced by factors such as defocus of the laser beam, the curved shape of large vessels, and the nonlinear saturation effect of optical absorption in biological tissues. We apply a Monte Carlo model to simulate energy deposition in tissues and obtain the photoacoustic signals reaching a simulated focused surface detector in order to investigate the influence of these factors. We also apply compensation to photoacoustic imaging of in vivo cat cerebral cortex blood vessels, in which signals from different lateral positions of vessels are corrected based on the simulation results. This correction improves the smoothness and accuracy of the oxygen saturation results.

  14. Pre-Stall Behavior of a Transonic Axial Compressor Stage via Time-Accurate Numerical Simulation

    NASA Technical Reports Server (NTRS)

    Chen, Jen-Ping; Hathaway, Michael D.; Herrick, Gregory P.

    2008-01-01

    CFD calculations using high-performance parallel computing were conducted to simulate the pre-stall flow of a transonic compressor stage, NASA compressor Stage 35. The simulations were run with a full-annulus grid that models the 3D, viscous, unsteady blade row interaction without the need for an artificial inlet distortion to induce stall. The simulation demonstrates the development of rotating stall from the growth of instabilities. Pressure-rise performance and pressure traces are compared with published experimental data before the flow evolution leading up to rotating stall is examined. Spatial FFT analysis of the flow indicates a rotating long-length-scale disturbance spanning one rotor circumference, which is followed by a spike-type breakdown. The analysis also links the long-length-scale disturbance with the initiation of spike inception. The spike instabilities occur when the trajectory of the tip clearance flow becomes perpendicular to the axial direction. When approaching stall, the passage shock changes from a single oblique shock to a dual shock, which distorts the perpendicular trajectory of the tip clearance vortex but shows no evidence of flow separation that may contribute to stall.
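
    The spatial FFT step mentioned above can be illustrated in a few lines of NumPy: pressure is sampled around the annulus and the discrete Fourier transform identifies the dominant circumferential wavenumber of a long-length-scale disturbance. The signal below is a synthetic stand-in for the full-annulus CFD data.

      # Sketch: spatial FFT of pressure sampled around the annulus to identify
      # low-wavenumber (long-length-scale) rotating disturbances. Synthetic signal:
      # mean + one-lobe (full-circumference) wave + noise.
      import numpy as np

      n_theta = 256                                    # circumferential samples
      theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
      p = (1.0 + 0.05 * np.cos(theta - 0.3)
           + 0.005 * np.random.default_rng(1).normal(size=n_theta))

      amp = np.abs(np.fft.rfft(p - p.mean())) / n_theta
      dominant_lobe = int(np.argmax(amp[1:]) + 1)      # skip the mean (k = 0)
      print("dominant circumferential wavenumber:", dominant_lobe)   # expect 1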

  15. Numerical time-dependent solutions of the Schrödinger equation with piecewise continuous potentials.

    PubMed

    van Dijk, Wytse

    2016-06-01

    We consider accurate numerical solutions of the one-dimensional time-dependent Schrödinger equation when the potential is piecewise continuous. Spatial step sizes are defined for each of the regions between the discontinuities and a matching condition at the boundaries of the regions is employed. The Numerov method for spatial integration is particularly appropriate to this approach. By employing Padé approximants for the time-evolution operator, we obtain solutions with significantly improved precision without increased CPU time. This approach is also appropriate for adaptive changes in spatial step size even when there is no discontinuity of the potential.
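
    A minimal illustration of the time-evolution idea is the lowest-order [1/1] Padé approximant of exp(-iHΔt), i.e. the Crank-Nicolson scheme, applied to the 1D time-dependent Schrödinger equation with a piecewise-constant step potential. The sketch below uses a plain three-point Laplacian rather than the Numerov spatial discretization and higher-order Padé approximants of the paper; units with ħ = m = 1 and SciPy's sparse LU solver are assumed.

      # Sketch: Crank-Nicolson ([1/1] Pade of the propagator) for the 1D TDSE with a
      # step potential. This is only the lowest-order relative of the paper's scheme.
      import numpy as np
      from scipy.sparse import diags, identity
      from scipy.sparse.linalg import splu

      nx, dx, dt = 2000, 0.05, 0.01
      x = (np.arange(nx) - nx // 2) * dx
      V = np.where(x > 0.0, 1.0, 0.0)                       # potential step at x = 0

      lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(nx, nx)) / dx**2
      H = -0.5 * lap + diags(V)

      A = (identity(nx) + 0.5j * dt * H).tocsc()            # (1 + i dt H / 2) psi^{n+1}
      B = (identity(nx) - 0.5j * dt * H).tocsc()            #   = (1 - i dt H / 2) psi^{n}
      solver = splu(A)

      # Gaussian wave packet moving toward the step
      k0, sigma = 1.5, 1.0
      psi = np.exp(-((x + 20.0) ** 2) / (4 * sigma**2) + 1j * k0 * x)
      psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

      for _ in range(500):
          psi = solver.solve(B @ psi)

      print("norm after propagation:", np.sum(np.abs(psi) ** 2) * dx)   # ~1 (unitary scheme)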

  16. Numerical simulation of heat exchanger

    SciTech Connect

    Sha, W.T.

    1985-01-01

    Accurate and detailed knowledge of the fluid flow field and thermal distribution inside a heat exchanger becomes invaluable as a large, efficient, and reliable unit is sought. This information is needed to provide proper evaluation of the thermal and structural performance characteristics of a heat exchanger. It is to be noted that an analytical prediction method, when properly validated, will greatly reduce the need for model testing, facilitate interpolating and extrapolating test data, aid in optimizing heat-exchanger design and performance, and provide scaling capability. Thus tremendous savings of cost and time are realized. With the advent of large digital computers and advances in the development of computational fluid mechanics, it has become possible to predict analytically, through numerical solution, the conservation equations of mass, momentum, and energy for both the shellside and tubeside fluids. The numerical modeling technique will be a valuable, cost-effective design tool for development of advanced heat exchangers.

  17. Sledge-Hammer Integration

    ERIC Educational Resources Information Center

    Ahner, Henry

    2009-01-01

    Integration (here visualized as a pounding process) is mathematically realized by simple transformations, successively smoothing the bounding curve into a straight line and the region-to-be-integrated into an area-equivalent rectangle. The relationship to Riemann sums, and to the trapezoid and midpoint methods of numerical integration, is…

  18. Entropy Splitting for High Order Numerical Simulation of Compressible Turbulence

    NASA Technical Reports Server (NTRS)

    Sandham, N. D.; Yee, H. C.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    A stable high order numerical scheme for direct numerical simulation (DNS) of shock-free compressible turbulence is presented. The method is applicable to general geometries. It contains no upwinding, artificial dissipation, or filtering. Instead the method relies on the stabilizing mechanisms of an appropriate conditioning of the governing equations and the use of compatible spatial difference operators for the interior points (interior scheme) as well as the boundary points (boundary scheme). An entropy splitting approach splits the inviscid flux derivatives into conservative and non-conservative portions. The spatial difference operators satisfy a summation by parts condition leading to a stable scheme (combined interior and boundary schemes) for the initial boundary value problem using a generalized energy estimate. A Laplacian formulation of the viscous and heat conduction terms on the right hand side of the Navier-Stokes equations is used to ensure that any tendency to odd-even decoupling associated with central schemes can be countered by the fluid viscosity. A special formulation of the continuity equation is used, based on similar arguments. The resulting methods are able to minimize the spurious high-frequency oscillations that produce the nonlinear instability associated with pure central schemes, especially for long-time integrations such as DNS. For validation purposes, the methods are tested in a DNS of compressible turbulent plane channel flow at a friction Mach number of 0.1, where a very accurate turbulence database exists. It is demonstrated that the methods are robust in terms of grid resolution, and in good agreement with incompressible channel data, as expected at this Mach number. Accurate turbulence statistics can be obtained with moderate grid sizes. Stability limits on the range of the splitting parameter are determined from numerical tests.
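
    The summation-by-parts property invoked above can be checked directly. The sketch below builds the standard second-order SBP first-derivative operator D = H^{-1} Q and verifies the discrete integration-by-parts identity u^T H (D v) + (D u)^T H v = u_N v_N - u_0 v_0 that the energy estimate relies on; this is only the lowest-order illustrative operator, not the high-order scheme of the paper.

      # Sketch: verify the summation-by-parts (SBP) identity for the standard
      # second-order SBP first-derivative operator D = H^{-1} Q.
      import numpy as np

      n, h = 21, 1.0 / 20
      H = h * np.diag(np.r_[0.5, np.ones(n - 2), 0.5])      # diagonal norm (quadrature)
      Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))
      Q[0, 0], Q[-1, -1] = -0.5, 0.5                        # Q + Q^T = diag(-1, 0, ..., 0, 1)
      D = np.linalg.solve(H, Q)

      x = np.linspace(0.0, 1.0, n)
      u, v = np.sin(x), np.cos(2 * x)

      lhs = u @ H @ (D @ v) + (D @ u) @ H @ v
      rhs = u[-1] * v[-1] - u[0] * v[0]
      print(abs(lhs - rhs))                                 # ~1e-16: the identity holds exactly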

  19. The Numerical Analysis of a Turbulent Compressible Jet. Degree awarded by Ohio State Univ., 2000

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2001-01-01

    A numerical method to simulate high Reynolds number jet flows was formulated and applied to gain a better understanding of the flow physics. Large-eddy simulation was chosen as the most promising approach to model the turbulent structures due to its compromise between accuracy and computational expense. The filtered Navier-Stokes equations were developed including a total energy form of the energy equation. Subgrid scale models for the momentum and energy equations were adapted from compressible forms of Smagorinsky's original model. The effect of using disparate temporal and spatial accuracy in a numerical scheme was discovered through one-dimensional model problems and a new uniformly fourth-order accurate numerical method was developed. Results from two- and three-dimensional validation exercises show that the code accurately reproduces both viscous and inviscid flows. Numerous axisymmetric jet simulations were performed to investigate the effect of grid resolution, numerical scheme, exit boundary conditions and subgrid scale modeling on the solution and the results were used to guide the three-dimensional calculations. Three-dimensional calculations of a Mach 1.4 jet showed that this LES simulation accurately captures the physics of the turbulent flow. The agreement with experimental data was relatively good and is much better than results in the current literature. Turbulent intensities indicate that the turbulent structures at this level of modeling are not isotropic and this information could lend itself to the development of improved subgrid scale models for LES and turbulence models for RANS simulations. A two point correlation technique was used to quantify the turbulent structures. Two point space correlations were used to obtain a measure of the integral length scale, which proved to be approximately 1/2 D_j. Two point space-time correlations were used to obtain the convection velocity for the turbulent structures. This velocity ranged from 0.57 to
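
    The two-point correlation technique referred to above amounts to forming the normalized correlation R(r) of velocity fluctuations at two points separated by r and integrating it to estimate the integral length scale. The sketch below does this for a synthetic 1D fluctuation record; integrating up to the first zero crossing is one common convention, not necessarily the one used in the thesis.

      # Sketch: normalized two-point space correlation R(r) of a 1D velocity
      # fluctuation record and an integral length scale estimate. The field is
      # synthetic (smoothed noise); in the thesis the correlations come from LES data.
      import numpy as np

      rng = np.random.default_rng(2)
      n, dx = 4096, 0.01
      kernel = np.exp(-np.arange(200) * dx / 0.1)      # imposes a correlation length ~0.1
      u = np.convolve(rng.normal(size=n), kernel, mode="same")
      u -= u.mean()

      max_sep = 400
      R = np.array([np.mean(u[: n - r] * u[r:]) for r in range(max_sep)])
      R /= R[0]                                        # normalized two-point correlation

      # integrate up to the first zero crossing (one common convention)
      first_zero = int(np.argmax(R < 0)) if np.any(R < 0) else max_sep
      L_int = np.sum(R[:first_zero]) * dx              # simple rectangle-rule integral
      print("integral length scale estimate:", L_int)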

  20. Review of The SIAM 100-Digit Challenge: A Study in High-Accuracy Numerical Computing

    SciTech Connect

    Bailey, David

    2005-01-25

    In the January 2002 edition of SIAM News, Nick Trefethen announced the '$100, 100-Digit Challenge'. In this note he presented ten easy-to-state but hard-to-solve problems of numerical analysis, and challenged readers to find each answer to ten-digit accuracy. Trefethen closed with the enticing comment: 'Hint: They're hard! If anyone gets 50 digits in total, I will be impressed.' This challenge obviously struck a chord in hundreds of numerical mathematicians worldwide, as 94 teams from 25 nations later submitted entries. Many of these submissions exceeded the target of 50 correct digits; in fact, 20 teams achieved a perfect score of 100 correct digits. Trefethen had offered $100 for the best submission. Given the overwhelming response, a generous donor (William Browning, founder of Applied Mathematics, Inc.) provided additional funds to provide a $100 award to each of the 20 winning teams. Soon after the results were out, four participants, each from a winning team, got together and agreed to write a book about the problems and their solutions. The team is truly international: Bornemann is from Germany, Laurie is from South Africa, Wagon is from the USA, and Waldvogel is from Switzerland. This book provides some mathematical background for each problem, and then shows in detail how each of them can be solved. In fact, multiple solution techniques are mentioned in each case. The book describes how to extend these solutions to much larger problems and much higher numerical precision (hundreds or thousands of digits of accuracy). The authors also show how to compute error bounds for the results, so that one can say with confidence that one's results are accurate to the level stated. Numerous numerical software tools are demonstrated in the process, including the commercial products Mathematica, Maple and Matlab. Computer programs that perform many of the algorithms mentioned in the book are provided, both in an appendix to the book and on a website. In the process, the

  1. Efficient implementation of the Hiller-Sucher-Feinberg identity for the accurate determination of the electron density

    NASA Astrophysics Data System (ADS)

    Challacombe, Matt; Cioslowski, Jerzy

    1994-01-01

    A new, highly optimized implementation of the Hiller-Sucher-Feinberg (HSF) identity is presented. The HSF identity, when applied to molecular wave functions calculated with Gaussian-type basis functions, not only improves the overall accuracy of the electron density by more than an order of magnitude, but also yields approximate cusps at nuclei. The three classes of molecular integrals, L, U, and V, which are encountered in the calculation of the HSF density, are derived in compact form. Efficient algorithms for the accurate evaluation of these integrals are detailed, including a novel approach to the necessary numerical quadratures and the thresholding of two-electron V integrals. Hartree-Fock (HF) electron densities calculated with both the conventional definition and from the HSF identity are compared to their respective HF limits for a variety of diatomic molecules and basis sets. The average error in the calculated HSF electron densities at non-hydrogen nuclei equals 0.17%, which constitutes a marked improvement over an error of 5.77% in the conventional densities.

  2. Velocity field calculation for non-orthogonal numerical grids

    SciTech Connect

    Flach, G. P.

    2015-03-01

    Computational grids containing cell faces that do not align with an orthogonal (e.g. Cartesian, cylindrical) coordinate system are routinely encountered in porous-medium numerical simulations. Such grids are referred to in this study as non-orthogonal grids because some cell faces are not orthogonal to a coordinate system plane (e.g. xy, yz or xz plane in Cartesian coordinates). Non-orthogonal grids are routinely encountered at the Savannah River Site in porous-medium flow simulations for Performance Assessments and groundwater flow modeling. Examples include grid lines that conform to the sloping roof of a waste tank or disposal unit in a 2D Performance Assessment simulation, and grid surfaces that conform to undulating stratigraphic surfaces in a 3D groundwater flow model. Particle tracking is routinely performed after a porous-medium numerical flow simulation to better understand the dynamics of the flow field and/or as an approximate indication of the trajectory and timing of advective solute transport. Particle tracks are computed by integrating the velocity field from cell to cell starting from designated seed (starting) positions. An accurate velocity field is required to attain accurate particle tracks. However, many numerical simulation codes report only the volumetric flowrate (e.g. PORFLOW) and/or flux (flowrate divided by area) crossing cell faces. For an orthogonal grid, the normal flux at a cell face is a component of the Darcy velocity vector in the coordinate system, and the pore velocity for particle tracking is attained by dividing by water content. For a non-orthogonal grid, the flux normal to a cell face that lies outside a coordinate plane is not a true component of velocity with respect to the coordinate system. Nonetheless, normal fluxes are often taken as Darcy velocity components, either naively or with accepted approximation. To enable accurate particle tracking or otherwise present an accurate depiction of the velocity field for a non
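
    A minimal sketch of the particle-tracking step described above follows: a Darcy flux is converted to pore (seepage) velocity by dividing by water content, and a particle is advanced with a midpoint (RK2) step. The velocity field is an analytic stand-in rather than PORFLOW output, and the simple orthogonal-grid interpretation of face fluxes is assumed.

      # Sketch: pore velocity from Darcy flux, then midpoint (RK2) particle tracking.
      # The velocity field is an analytic stand-in for interpolated cell-face fluxes.
      import numpy as np

      theta = 0.30                                   # volumetric water content (stand-in value)

      def darcy_velocity(pos):
          """Analytic stand-in for a flux field interpolated to a point (m/s)."""
          x, y = pos
          return np.array([1.0e-6 * (1.0 + 0.1 * y), -2.0e-7 * x])

      def pore_velocity(pos):
          return darcy_velocity(pos) / theta         # seepage velocity for particle tracking

      def track(seed, dt, n_steps):
          """Midpoint (RK2) integration of dx/dt = v_pore(x)."""
          path = [np.asarray(seed, dtype=float)]
          for _ in range(n_steps):
              x = path[-1]
              k1 = pore_velocity(x)
              k2 = pore_velocity(x + 0.5 * dt * k1)
              path.append(x + dt * k2)
          return np.array(path)

      print(track(seed=(0.0, 0.0), dt=3.0e5, n_steps=10)[-1])   # final particle position (m)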

  3. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.

  4. Hybrid function method for solving Fredholm and Volterra integral equations of the second kind

    NASA Astrophysics Data System (ADS)

    Hsiao, Chun-Hui

    2009-08-01

    Numerical solutions of Fredholm and Volterra integral equations of the second kind via hybrid functions are proposed in this paper. Based upon some useful properties of hybrid functions, integration of the cross product, a special product matrix, and a related coefficient matrix with optimal order are applied to solve these integral equations. The main characteristic of this technique is to convert an integral equation into an algebraic system; hence, the solution procedures are either reduced or simplified accordingly. The advantages of hybrid functions are that the values of n and m are adjustable and that they yield more accurate numerical solutions than piecewise constant orthogonal functions for the solution of integral equations. We propose that the available optimal values of n and m can minimize the relative errors of the numerical solutions. The high accuracy and the wide applicability of the hybrid function approach will be demonstrated with numerical examples. The hybrid function method is superior to other piecewise constant orthogonal functions [W.F. Blyth, R.L. May, P. Widyaningsih, Volterra integral equations solved in Fredholm form using Walsh functions, Anziam J. 45 (E) (2004) C269-C282; M.H. Reihani, Z. Abadi, Rationalized Haar functions method for solving Fredholm and Volterra integral equations, J. Comp. Appl. Math. 200 (2007) 12-20] for these problems.
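
    The central step, converting the integral equation into an algebraic system, can be illustrated with a plain Nyström (trapezoid) collocation instead of the hybrid block-pulse/polynomial basis of the paper. The test kernel and right-hand side below are chosen so that the exact solution is u(x) = x.

      # Sketch: turn a Fredholm equation of the second kind,
      #   u(x) = f(x) + Integral_0^1 K(x,t) u(t) dt,
      # into a linear system by Nystrom/trapezoid collocation (not the paper's basis).
      import numpy as np

      n = 101
      t = np.linspace(0.0, 1.0, n)
      w = np.full(n, t[1] - t[0]); w[0] *= 0.5; w[-1] *= 0.5   # trapezoid weights

      K = np.outer(t, t)                  # kernel K(x, t) = x t
      f = 2.0 * t / 3.0                   # chosen so the exact solution is u(x) = x

      # collocate: u_i - sum_j w_j K(x_i, t_j) u_j = f_i  ->  (I - K W) u = f
      A = np.eye(n) - K * w               # broadcasting multiplies column j by w_j
      u = np.linalg.solve(A, f)

      print("max error vs exact u(x) = x:", np.max(np.abs(u - t)))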

  5. Conservative high-order-accurate finite-difference methods for curvilinear grids

    NASA Technical Reports Server (NTRS)

    Rai, Man M.; Chakravarthy, Sukumar

    1993-01-01

    Two fourth-order-accurate finite-difference methods for numerically solving hyperbolic systems of conservation equations on smooth curvilinear grids are presented. The first method uses the differential form of the conservation equations; the second method uses the integral form of the conservation equations. Modifications to these schemes, which are required near boundaries to maintain overall high-order accuracy, are discussed. An analysis that demonstrates the stability of the modified schemes is also provided. Modifications to one of the schemes to make it total variation diminishing (TVD) are also discussed. Results that demonstrate the high-order accuracy of both schemes are included in the paper. In particular, a Ringleb-flow computation demonstrates the high-order accuracy and the stability of the boundary and near-boundary procedures. A second computation of supersonic flow over a cylinder demonstrates the shock-capturing capability of the TVD methodology. An important contribution of this paper is the clear demonstration that higher order accuracy leads to increased computational efficiency.
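
    For reference, the kind of stencil underlying such schemes is the fourth-order central difference sketched below, together with a quick grid-refinement check that the error falls roughly by a factor of 16 per halving of h. The conservative curvilinear formulation and the boundary closures of the paper are not reproduced.

      # Sketch: fourth-order central first derivative on a periodic grid, with a
      # refinement check of the expected O(h^4) convergence.
      import numpy as np

      def d4(u, h):
          """Fourth-order central first derivative, periodic wrap-around."""
          return (-np.roll(u, -2) + 8 * np.roll(u, -1)
                  - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12.0 * h)

      errors = []
      for n in (32, 64, 128):
          x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
          h = x[1] - x[0]
          errors.append(np.max(np.abs(d4(np.sin(x), h) - np.cos(x))))

      # each halving of h should reduce the error by roughly 16x (fourth order)
      print(errors, [errors[i] / errors[i + 1] for i in range(2)])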

  6. Intrabeam scattering formulas for fast numerical evaluation

    SciTech Connect

    Nagaitsev, Sergei; /Fermilab

    2005-03-01

    Expressions for small-angle multiple intrabeam scattering (IBS) emittance growth rates are normally expressed through integrals, which require numerical evaluation at various locations of the accelerator lattice. In this paper, I demonstrate that the IBS growth rates can be presented as closed-form expressions with the help of the so-called symmetric elliptic integral. This integral can be evaluated numerically by a very efficient recursive method employing the duplication theorem. Several examples of IBS rates for a smooth-lattice approximation, equal transverse temperatures, and plasma temperature relaxation are given.
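
    The symmetric elliptic integral referred to above is Carlson's R_F, and the duplication theorem gives the simple recursive evaluation sketched below (standard Carlson series coefficients are used for the final correction). As a check, the sketch compares K(m) = R_F(0, 1 - m, 1) against SciPy's complete elliptic integral.

      # Sketch: Carlson's symmetric elliptic integral R_F(x, y, z) by the duplication
      # theorem, validated against the complete elliptic integral K(m).
      import numpy as np
      from scipy.special import ellipk

      def carlson_rf(x, y, z, tol=1e-10):
          """R_F(x,y,z) = 0.5 * Integral_0^inf dt / sqrt((t+x)(t+y)(t+z))."""
          for _ in range(100):
              sx, sy, sz = np.sqrt(x), np.sqrt(y), np.sqrt(z)
              lam = sx * sy + sy * sz + sz * sx          # duplication variable
              x, y, z = 0.25 * (x + lam), 0.25 * (y + lam), 0.25 * (z + lam)
              mu = (x + y + z) / 3.0
              dx, dy, dz = 1.0 - x / mu, 1.0 - y / mu, 1.0 - z / mu
              if max(abs(dx), abs(dy), abs(dz)) < tol:
                  break
          e2, e3 = dx * dy - dz * dz, dx * dy * dz       # standard Carlson expansion
          return (1.0 - e2 / 10.0 + e3 / 14.0 + e2 * e2 / 24.0 - 3.0 * e2 * e3 / 44.0) / np.sqrt(mu)

      m = 0.5
      print(carlson_rf(0.0, 1.0 - m, 1.0), ellipk(m))    # both ~1.8540746773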

  7. Tensor numerical methods in quantum chemistry: from Hartree-Fock to excitation energies.

    PubMed

    Khoromskaia, Venera; Khoromskij, Boris N

    2015-12-21

    We review the recent successes of the grid-based tensor numerical methods and discuss their prospects in real-space electronic structure calculations. These methods, based on the low-rank representation of multidimensional functions and integral operators, first appeared as an accurate tensor calculus for the 3D Hartree potential using 1D-complexity operations, and have evolved into an entirely grid-based tensor-structured 3D Hartree-Fock eigenvalue solver. It benefits from tensor calculation of the core Hamiltonian and two-electron integrals (TEI) in O(n log n) complexity using the rank-structured approximation of basis functions, electron densities and convolution integral operators, all represented on 3D n × n × n Cartesian grids. The algorithm for calculating the TEI tensor in the form of a Cholesky decomposition is based on multiple factorizations using an algebraic 1D "density fitting" scheme, which yields an almost irreducible number of product basis functions involved in the 3D convolution integrals, depending on a threshold ε > 0. The basis functions are not restricted to separable Gaussians, since the analytical integration is substituted by high-precision tensor-structured numerical quadratures. The tensor approaches to post-Hartree-Fock calculations for the MP2 energy correction and for the Bethe-Salpeter excitation energies, based on low-rank factorizations and the reduced basis method, were recently introduced. Another direction is towards a tensor-based Hartree-Fock numerical scheme for finite lattices, where one of the numerical challenges is the summation of electrostatic potentials of a large number of nuclei. The 3D grid-based tensor method for calculating a potential sum on an L × L × L lattice requires computational work linear in L, O(L), instead of the usual O(L^3 log L) scaling of Ewald-type approaches.

  8. Numerical simulations of cryogenic cavitating flows

    NASA Astrophysics Data System (ADS)

    Kim, Hyunji; Kim, Hyeongjun; Min, Daeho; Kim, Chongam

    2015-12-01

    The present study deals with a numerical method for cryogenic cavitating flows. Recently, we have developed an accurate and efficient baseline numerical scheme for all-speed water-gas two-phase flows. Extending that progress, we modify the numerical dissipations to be properly scaled so that they do not show any deficiencies in low Mach number regions. For dealing with cryogenic two-phase flows, the previous EOS-dependent shock discontinuity sensing term is replaced with a newly designed EOS-free one. To validate the proposed numerical method, cryogenic cavitating flows around a hydrofoil are computed and the pressure and temperature depression effects in cryogenic cavitation are demonstrated. Compared with Hord's experimental data, the computed results turn out to be satisfactory. Afterwards, numerical simulations of the flow around the KARI turbopump inducer of a liquid rocket are carried out under various flow conditions with water and cryogenic fluids, and the differences in inducer flow physics depending on the working fluid are examined.

  9. Recent advances in numerical PDEs

    NASA Astrophysics Data System (ADS)

    Zuev, Julia Michelle

    In this thesis, we investigate four neighboring topics, all in the general area of numerical methods for solving Partial Differential Equations (PDEs). Topic 1. Radial Basis Functions (RBF) are widely used for multi-dimensional interpolation of scattered data. This methodology offers smooth and accurate interpolants, which can be further refined, if necessary, by clustering nodes in select areas. We show, however, that local refinements with RBF (in a constant shape parameter ε regime) may lead to the oscillatory errors associated with the Runge phenomenon (RP). RP is best known in the case of high-order polynomial interpolation, where its effects can be accurately predicted via the Lebesgue constant L (which is based solely on the node distribution). We study the RP and the applicability of the Lebesgue constant (as well as other error measures) in RBF interpolation. Mainly, we allow for a spatially variable shape parameter, and demonstrate how it can be used to suppress RP-like edge effects and to improve the overall stability and accuracy. Topic 2. Although not as versatile as RBFs, cubic splines are useful for interpolating grid-based data. In 2-D, we consider a patch representation via Hermite basis functions s_{i,j}(u, v) = Σ_{m,n} h_{mn} H_m(u) H_n(v), as opposed to the standard bicubic representation. Stitching requirements for the rectangular non-equispaced grid yield a 2-D tridiagonal linear system AX = B, where X represents the unknown first derivatives. We discover that the standard methods for solving this N×M system do not take advantage of the spline-specific format of the matrix B. We develop an alternative approach using this specialization of the RHS, which allows us to pre-compute coefficients only once, instead of N times. A MATLAB implementation of our fast 2-D cubic spline algorithm is provided. We confirm analytically and numerically that for large N (N > 200), our method is at least 3 times faster than the
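
    In one dimension, the analogous system for the unknown node derivatives of a C2 cubic Hermite spline on a non-equispaced grid is tridiagonal and can be solved with the Thomas algorithm, as sketched below (natural end conditions assumed, checked against SciPy's natural cubic spline). The thesis' actual contribution, exploiting the structure of the right-hand side of the 2-D N×M system, is not reproduced here.

      # Sketch: tridiagonal system for the first derivatives of a C2 cubic Hermite
      # spline on a non-equispaced grid, solved with the Thomas algorithm.
      import numpy as np
      from scipy.interpolate import CubicSpline

      def spline_derivatives(x, y):
          n = len(x)
          h = np.diff(x)
          a = np.zeros(n); b = np.zeros(n); c = np.zeros(n); r = np.zeros(n)  # sub, diag, super, rhs
          b[0], c[0], r[0] = 2.0, 1.0, 3.0 * (y[1] - y[0]) / h[0]             # natural: s''(x0) = 0
          for i in range(1, n - 1):
              a[i] = 1.0 / h[i - 1]
              b[i] = 2.0 * (1.0 / h[i - 1] + 1.0 / h[i])
              c[i] = 1.0 / h[i]
              r[i] = 3.0 * ((y[i] - y[i - 1]) / h[i - 1] ** 2 + (y[i + 1] - y[i]) / h[i] ** 2)
          a[-1], b[-1], r[-1] = 1.0, 2.0, 3.0 * (y[-1] - y[-2]) / h[-1]       # natural: s''(xn) = 0

          # Thomas algorithm: forward elimination, then back substitution
          for i in range(1, n):
              m = a[i] / b[i - 1]
              b[i] -= m * c[i - 1]
              r[i] -= m * r[i - 1]
          d = np.zeros(n)
          d[-1] = r[-1] / b[-1]
          for i in range(n - 2, -1, -1):
              d[i] = (r[i] - c[i] * d[i + 1]) / b[i]
          return d

      x = np.sort(np.random.default_rng(3).uniform(0.0, 2.0 * np.pi, 40))    # non-equispaced nodes
      y = np.sin(x)
      d = spline_derivatives(x, y)
      ref = CubicSpline(x, y, bc_type="natural")(x, 1)                        # SciPy's node derivatives
      print("max difference vs SciPy:", np.max(np.abs(d - ref)))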

  10. Nonlinear dynamics and numerical uncertainties in CFD

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.

    1996-01-01

    The application of nonlinear dynamics to improve the understanding of numerical uncertainties in computational fluid dynamics (CFD) is reviewed. Elementary examples in the use of dynamics to explain the nonlinear phenomena and spurious behavior that occur in numerics are given. The role of dynamics in understanding the long time behavior of numerical integrations and the nonlinear stability, convergence, and reliability of time-marching approaches for obtaining steady-state numerical solutions in CFD is explained. The study is complemented with spurious behavior observed in CFD computations.

  11. Pair correlation function integrals: Computation and use

    NASA Astrophysics Data System (ADS)

    Wedberg, Rasmus; O'Connell, John P.; Peters, Günther H.; Abildskov, Jens

    2011-08-01

    We describe a method for extending radial distribution functions obtained from molecular simulations of pure and mixed molecular fluids to arbitrary distances. The method allows total correlation function integrals to be reliably calculated from simulations of relatively small systems. The long-distance behavior of radial distribution functions is determined by requiring that the corresponding direct correlation functions follow certain approximations at long distances. We have briefly described the method and tested its performance in previous communications [R. Wedberg, J. P. O'Connell, G. H. Peters, and J. Abildskov, Mol. Simul. 36, 1243 (2010), 10.1080/08927020903536366; Fluid Phase Equilib. 302, 32 (2011), 10.1016/j.fluid.2010.10.004], but describe here its theoretical basis more thoroughly and derive long-distance approximations for the direct correlation functions. We describe the numerical implementation of the method in detail, and report numerical tests complementing previous results. Pure molecular fluids are here studied in the isothermal-isobaric ensemble with isothermal compressibilities evaluated from the total correlation function integrals and compared with values derived from volume fluctuations. For systems where the radial distribution function has structure beyond the sampling limit imposed by the system size, the integration is more reliable, and usually more accurate, than simple integral truncation.
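
    For a pure fluid, the total correlation function integral and the compressibility that follows from it take the form G = 4π ∫ (g(r) - 1) r² dr and ρ k_B T κ_T = 1 + ρ G. The sketch below evaluates both by plain truncation for a synthetic, low-density g(r); the point of the paper is precisely that such naive truncation becomes unreliable when g(r) has structure beyond the sampling limit, which the long-distance extension is designed to fix.

      # Sketch: truncated total-correlation-function (Kirkwood-Buff) integral for a
      # pure fluid and the isothermal compressibility implied by it. The g(r) below
      # is an invented, low-density illustration, not simulation data.
      import numpy as np

      kB = 1.380649e-23                        # J/K
      sigma = 3.4e-10                          # molecular size scale (m), Ar-like
      rho, T = 0.05 / sigma**3, 120.0          # low number density (m^-3) and temperature (K)

      r = np.linspace(1e-12, 3.0e-9, 2000)     # separation (m)
      g = np.where(r < sigma, 0.0,
                   1.0 + 0.5 * np.exp(-(r - sigma) / sigma)
                   * np.cos(2.0 * np.pi * (r - sigma) / sigma))

      h = g - 1.0
      G = 4.0 * np.pi * np.sum(h * r**2) * (r[1] - r[0])     # truncated integral (m^3)
      kappa_T = (1.0 + rho * G) / (rho * kB * T)             # isothermal compressibility (1/Pa)
      print("G =", G, "m^3   kappa_T =", kappa_T, "1/Pa")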

  12. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  13. A Riemann-Hilbert problem for the finite-genus solutions of the KdV equation and its numerical solution

    NASA Astrophysics Data System (ADS)

    Trogdon, Thomas; Deconinck, Bernard

    2013-05-01

    We derive a Riemann-Hilbert problem satisfied by the Baker-Akhiezer function for the finite-gap solutions of the Korteweg-de Vries (KdV) equation. As usual for Riemann-Hilbert problems associated with solutions of integrable equations, this formulation has the benefit that the space and time dependence appears in an explicit, linear and computable way. We make use of recent advances in the numerical solution of Riemann-Hilbert problems to produce an efficient and uniformly accurate numerical method for computing all periodic and quasi-periodic finite-genus solutions of the KdV equation.

  14. Direct computation of parameters for accurate polarizable force fields

    SciTech Connect

    Verstraelen, Toon Vandenbrande, Steven; Ayers, Paul W.

    2014-11-21

    We present an improved electronic linear response model to incorporate polarization and charge-transfer effects in polarizable force fields. This model is a generalization of the Atom-Condensed Kohn-Sham Density Functional Theory (DFT), approximated to second order (ACKS2): it can now be defined with any underlying variational theory (besides KS-DFT) and it can include atomic multipoles and off-center basis functions. Parameters in this model are computed efficiently as expectation values of an electronic wavefunction, obviating the need for their calibration, regularization, and manual tuning. In the limit of a complete density and potential basis set in the ACKS2 model, the linear response properties of the underlying theory for a given molecular geometry are reproduced exactly. A numerical validation with a test set of 110 molecules shows that very accurate models can already be obtained with fluctuating charges and dipoles. These features greatly facilitate the development of polarizable force fields.

  15. Accurate Anisotropic Fast Marching for Diffusion-Based Geodesic Tractography

    PubMed Central

    Jbabdi, S.; Bellec, P.; Toro, R.; Daunizeau, J.; Pélégrini-Issac, M.; Benali, H.

    2008-01-01

    Using geodesics for inferring white matter fibre tracts from diffusion-weighted MR data is an attractive method for at least two reasons: (i) the method optimises a global criterion, and hence is less sensitive to local perturbations such as noise or partial volume effects, and (ii) the method is fast, allowing inference on a large number of connections in a reasonable computational time. Here, we propose an improved fast marching algorithm to infer geodesic paths. Specifically, this procedure is designed to achieve accurate front propagation in an anisotropic elliptic medium, such as DTI data. We evaluate the numerical performance of this approach on simulated datasets, as well as its robustness to local perturbation induced by fiber crossing. On real data, we demonstrate the feasibility of extracting geodesics to connect an extended set of brain regions. PMID:18299703

  16. Accurate Guitar Tuning by Cochlear Implant Musicians

    PubMed Central

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  17. New model accurately predicts reformate composition

    SciTech Connect

    Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.

    1994-01-31

    Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including: reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.

  18. Accurate colorimetric feedback for RGB LED clusters

    NASA Astrophysics Data System (ADS)

    Man, Kwong; Ashdown, Ian

    2006-08-01

    We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within +/-0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
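
    The low-order temperature model alluded to above can be as simple as the second-order polynomial fit sketched below, which predicts a chromaticity coordinate of one LED channel as a function of junction temperature. The calibration points are invented for illustration; real coefficients would come from measurements of the actual LEDs.

      # Sketch: second-order polynomial model of a chromaticity coordinate versus
      # junction temperature, fitted to invented calibration points.
      import numpy as np

      T_cal = np.array([25.0, 40.0, 55.0, 70.0, 85.0])            # junction temperature, deg C
      u_cal = np.array([0.2120, 0.2128, 0.2139, 0.2153, 0.2170])  # measured u' of one channel (invented)

      coeffs = np.polyfit(T_cal, u_cal, deg=2)                    # second-order model u'(T)
      predict = np.poly1d(coeffs)

      print("predicted u' at 60 C:", predict(60.0))               # feed this into the color feedback loop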

  19. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.

  20. Numerical methods for the stochastic Landau-Lifshitz Navier-Stokes equations.

    PubMed

    Bell, John B; Garcia, Alejandro L; Williams, Sarah A

    2007-07-01

    The Landau-Lifshitz Navier-Stokes (LLNS) equations incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. This paper examines explicit Eulerian discretizations of the full LLNS equations. Several computational fluid dynamics approaches are considered (including MacCormack's two-step Lax-Wendroff scheme and the piecewise parabolic method) and are found to give good results for the variance of momentum fluctuations. However, neither of these schemes accurately reproduces the fluctuations in energy or density. We introduce a conservative centered scheme with a third-order Runge-Kutta temporal integrator that does accurately produce fluctuations in density, energy, and momentum. A variety of numerical tests, including the random walk of a standing shock wave, are considered and results from the stochastic LLNS solver are compared with theory, when available, and with molecular simulations using a direct simulation Monte Carlo algorithm.
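
    The deterministic skeleton of such a scheme, a conservative centered flux difference advanced with a three-stage third-order (SSP) Runge-Kutta integrator, is sketched below for 1D linear advection with periodic boundaries. The stochastic fluxes that make the full LLNS discretization reproduce thermal fluctuations are deliberately omitted.

      # Sketch: conservative centered flux difference + SSP-RK3 (Shu-Osher) time
      # integration for 1D linear advection; the stochastic LLNS fluxes are omitted.
      import numpy as np

      nx, c = 200, 1.0
      x = np.linspace(0.0, 1.0, nx, endpoint=False)
      dx = x[1] - x[0]
      dt = 0.4 * dx / c
      u = np.exp(-200.0 * (x - 0.5) ** 2)              # initial Gaussian pulse
      mass0 = np.sum(u) * dx

      def rhs(u):
          """Conservative centered update: -(F_{i+1/2} - F_{i-1/2}) / dx with F = c*u."""
          f_face = 0.5 * c * (u + np.roll(u, -1))      # centered face flux F_{i+1/2}
          return -(f_face - np.roll(f_face, 1)) / dx

      for _ in range(200):                             # SSP-RK3 stages
          u1 = u + dt * rhs(u)
          u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
          u = u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

      print("mass change (conserved to round-off):", np.sum(u) * dx - mass0)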