Science.gov

Sample records for accurate numerical integration

  1. On the very accurate numerical evaluation of the Generalized Fermi-Dirac Integrals

    NASA Astrophysics Data System (ADS)

    Mohankumar, N.; Natarajan, A.

    2016-10-01

    We indicate a new and very accurate algorithm for the evaluation of the Generalized Fermi-Dirac Integral with a relative error less than 10^-20. The method involves Double Exponential, Trapezoidal and Gauss-Legendre quadratures. For the residue correction of the Gauss-Legendre scheme, a simple and precise continued fraction algorithm is used.
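    The double-exponential and trapezoidal ingredients named in the abstract combine naturally. The sketch below is an illustrative double-precision toy, not the authors' algorithm (and nowhere near their 10^-20 target): a double-exponential substitution maps the half-line so that a plain trapezoidal sum converges rapidly; the step size, truncation range, and overflow guard are assumed values.

```python
import math

def de_quad_0_inf(f, h=0.05, n=80):
    # Double-exponential rule for integrals over (0, inf): substitute
    # x = exp(pi/2 * sinh(t)); the plain trapezoidal rule in t then
    # converges extremely fast for analytic integrands.
    total = 0.0
    for k in range(-n, n + 1):
        t = k * h
        x = math.exp(0.5 * math.pi * math.sinh(t))
        w = 0.5 * math.pi * math.cosh(t) * x   # dx/dt
        total += f(x) * w
    return total * h

def fermi_dirac(k, eta):
    # F_k(eta) = integral_0^inf x^k / (exp(x - eta) + 1) dx
    def integrand(x):
        u = x - eta
        if u > 700.0:       # exp would overflow; the term is negligible anyway
            return 0.0
        return x**k / (math.exp(u) + 1.0)
    return de_quad_0_inf(integrand)

# k = 0 has the closed form F_0(eta) = log(1 + exp(eta)), a handy check.
f0 = fermi_dirac(0.0, 1.0)
```

    For k = 0 and eta = 1 the result can be compared against log(1 + e) ≈ 1.3133; the non-integer orders k = 1/2, 3/2, 5/2 used in stellar physics need no change to the code.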

  2. Numerical Integration

    ERIC Educational Resources Information Center

    Sozio, Gerry

    2009-01-01

    Senior secondary students cover numerical integration techniques in their mathematics courses. In particular, students would be familiar with the "midpoint rule," the elementary "trapezoidal rule" and "Simpson's rule." This article derives these techniques by methods with which secondary students may not be familiar and an approach that…
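    The three rules named in the abstract can be written down in a few lines each; the test integral below is an illustrative choice, not one from the article.

```python
import math

def midpoint(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def simpson(f, a, b, n):
    # n must be even: endpoints get weight 1, odd interior nodes 4, even ones 2
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

# integral of sin over [0, pi] is exactly 2
errors = {r.__name__: abs(r(math.sin, 0.0, math.pi, 16) - 2.0)
          for r in (midpoint, trapezoid, simpson)}
```

    The midpoint error is roughly half the trapezoidal error with opposite sign, and Simpson's rule, a weighted combination of the two, is fourth-order rather than second-order accurate.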

  3. Rythmos Numerical Integration Package

    2006-09-01

    Rythmos numerically integrates transient differential equations. The differential equations can be explicit or implicit ordinary differential equations or formulated as fully implicit differential-algebraic equations. Methods include backward Euler, forward Euler, explicit Runge-Kutta, and implicit BDF at this time. Native support for operator split methods and strict modularity are strong design goals. Forward sensitivity computations will be included in the first release, with adjoint sensitivities coming in the near future. Rythmos relies heavily on Thyra for linear algebra and on nonlinear solver interfaces to AztecOO, Amesos, IFPack, and NOX in Trilinos. Rythmos is especially suited for stiff differential equations and those applications where operator split methods have a big advantage, e.g., computational fluid dynamics, convection-diffusion equations, etc.

  4. Rythmos Numerical Integration Package

    SciTech Connect

    Coffey, Todd S.; Bartlett, Roscoe A.

    2006-09-01

    Rythmos numerically integrates transient differential equations. The differential equations can be explicit or implicit ordinary differential equations or formulated as fully implicit differential-algebraic equations. Methods include backward Euler, forward Euler, explicit Runge-Kutta, and implicit BDF at this time. Native support for operator split methods and strict modularity are strong design goals. Forward sensitivity computations will be included in the first release, with adjoint sensitivities coming in the near future. Rythmos relies heavily on Thyra for linear algebra and on nonlinear solver interfaces to AztecOO, Amesos, IFPack, and NOX in Trilinos. Rythmos is especially suited for stiff differential equations and those applications where operator split methods have a big advantage, e.g., computational fluid dynamics, convection-diffusion equations, etc.
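    The stability gap between the forward and backward Euler methods listed above is easy to demonstrate on a stiff scalar test problem. The equation, step size, and stiffness parameter below are illustrative choices, not drawn from Rythmos.

```python
import math

# y' = -lam * (y - cos(t)): stiff relaxation toward the slow solution cos(t)
lam, h, T = 500.0, 0.05, 1.0

def forward_euler_step(y, t):
    return y + h * (-lam * (y - math.cos(t)))

def backward_euler_step(y, t):
    # y1 = y + h*f(t1, y1); f is linear in y, so the implicit solve is closed-form
    t1 = t + h
    return (y + h * lam * math.cos(t1)) / (1.0 + h * lam)

y_exp = y_imp = 1.0
t = 0.0
for _ in range(int(round(T / h))):
    y_exp = forward_euler_step(y_exp, t)
    y_imp = backward_euler_step(y_imp, t)
    t += h
# h*lam = 25 puts forward Euler far outside its stability region: y_exp blows
# up, while backward Euler stays close to the slow solution cos(1) ~ 0.540.
```

    An implicit method pays for each step with a (here trivial) nonlinear solve but can take steps sized to the slow dynamics, which is why stiff-capable packages pair integrators with nonlinear solver interfaces.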

  5. Numerical integration of analytic functions

    NASA Astrophysics Data System (ADS)

    Milovanović, Gradimir V.; Tošić, Dobrilo Đ.; Albijanić, Miloljub

    2012-09-01

    A weighted generalized N-point Birkhoff-Young quadrature of interpolatory type for numerical integration of analytic functions is considered. Special cases of such quadratures with respect to the generalized Gegenbauer weight function are derived.

  6. Numerical integration of subtraction terms

    NASA Astrophysics Data System (ADS)

    Seth, Satyajit; Weinzierl, Stefan

    2016-06-01

    Numerical approaches to higher-order calculations often employ subtraction terms, both for the real emission and the virtual corrections. These subtraction terms have to be added back. In this paper we show that at NLO the real subtraction terms, the virtual subtraction terms, the integral representations of the field renormalization constants and—in the case of initial-state partons—the integral representation for the collinear counterterm can be grouped together to give finite integrals, which can be evaluated numerically. This is useful for an extension towards next-to-next-to-leading order.

  7. Cuba: Multidimensional numerical integration library

    NASA Astrophysics Data System (ADS)

    Hahn, Thomas

    2016-08-01

    The Cuba library offers four independent routines for multidimensional numerical integration: Vegas, Suave, Divonne, and Cuhre. The four algorithms work by very different methods; all can integrate vector integrands and have very similar Fortran, C/C++, and Mathematica interfaces. Their near-identical invocation makes it easy to cross-check results by substituting one method for another. For further safeguarding, the output is supplemented by a chi-square probability which quantifies the reliability of the error estimate.
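    Cuba's routines are far more sophisticated, but the basic contract, an integral estimate accompanied by an error estimate, can be illustrated with plain Monte Carlo over the unit hypercube. Everything below is a hand-rolled sketch, not Cuba's API.

```python
import math, random

def mc_integrate(f, dim, n, seed=0):
    # Plain Monte Carlo: the sample mean estimates the integral and the
    # standard deviation of the mean estimates the error.
    rng = random.Random(seed)
    s = s2 = 0.0
    for _ in range(n):
        v = f([rng.random() for _ in range(dim)])
        s += v
        s2 += v * v
    mean = s / n
    err = math.sqrt(max(s2 / n - mean * mean, 0.0) / n)
    return mean, err

# integral of x*y*z over [0,1]^3 is exactly 1/8
est, err = mc_integrate(lambda p: p[0] * p[1] * p[2], 3, 100_000)
```

    A chi-square test across independent sub-estimates, which is what Cuba reports, then checks whether the quoted error is itself trustworthy.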

  8. Accurate Critical Stress Intensity Factor Griffith Crack Theory Measurements by Numerical Techniques

    PubMed Central

    Petersen, Richard C.

    2014-01-01

    Critical stress intensity factor (KIc) has been an approximation for fracture toughness using only load-cell measurements. However, artificial man-made cracks several orders of magnitude longer and wider than natural flaws have required a correction factor term (Y) that can be up to about 3 times the recorded experimental value [1-3]. In fact, over 30 years ago a National Academy of Sciences advisory board stated that empirical KIc testing was of serious concern and further requested that an accurate bulk fracture toughness method be found [4]. Now that fracture toughness can be calculated accurately by numerical integration from the load/deflection curve as resilience, work of fracture (WOF) and strain energy release (SIc) [5, 6], KIc appears to be unnecessary. However, the large body of previous KIc experimental test results found in the literature offers the opportunity for continued meta-analysis with other more practical and accurate fracture toughness results using energy methods and numerical integration. Therefore, KIc is derived from the classical Griffith Crack Theory [6] to include SIc as a more accurate term for strain energy release rate (𝒢Ic), along with crack surface energy (γ), crack length (a), modulus (E), applied stress (σ), Y, crack-tip plastic zone defect region (rp) and yield strength (σys), all of which can be determined from load and deflection data. To accentuate toughness differences, polymer-matrix discontinuous quartz fiber-reinforced composites were prepared for flexural mechanical testing, comprising 3 mm fibers at volume percentages from 0-54.0 vol% and, at 28.2 vol%, fiber lengths from 0.0-6.0 mm. Results provided a new correction factor and regression analyses between several numerical integration fracture toughness test methods to support KIc results. Further, accurate bulk KIc experimental values are compared with empirical test results found in the literature. Also, several fracture toughness mechanisms…

  9. Efficient numerical evaluation of Feynman integrals

    NASA Astrophysics Data System (ADS)

    Li, Zhao; Wang, Jian; Yan, Qi-Shu; Zhao, Xiaoran

    2016-03-01

    Feynman loop integrals are a key ingredient for the calculation of higher-order radiative effects, and are essential for reliable and accurate theoretical predictions. We improve the efficiency of numerical integration in sector decomposition by implementing a quasi-Monte Carlo method associated with the CUDA/GPU technique. For demonstration we present the results of several Feynman integrals up to two loops in both Euclidean and physical kinematic regions in comparison with those obtained from FIESTA3. It is shown that both planar and non-planar two-loop master integrals in the physical kinematic region can be evaluated accurately in less than half a minute, which makes the direct numerical approach viable for precise investigation of higher-order effects in multi-loop processes, e.g. the next-to-leading order QCD effect in Higgs pair production via gluon fusion with a finite top quark mass. Supported by the Natural Science Foundation of China (11305179, 11475180), Youth Innovation Promotion Association, CAS, IHEP Innovation (Y4545170Y2), State Key Lab for Electronics and Particle Detectors, Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y4KF061CJ1), Cluster of Excellence Precision Physics, Fundamental Interactions and Structure of Matter (PRISMA-EXC 1098)
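    A quasi-Monte Carlo rule replaces pseudorandom points with a low-discrepancy sequence, which improves the convergence rate for smooth integrands. A minimal 2-D sketch using a Halton sequence (nothing here reflects the authors' CUDA/GPU implementation or their sector-decomposed integrands):

```python
def halton(i, base):
    # Radical-inverse (van der Corput) value of index i in the given base.
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_integrate(f, n):
    # 2-D Halton points (bases 2 and 3) fill the unit square far more
    # evenly than pseudorandom points do.
    return sum(f(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)) / n

# integral of x^2 + y^2 over the unit square is exactly 2/3
res = qmc_integrate(lambda x, y: x * x + y * y, 4096)
```

    For smooth integrands the error decays close to 1/n rather than the 1/sqrt(n) of plain Monte Carlo, which is the practical appeal of QMC in loop-integral evaluation.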

  10. Numerical evolution of multiple black holes with accurate initial data

    SciTech Connect

    Galaviz, Pablo; Bruegmann, Bernd; Cao Zhoujian

    2010-07-15

    We present numerical evolutions of three equal-mass black holes using the moving puncture approach. We calculate puncture initial data for three black holes solving the constraint equations by means of a high-order multigrid elliptic solver. Using these initial data, we show the results for three black hole evolutions with sixth-order waveform convergence. We compare results obtained with the BAM and AMSS-NCKU codes with previous results. The approximate analytic solution to the Hamiltonian constraint used in previous simulations of three black holes leads to different dynamics and waveforms. We present some numerical experiments showing the evolution of four black holes and the resulting gravitational waveform.

  11. Fast and Accurate Learning When Making Discrete Numerical Estimates.

    PubMed

    Sanborn, Adam N; Beierholm, Ulrik R

    2016-04-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
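    The two decision functions contrasted in the abstract, drawing a sample from the posterior versus taking its maximum, are easy to state concretely. The discrete bimodal prior and Gaussian likelihood below are hypothetical stand-ins, not the stimulus distributions used in the experiments.

```python
import math, random

# Hypothetical discrete bimodal prior over counts 1..10, peaked at 3 and 8.
prior = {k: (0.3 if k in (3, 8) else 0.05) for k in range(1, 11)}

def posterior(obs, sigma=1.5):
    # Gaussian likelihood around the noisy observation, discrete support.
    w = {k: p * math.exp(-((k - obs) ** 2) / (2 * sigma ** 2))
         for k, p in prior.items()}
    z = sum(w.values())
    return {k: v / z for k, v in w.items()}

def estimate_max(post):
    # Decision function 1: take the maximum of the posterior.
    return max(post, key=post.get)

def estimate_sample(post, rng):
    # Decision function 2: draw a single sample from the posterior.
    u, c = rng.random(), 0.0
    for k, p in sorted(post.items()):
        c += p
        if u <= c:
            return k
    return max(post)

rng = random.Random(1)
post = posterior(7.5)
map_est = estimate_max(post)
samples = [estimate_sample(post, rng) for _ in range(100)]
```

    The sampling rule produces variable responses whose spread mirrors the posterior, while the maximum rule is deterministic; the paper reports that participants fell between these two extremes.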

  12. Fast and Accurate Learning When Making Discrete Numerical Estimates.

    PubMed

    Sanborn, Adam N; Beierholm, Ulrik R

    2016-04-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates.

  13. Fast and Accurate Learning When Making Discrete Numerical Estimates

    PubMed Central

    Sanborn, Adam N.; Beierholm, Ulrik R.

    2016-01-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155

  14. Accurate numerical simulation of short fiber optical parametric amplifiers.

    PubMed

    Marhic, M E; Rieznik, A A; Kalogerakis, G; Braimiotis, C; Fragnito, H L; Kazovsky, L G

    2008-03-17

    We improve the accuracy of numerical simulations for short fiber optical parametric amplifiers (OPAs). Instead of using the usual coarse-step method, we adopt a model for birefringence and dispersion which uses fine-step variations of the parameters. We also improve the split-step Fourier method by exactly treating the nonlinear ellipse rotation terms. We find that results obtained this way for two-pump OPAs can be significantly different from those obtained by using the usual coarse-step fiber model, and/or neglecting ellipse rotation terms.

  15. Accurate numerical solution of compressible, linear stability equations

    NASA Technical Reports Server (NTRS)

    Malik, M. R.; Chuang, S.; Hussaini, M. Y.

    1982-01-01

    The present investigation is concerned with a fourth order accurate finite difference method and its application to the study of the temporal and spatial stability of the three-dimensional compressible boundary layer flow on a swept wing. This method belongs to the class of compact two-point difference schemes discussed by White (1974) and Keller (1974). The method was apparently first used for solving the two-dimensional boundary layer equations. Attention is given to the governing equations, the solution technique, and the search for eigenvalues. A general purpose subroutine is employed for solving a block tridiagonal system of equations. The computer time can be reduced significantly by exploiting the special structure of two matrices.

  16. Accurate numerical solutions for elastic-plastic models. [LMFBR

    SciTech Connect

    Schreyer, H. L.; Kulak, R. F.; Kramer, J. M.

    1980-03-01

    The accuracy of two integration algorithms is studied for the common engineering condition of a von Mises, isotropic hardening model under plane stress. Errors in stress predictions for given total strain increments are expressed with contour plots of two parameters: an angle in the pi plane and the difference between the exact and computed yield-surface radii. The two methods are the tangent-predictor/radial-return approach and the elastic-predictor/radial-corrector algorithm originally developed by Mendelson. The accuracy of a combined tangent-predictor/radial-corrector algorithm is also investigated.
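    For reference, a bare-bones radial-return update for a von Mises model with linear isotropic hardening, written for principal deviatoric stresses. This is the general 3-D case rather than the plane-stress setting studied in the report, and the material constants are illustrative.

```python
import math

def radial_return(s_trial, alpha, mu=80e3, hard=10e3, sigma_y=250.0):
    # s_trial: principal deviatoric trial stresses [s1, s2, s3] (MPa),
    # alpha: accumulated plastic strain. Returns the updated pair.
    norm = math.sqrt(sum(s * s for s in s_trial))
    radius = math.sqrt(2.0 / 3.0) * (sigma_y + hard * alpha)
    if norm <= radius:                       # elastic: trial state is admissible
        return s_trial, alpha
    # Plastic: scale the trial stress radially back onto the expanded surface.
    dgamma = (norm - radius) / (2.0 * mu + (2.0 / 3.0) * hard)
    scale = 1.0 - 2.0 * mu * dgamma / norm   # radial return in the pi plane
    return [s * scale for s in s_trial], alpha + math.sqrt(2.0 / 3.0) * dgamma

s_new, a_new = radial_return([300.0, -150.0, -150.0], 0.0)
```

    After the update the stress sits exactly on the hardened yield surface (||s|| equals sqrt(2/3) times the updated yield stress), which is the discrete consistency condition the predictor/corrector algorithms compared in the report enforce in different ways.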

  17. Second-Order Accurate Projective Integrators for Multiscale Problems

    SciTech Connect

    Lee, S L; Gear, C W

    2005-05-27

    We introduce new projective versions of second-order accurate Runge-Kutta and Adams-Bashforth methods, and demonstrate their use as outer integrators in solving stiff differential systems. An important outcome is that the new outer integrators, when combined with an inner telescopic projective integrator, can result in fully explicit methods with adaptive outer step size selection and solution accuracy comparable to those obtained by implicit integrators. If the stiff differential equations are not directly available, our formulations and stability analysis are general enough to allow the combined outer-inner projective integrators to be applied to black-box legacy codes or perform a coarse-grained time integration of microscopic systems to evolve macroscopic behavior, for example.
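    The inner/outer structure described above can be sketched in a few lines. This is the first-order (forward-Euler) projective variant rather than the second-order Runge-Kutta and Adams-Bashforth versions introduced in the paper, and the stiff test problem is an illustrative choice.

```python
import math

def projective_step(f, y, t, h, k=5, big=20):
    # Inner: k forward-Euler steps of size h damp the fast transients.
    for i in range(k):
        y_prev = y
        y = y + h * f(t + i * h, y)
    # Outer: extrapolate ("project") along the last inner chord over big*h.
    slope = (y - y_prev) / h
    return y + big * h * slope, t + (k + big) * h

# Stiff test problem: y' = -100*(y - sin t) + cos t, exact solution y = sin t.
f = lambda t, y: -100.0 * (y - math.sin(t)) + math.cos(t)
y, t = 0.0, 0.0
while t < 1.0 - 1e-9:
    y, t = projective_step(f, y, t, 0.005)
```

    The scheme stays fully explicit yet advances 25 inner-step-sizes per projective step; the fast mode is damped enough by the inner steps that the large extrapolation does not destabilize it.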

  18. Numerical Integration: One Step at a Time

    ERIC Educational Resources Information Center

    Yang, Yajun; Gordon, Sheldon P.

    2016-01-01

    This article looks at the effects that adding a single extra subdivision has on the level of accuracy of some common numerical integration routines. Instead of automatically doubling the number of subdivisions for a numerical integration rule, we investigate what happens with a systematic method of judiciously selecting one extra subdivision for…

  19. Novel accurate and scalable 3-D MT forward solver based on a contracting integral equation method

    NASA Astrophysics Data System (ADS)

    Kruglyakov, M.; Geraskin, A.; Kuvshinov, A.

    2016-11-01

    We present a novel, open source 3-D MT forward solver based on a method of integral equations (IE) with contracting kernel. Special attention in the solver is paid to accurate calculations of Green's functions and their integrals which are cornerstones of any IE solution. The solver supports massive parallelization and is able to deal with highly detailed and contrasting models. We report results of a 3-D numerical experiment aimed at analyzing the accuracy and scalability of the code.

  20. An Integrative Theory of Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert; Lortie-Forgues, Hugues

    2014-01-01

    Understanding of numerical development is growing rapidly, but the volume and diversity of findings can make it difficult to perceive any coherence in the process. The integrative theory of numerical development posits that a coherent theme is present, however--progressive broadening of the set of numbers whose magnitudes can be accurately…

  1. Numerical solution of boundary-integral equations for molecular electrostatics.

    SciTech Connect

    Bardhan, J.; Mathematics and Computer Science; Rush Univ.

    2009-03-07

    Numerous molecular processes, such as ion permeation through channel proteins, are governed by relatively small changes in energetics. As a result, theoretical investigations of these processes require accurate numerical methods. In the present paper, we evaluate the accuracy of two approaches to simulating boundary-integral equations for continuum models of the electrostatics of solvation. The analysis emphasizes boundary-element method simulations of the integral-equation formulation known as the apparent-surface-charge (ASC) method or polarizable-continuum model (PCM). In many numerical implementations of the ASC/PCM model, one forces the integral equation to be satisfied exactly at a set of discrete points on the boundary. We demonstrate in this paper that this approach to discretization, known as point collocation, is significantly less accurate than an alternative approach known as qualocation. Furthermore, the qualocation method offers this improvement in accuracy without increasing simulation time. Numerical examples demonstrate that the electrostatic part of the solvation free energy, when calculated using the collocation and qualocation methods, can differ significantly; for a polypeptide, the answers can differ by as much as 10 kcal/mol (approximately 4% of the total electrostatic contribution to solvation). The applicability of the qualocation discretization to other integral-equation formulations is also discussed, and two equivalences between integral-equation methods are derived.

  2. Orbital Advection by Interpolation: A Fast and Accurate Numerical Scheme for Super-Fast MHD Flows

    SciTech Connect

    Johnson, B M; Guan, X; Gammie, F

    2008-04-11

    In numerical models of thin astrophysical disks that use an Eulerian scheme, gas orbits supersonically through a fixed grid. As a result the timestep is sharply limited by the Courant condition. Also, because the mean flow speed with respect to the grid varies with position, the truncation error varies systematically with position. For hydrodynamic (unmagnetized) disks an algorithm called FARGO has been developed that advects the gas along its mean orbit using a separate interpolation substep. This relaxes the constraint imposed by the Courant condition, which now depends only on the peculiar velocity of the gas, and results in a truncation error that is more nearly independent of position. This paper describes a FARGO-like algorithm suitable for evolving magnetized disks. Our method is second-order accurate on a smooth flow and preserves ∇ · B = 0 to machine precision. The main restriction is that B must be discretized on a staggered mesh. We give a detailed description of an implementation of the code and demonstrate that it produces the expected results on linear and nonlinear problems. We also point out how the scheme might be generalized to make the integration of other supersonic/super-fast flows more efficient. Although our scheme reduces the variation of truncation error with position, it does not eliminate it. We show that the residual position dependence leads to characteristic radial variations in the density over long integrations.
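    The key FARGO-style idea, splitting the orbital shift into an exact integer-cell move plus a small fractional interpolation, can be sketched for a 1-D periodic array. Linear interpolation is used here for brevity; the paper's scheme is higher-order and additionally preserves the divergence-free constraint on B.

```python
import math

def advect_by_interpolation(u, shift):
    # Shift a periodic profile by `shift` cells: the integer part is an
    # exact circular shift (no truncation error), so only the fractional
    # remainder needs interpolation -- the heart of the FARGO idea.
    n = len(u)
    whole = math.floor(shift)
    frac = shift - whole
    rolled = [u[(j - whole) % n] for j in range(n)]
    return [(1.0 - frac) * rolled[j] + frac * rolled[(j - 1) % n]
            for j in range(n)]

n = 64
u = [math.sin(2.0 * math.pi * j / n) for j in range(n)]
v = advect_by_interpolation(u, 10.25)   # advect by 10.25 cells
exact = [math.sin(2.0 * math.pi * (j - 10.25) / n) for j in range(n)]
max_err = max(abs(a - b) for a, b in zip(v, exact))
```

    Because the interpolated distance is always less than one cell, the truncation error no longer scales with the (large, position-dependent) mean orbital speed.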

  3. Generation of accurate integral surfaces in time-dependent vector fields.

    PubMed

    Garth, Christoph; Krishnan, Han; Tricoche, Xavier; Bobach, Tom; Joy, Kenneth I

    2008-01-01

    We present a novel approach for the direct computation of integral surfaces in time-dependent vector fields. As opposed to previous work, which we analyze in detail, our approach is based on a separation of integral surface computation into two stages: surface approximation and generation of a graphical representation. This allows us to overcome several limitations of existing techniques. We first describe an algorithm for surface integration that approximates a series of time lines using iterative refinement and computes a skeleton of the integral surface. In a second step, we generate a well-conditioned triangulation. Our approach allows a highly accurate treatment of very large time-varying vector fields in an efficient, streaming fashion. We examine the properties of the presented methods on several example datasets and perform a numerical study of its correctness and accuracy. Finally, we investigate some visualization aspects of integral surfaces. PMID:18988990

  4. Accurate object tracking system by integrating texture and depth cues

    NASA Astrophysics Data System (ADS)

    Chen, Ju-Chin; Lin, Yu-Hang

    2016-03-01

    A robust object tracking system that is invariant to object appearance variations and background clutter is proposed. Multiple instance learning with a boosting algorithm is applied to select discriminant texture information between the object and background data. Additionally, depth information, which is important to distinguish the object from a complicated background, is integrated. We propose two depth-based models that can compensate texture information to cope with both appearance variants and background clutter. Moreover, in order to reduce the risk of drift, which increases for textureless depth templates, an update mechanism is proposed to select more precise tracking results and avoid incorrect model updates. In the experiments, the robustness of the proposed system is evaluated and quantitative results are provided for performance analysis. Experimental results show that the proposed system can provide the best success rate and has more accurate tracking results than other well-known algorithms.

  5. Highly Parallel, High-Precision Numerical Integration

    SciTech Connect

    Bailey, David H.; Borwein, Jonathan M.

    2005-04-22

    This paper describes a scheme for rapidly computing numerical values of definite integrals to very high accuracy, ranging from ordinary machine precision to hundreds or thousands of digits, even for functions with singularities or infinite derivatives at endpoints. Such a scheme is of interest not only in computational physics and computational chemistry, but also in experimental mathematics, where high-precision numerical values of definite integrals can be used to numerically discover new identities. This paper discusses techniques for a parallel implementation of this scheme, then presents performance results for 1-D and 2-D test suites. Results are also given for a certain problem from mathematical physics, which features a difficult singularity, confirming a conjecture to 20,000 digit accuracy. The performance rate for this latter calculation on 1024 CPUs is 690 Gflop/s. We believe that this and one other 20,000-digit integral evaluation that we report are the highest-precision non-trivial numerical integrations performed to date.
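    A double-precision sketch of the tanh-sinh idea behind such schemes: the change of variable pushes the endpoints to infinity so fast that even an integrable endpoint singularity is harmless. (Going beyond ~16 digits, as the paper does, requires arbitrary-precision arithmetic; the step size and truncation here are illustrative.)

```python
import math

def tanh_sinh_01(f, h=0.1, n=40):
    # Tanh-sinh rule on (0, 1). The node u is computed directly from the
    # substitution u = 1/(1 + exp(-pi*sinh(t))), which stays accurate
    # near 0 instead of rounding to the endpoint.
    total = 0.0
    for k in range(-n, n + 1):
        s = 0.5 * math.pi * math.sinh(k * h)
        u = 1.0 / (1.0 + math.exp(-2.0 * s))
        w = 0.25 * math.pi * math.cosh(k * h) / math.cosh(s) ** 2
        total += f(u) * w
    return total * h

# integral of 1/sqrt(u) over (0, 1) is exactly 2, despite the singularity at 0
res = tanh_sinh_01(lambda u: 1.0 / math.sqrt(u))
```

    Halving h roughly doubles the number of correct digits, which is why the rule parallelizes and scales so well to extreme precision.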

  6. Numerical multi-loop integrals and applications

    NASA Astrophysics Data System (ADS)

    Freitas, A.

    2016-09-01

    Higher-order radiative corrections play an important role in precision studies of the electroweak and Higgs sector, as well as for the detailed understanding of large backgrounds to new physics searches. For corrections beyond the one-loop level and involving many independent mass and momentum scales, it is in general not possible to find analytic results, so that one needs to resort to numerical methods instead. This article presents an overview of a variety of numerical loop integration techniques, highlighting their range of applicability, suitability for automatization, and numerical precision and stability. In a second part of this article, the application of numerical loop integration methods in the area of electroweak precision tests is illustrated. Numerical methods were essential for obtaining full two-loop predictions for the most important precision observables within the Standard Model. The theoretical foundations for these corrections will be described in some detail, including aspects of the renormalization, resummation of leading log contributions, and the evaluation of the theory uncertainty from missing higher orders.

  7. Numerical methods for engine-airframe integration

    SciTech Connect

    Murthy, S.N.B.; Paynter, G.C.

    1986-01-01

    Various papers on numerical methods for engine-airframe integration are presented. The individual topics considered include: scientific computing environment for the 1980s, overview of prediction of complex turbulent flows, numerical solutions of the compressible Navier-Stokes equations, elements of computational engine/airframe integrations, computational requirements for efficient engine installation, application of CAE and CFD techniques to complete tactical missile design, CFD applications to engine/airframe integration, and application of a second-generation low-order panel methods to powerplant installation studies. Also addressed are: three-dimensional flow analysis of turboprop inlet and nacelle configurations, application of computational methods to the design of large turbofan engine nacelles, comparison of full potential and Euler solution algorithms for aeropropulsive flow field computations, subsonic/transonic, supersonic nozzle flows and nozzle integration, subsonic/transonic prediction capabilities for nozzle/afterbody configurations, three-dimensional viscous design methodology of supersonic inlet systems for advanced technology aircraft, and a user's technology assessment.

  8. Numerical integration for ab initio many-electron self energy calculations within the GW approximation

    NASA Astrophysics Data System (ADS)

    Liu, Fang; Lin, Lin; Vigil-Fowler, Derek; Lischner, Johannes; Kemper, Alexander F.; Sharifzadeh, Sahar; da Jornada, Felipe H.; Deslippe, Jack; Yang, Chao; Neaton, Jeffrey B.; Louie, Steven G.

    2015-04-01

    We present a numerical integration scheme for evaluating the convolution of a Green's function with a screened Coulomb potential on the real axis in the GW approximation of the self energy. Our scheme takes the zero broadening limit in Green's function first, replaces the numerator of the integrand with a piecewise polynomial approximation, and performs principal value integration on subintervals analytically. We give the error bound of our numerical integration scheme and show by numerical examples that it is more reliable and accurate than the standard quadrature rules such as the composite trapezoidal rule. We also discuss the benefit of using different self energy expressions to perform the numerical convolution at different frequencies.
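    The subtraction trick underlying principal-value integration on a subinterval can be shown in isolation. The paper replaces the numerator by piecewise polynomials and integrates the pole analytically; the sketch below only subtracts the pole strength, with an assumed grid size and guard tolerance.

```python
import math

def principal_value(g, a, b, c, n=4000):
    # PV integral of g(x)/(x - c) over (a, b) with c inside (a, b).
    # Subtracting g(c) regularizes the integrand; the subtracted part has
    # the closed form g(c) * log((b - c)/(c - a)).
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h          # midpoint rule: nodes rarely hit x == c
        if abs(x - c) < 1e-9:          # guard: use the limit g'(c) numerically
            total += (g(x + 1e-6) - g(x - 1e-6)) / 2e-6
        else:
            total += (g(x) - g(c)) / (x - c)
    return total * h + g(c) * math.log((b - c) / (c - a))

# PV integral of x^2/(x - 1) over (0, 2) equals 4 exactly
res = principal_value(lambda x: x * x, 0.0, 2.0, 1.0)
```

    The regularized integrand is smooth across the pole, so ordinary quadrature applies; only the analytically known logarithmic piece carries the principal-value cancellation.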

  9. Numerical analysis of Weyl's method for integrating boundary layer equations

    NASA Technical Reports Server (NTRS)

    Najfeld, I.

    1982-01-01

    A fast method for accurate numerical integration of Blasius equation is proposed. It is based on the limit interchange in Weyl's fixed point method formulated as an iterated limit process. Each inner limit represents convergence to a discrete solution. It is shown that the error in a discrete solution admits asymptotic expansion in even powers of step size. An extrapolation process is set up to operate on a sequence of discrete solutions to reach the outer limit. Finally, this method is extended to related boundary layer equations.
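    Because the discrete-solution error admits an expansion in even powers of the step size, Richardson extrapolation applies directly. A generic illustration on the trapezoidal rule (Romberg integration), not the Blasius setup itself:

```python
import math

def trap(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def romberg(f, a, b, n0=8, levels=4):
    # The trapezoidal error expands in even powers of h, so each
    # extrapolation column eliminates the next h^(2k) term.
    T = [[trap(f, a, b, n0 * 2**i)] for i in range(levels)]
    for k in range(1, levels):
        for i in range(levels - k):
            T[i].append(T[i + 1][k - 1]
                        + (T[i + 1][k - 1] - T[i][k - 1]) / (4**k - 1))
    return T[0][-1]

# integral of e^x over [0, 1] is e - 1
res = romberg(math.exp, 0.0, 1.0)
```

    Each column gains two orders of accuracy, so a handful of coarse solutions combine into a result far more accurate than the finest one alone, the same mechanism the extrapolation over discrete Blasius solutions exploits.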

  10. Recommendations for Achieving Accurate Numerical Simulation of Tip Clearance Flows in Transonic Compressor Rotors

    NASA Technical Reports Server (NTRS)

    VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.

    2000-01-01

    The tip clearance flows of transonic compressor rotors are important because they have a significant impact on rotor and stage performance. While numerical simulations of these flows are quite sophisticated. they are seldom verified through rigorous comparisons of numerical and measured data because these kinds of measurements are rare in the detail necessary to be useful in high-speed machines. In this paper we compare measured tip clearance flow details (e.g. trajectory and radial extent) with corresponding data obtained from a numerical simulation. Recommendations for achieving accurate numerical simulation of tip clearance flows are presented based on this comparison. Laser Doppler Velocimeter (LDV) measurements acquired in a transonic compressor rotor, NASA Rotor 35, are used. The tip clearance flow field of this transonic rotor was simulated using a Navier-Stokes turbomachinery solver that incorporates an advanced k-epsilon turbulence model derived for flows that are not in local equilibrium. Comparison between measured and simulated results indicates that simulation accuracy is primarily dependent upon the ability of the numerical code to resolve important details of a wall-bounded shear layer formed by the relative motion between the over-tip leakage flow and the shroud wall. A simple method is presented for determining the strength of this shear layer.

  11. Towards numerically accurate many-body perturbation theory: Short-range correlation effects

    SciTech Connect

    Gulans, Andris

    2014-10-28

    The example of the uniform electron gas is used to show that the short-range electron correlation is difficult to handle numerically, while it contributes noticeably to the self-energy. Nonetheless, in condensed-matter applications studied with advanced methods, such as the GW and random-phase approximations, it is common to neglect contributions due to high-momentum (large q) transfers. The short-range correlation is then poorly described, which leads to inaccurate correlation energies and quasiparticle spectra. To circumvent this problem, an accurate extrapolation scheme is proposed. It is based on an analytical derivation for the uniform electron gas presented in this paper, and it explains why accurate GW quasiparticle spectra are easy to obtain for some compounds and very difficult for others.
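
    The tail-correction idea can be sketched in miniature (this is an illustration of the general principle, not the paper's UEG-based scheme): if an integrand decays like a power law c/q^p beyond the computed range, the missing large-q tail can be restored analytically after fitting c and p to the outermost computed points. The integrand, cutoff, and fit points below are all assumptions.

```python
import numpy as np

p_true = 4.0
f = lambda q: (1.0 + q) ** (-p_true)     # model integrand (assumed)
exact = 1.0 / (p_true - 1.0)             # closed form of the integral over [0, inf)

qmax, n = 20.0, 20000                    # truncation point and grid (assumed)
q = np.linspace(0.0, qmax, n + 1)
y = f(q)
truncated = (qmax / n) * (np.sum(y) - 0.5 * (y[0] + y[-1]))   # trapezoidal rule

# Fit a power law c/q**p through two outer points, then add the analytic
# tail integral from qmax to infinity: c * qmax**(1 - p) / (p - 1).
qa = 15.0
p_fit = -(np.log(f(qmax)) - np.log(f(qa))) / (np.log(qmax) - np.log(qa))
c_fit = f(qmax) * qmax ** p_fit
corrected = truncated + c_fit * qmax ** (1.0 - p_fit) / (p_fit - 1.0)
```

    The plain cutoff misses the entire tail, while the fitted correction recovers most of it at no extra quadrature cost.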

  12. Final Report for "Accurate Numerical Models of the Secondary Electron Yield from Grazing-incidence Collisions".

    SciTech Connect

    Seth A Veitzer

    2008-10-21

    Effects of stray electrons are a main factor limiting performance of many accelerators. Because heavy-ion fusion (HIF) accelerators will operate in regimes of higher current and with walls much closer to the beam than accelerators operating today, stray electrons might have a large, detrimental effect on the performance of an HIF accelerator. A primary source of stray electrons is electrons generated when halo ions strike the beam pipe walls. There is some research on these types of secondary electrons for the HIF community to draw upon, but this work is missing one crucial ingredient: the effect of grazing incidence. The overall goal of this project was to develop the numerical tools necessary to accurately model the effect of grazing incidence on the behavior of halo ions in an HIF accelerator, and further, to provide accurate models of heavy ion stopping powers with applications to ICF, WDM, and HEDP experiments.

  13. Numerical integration of orbits of planetary satellites.

    NASA Astrophysics Data System (ADS)

    Hadjifotinou, K. G.; Harper, D.

    1995-11-01

    The 10th-order Gauss-Jackson backward difference numerical integration method and the Runge-Kutta-Nystroem RKN12(10)17M method were applied to the equations of motion and variational equations of the Saturnian satellite system. We investigated the effect of step-size on the stability of the Gauss-Jackson method in the two distinct cases arising from the inclusion or exclusion of the corrector cycle in the integration of the variational equations. In the predictor-only case, we found that instability occurred when the step-size was greater than approximately 1/76 of the orbital period of the innermost satellite. In the predictor-corrector case, no such instability was observed, but larger step-sizes yielded a significant loss of accuracy. By contrast, the investigation of the Runge-Kutta-Nystroem method showed that it allows much larger step-sizes while still obtaining high-accuracy results, making evident the superiority of the method for the integration of planetary satellite systems.
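
    The step-size sensitivity discussed above can be reproduced on a toy scale with a fixed-step classical fourth-order Runge-Kutta integrator on a circular Kepler orbit (a stand-in; neither Gauss-Jackson nor RKN12(10)17M is implemented here). Halving the step reduces the position error after one period by roughly a factor of 2^4.

```python
import math

def deriv(s):
    # planar Kepler problem with mu = 1: s = (x, y, vx, vy)
    x, y, vx, vy = s
    r3 = math.hypot(x, y) ** 3
    return (vx, vy, -x / r3, -y / r3)

def rk4_period_error(nsteps):
    """Position error after one period of the unit circular orbit."""
    h = 2.0 * math.pi / nsteps
    s = (1.0, 0.0, 0.0, 1.0)        # start on the unit circle, circular speed
    for _ in range(nsteps):
        k1 = deriv(s)
        k2 = deriv(tuple(s[i] + 0.5 * h * k1[i] for i in range(4)))
        k3 = deriv(tuple(s[i] + 0.5 * h * k2[i] for i in range(4)))
        k4 = deriv(tuple(s[i] + h * k3[i] for i in range(4)))
        s = tuple(s[i] + h / 6.0 * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i])
                  for i in range(4))
    return math.hypot(s[0] - 1.0, s[1])

err_coarse = rk4_period_error(100)
err_fine = rk4_period_error(200)    # roughly err_coarse / 16 for a 4th-order method
```
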

  14. Spectrally accurate numerical solution of the single-particle Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Batcho, P. F.

    1998-06-01

    We have formulated a three-dimensional fully numerical (i.e., chemical basis-set free) method and applied it to the solution of the single-particle Schrödinger equation. The numerical method combines the rapid ``exponential'' convergence rates of spectral methods with the geometric flexibility of finite-element methods and can be viewed as an extension of the spectral element method. Singularities associated with multicenter systems are efficiently integrated by a Duffy transformation and the discrete operator is formulated by a variational statement. The method is applicable to molecular modeling for quantum chemical calculations on polyatomic systems. The complete system is shown to be efficiently inverted by the preconditioned conjugate gradient method and exponential convergence rates in numerical approximations are demonstrated for suitable benchmark problems including the hydrogenlike orbitals of nitrogen.

  15. Efficient and accurate numerical methods for the Klein-Gordon-Schroedinger equations

    SciTech Connect

    Bao, Weizhu . E-mail: bao@math.nus.edu.sg; Yang, Li . E-mail: yangli@nus.edu.sg

    2007-08-10

    In this paper, we present efficient, unconditionally stable and accurate numerical methods for approximating the Klein-Gordon-Schroedinger (KGS) equations with/without damping terms. The key features of our methods are: (i) the application of a time-splitting spectral discretization for the Schroedinger-type equation in KGS; (ii) the utilization of a Fourier pseudospectral discretization for spatial derivatives in the Klein-Gordon equation in KGS; (iii) solving the ordinary differential equations (ODEs) in phase space analytically under appropriately chosen transmission conditions between different time intervals, or applying Crank-Nicolson/leap-frog for the linear/nonlinear terms in the time derivatives. The numerical methods are either explicit or implicit but solvable explicitly, unconditionally stable, and of spectral accuracy in space and second-order accuracy in time. Moreover, they are time reversible and time transverse invariant when there are no damping terms in KGS, conserve (or keep the same decay rate of) the wave energy as in KGS without (or with a linear) damping term, keep the same dynamics of the mean value of the meson field, and give exact results for the plane-wave solution. Extensive numerical tests are presented to confirm the above properties of our numerical methods for KGS. Finally, the methods are applied to study solitary-wave collisions in one dimension (1D), as well as the dynamics of a 2D problem in KGS.
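
    The splitting strategy in (i)-(ii) can be sketched on a simpler relative of KGS, the 1-D linear Schroedinger equation i u_t = -u_xx/2 + V(x) u: a half step with the potential in physical space, a full kinetic step done exactly in Fourier space, and another half step with the potential (Strang splitting). Every factor has unit modulus, so the discrete wave energy is conserved unconditionally. Grid, potential, and time step below are assumptions for illustration, not the paper's settings.

```python
import numpy as np

N, L = 128, 16.0                       # grid points and domain length (assumed)
x = (np.arange(N) - N // 2) * (L / N)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
V = 0.5 * x ** 2                       # harmonic potential (assumed)
u = np.exp(-x ** 2).astype(complex)    # Gaussian initial condition
dt, nsteps = 1e-2, 200

expV = np.exp(-0.5j * dt * V)          # half step with the potential
expT = np.exp(-0.5j * dt * k ** 2)     # full kinetic step, exact in Fourier space
mass0 = np.sum(np.abs(u) ** 2)
for _ in range(nsteps):
    u = expV * u                              # V/2
    u = np.fft.ifft(expT * np.fft.fft(u))     # T
    u = expV * u                              # V/2
mass = np.sum(np.abs(u) ** 2)
```

    The wave energy is preserved to round-off regardless of dt, which is the "unconditionally stable" property claimed in the abstract.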

  16. Keyword Search over Data Service Integration for Accurate Results

    NASA Astrophysics Data System (ADS)

    Zemleris, Vidmantas; Kuznetsov, Valentin; Gwadera, Robert

    2014-06-01

    Virtual Data Integration provides a coherent interface for querying heterogeneous data sources (e.g., web services, proprietary systems) with minimum upfront effort. Still, this requires its users to learn a new query language and to get acquainted with data organization which may pose problems even to proficient users. We present a keyword search system, which proposes a ranked list of structured queries along with their explanations. It operates mainly on the metadata, such as the constraints on inputs accepted by services. It was developed as an integral part of the CMS data discovery service, and is currently available as open source.

  17. Accurate Anharmonic IR Spectra from Integrated Cc/dft Approach

    NASA Astrophysics Data System (ADS)

    Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien; Carnimeo, Ivan; Puzzarini, Cristina

    2014-06-01

    The recent implementation of the computation of infrared (IR) intensities beyond the double harmonic approximation [1] paved the way to routine calculations of infrared spectra for a wide set of molecular systems. Contrary to common belief, second-order perturbation theory is able to deliver results of high accuracy provided that anharmonic resonances are properly managed [1,2]. It has already been shown for several small closed- and open-shell molecular systems that the differences between coupled cluster (CC) and DFT anharmonic wavenumbers are mainly due to the harmonic terms, paving the way to effective yet accurate hybrid CC/DFT schemes [2]. In this work we show that hybrid CC/DFT models can be applied also to the IR intensities, leading to the simulation of highly accurate, fully anharmonic IR spectra for medium-size molecules, including ones of atmospheric interest, showing in all cases good agreement with experiment even in the spectral ranges where non-fundamental transitions are predominant [3]. [1] J. Bloino and V. Barone, J. Chem. Phys. 136, 124108 (2012) [2] V. Barone, M. Biczysko, J. Bloino, Phys. Chem. Chem. Phys., 16, 1759-1787 (2014) [3] I. Carnimeo, C. Puzzarini, N. Tasinato, P. Stoppa, A. P. Charmet, M. Biczysko, C. Cappelli and V. Barone, J. Chem. Phys., 139, 074310 (2013)

  18. Method for the numerical integration of equations of perturbed satellite motion in problems of space geodesy

    NASA Astrophysics Data System (ADS)

    Plakhov, Iu. V.; Mytsenko, A. V.; Shel'Pov, V. A.

    A numerical integration method is developed that is more accurate than Everhart's (1974) implicit single-sequence approach for integrating orbits. This method can be used to solve problems of space geodesy based on the use of highly precise laser observations.

  19. Numerical integration of asymptotic solutions of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Thurston, Gaylen A.

    1989-01-01

    Classical asymptotic analysis of ordinary differential equations derives approximate solutions that are numerically stable. However, the analysis also leads to tedious expansions in powers of the relevant parameter for a particular problem. The expansions are replaced with integrals that can be evaluated by numerical integration. The resulting numerical solutions retain the linear independence that is the main advantage of asymptotic solutions. Examples, including the Falkner-Skan equation from laminar boundary layer theory, illustrate the method of asymptotic analysis with numerical integration.

  20. An improvement in the numerical integration procedure used in the NASA Marshall engineering thermosphere model

    NASA Technical Reports Server (NTRS)

    Hickey, Michael Philip

    1988-01-01

    A proposed replacement scheme for the integration of the barometric and diffusion equations in the NASA Marshall Engineering Thermosphere (MET) model is presented. This proposed integration scheme is based on Gaussian Quadrature. Extensive numerical testing reveals it to be faster, more accurate and more reliable than the present integration scheme (a modified form of Simpson's Rule) used in the MET model. Numerous graphical examples are provided, along with a listing of a modified form of the MET model in which subroutine INTEGRATE (using Simpson's Rule) is replaced by subroutine GAUSS (which uses Gaussian Quadrature). It is recommended that the Gaussian Quadrature integration scheme, as used here, be used in the MET model.
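
    The substitution can be illustrated generically: for a smooth, exponentially varying integrand of the kind that appears in barometric and diffusion integrals, a fixed low-order Gauss-Legendre rule outperforms Simpson's rule at comparable cost. The integrand and scale height below are stand-ins, not the MET model's.

```python
import numpy as np

def simpson(f, a, b, n):
    """Composite Simpson's rule on n (even) subintervals."""
    x = np.linspace(a, b, n + 1)
    w = np.ones(n + 1)
    w[1:-1:2] = 4.0
    w[2:-1:2] = 2.0
    return (b - a) / (3.0 * n) * np.dot(w, f(x))

def gauss_legendre(f, a, b, npts):
    """npts-point Gauss-Legendre rule mapped from [-1, 1] to [a, b]."""
    t, w = np.polynomial.legendre.leggauss(npts)
    x = 0.5 * (b - a) * t + 0.5 * (a + b)
    return 0.5 * (b - a) * np.dot(w, f(x))

f = lambda z: np.exp(-z / 50.0)      # scale-height-like decay (assumed)
a, b = 0.0, 100.0
exact = 50.0 * (1.0 - np.exp(-2.0))  # closed form for comparison
err_simpson = abs(simpson(f, a, b, 8) - exact)
err_gauss = abs(gauss_legendre(f, a, b, 5) - exact)
```

    Five Gauss-Legendre evaluations beat nine Simpson evaluations by several orders of magnitude here, consistent with the speed and accuracy gains the report describes.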

  1. A novel numerical technique to obtain an accurate solution to the Thomas-Fermi equation

    NASA Astrophysics Data System (ADS)

    Parand, Kourosh; Yousefi, Hossein; Delkhosh, Mehdi; Ghaderi, Amin

    2016-07-01

    In this paper, a new algorithm based on the fractional order of rational Euler functions (FRE) is introduced to study the Thomas-Fermi (TF) model, a nonlinear singular ordinary differential equation on a semi-infinite interval. Using the quasilinearization method (QLM), the problem is converted into a sequence of linear ordinary differential equations. For the first time, the rational Euler (RE) and FRE functions are constructed from Euler polynomials. In addition, the equation is solved on the semi-infinite domain without truncation to a finite domain, by taking the FRE as basis functions for the collocation method. This reduces the solution of the problem to the solution of a system of algebraic equations. We demonstrate that the new algorithm is efficient for obtaining the values of y'(0), y(x) and y'(x). Comparison with some numerical and analytical solutions shows that the present solution is highly accurate.

  2. Recommendations for accurate numerical blood flow simulations of stented intracranial aneurysms.

    PubMed

    Janiga, Gábor; Berg, Philipp; Beuing, Oliver; Neugebauer, Mathias; Gasteiger, Rocco; Preim, Bernhard; Rose, Georg; Skalej, Martin; Thévenin, Dominique

    2013-06-01

    The number of scientific publications dealing with stented intracranial aneurysms is rapidly increasing. Powerful computational facilities are now available; an accurate computational modeling of hemodynamics in patient-specific configurations is, however, still being sought. Furthermore, there is still no general agreement on the quantities that should be computed and on the most adequate analysis for intervention support. In this article, the accurate representation of patient geometry is first discussed, involving successive improvements. Concerning the second step, the mesh required for the numerical simulation is especially challenging when deploying a stent with very fine wire structures. Third, the description of the fluid properties is a major challenge. Finally, a well-founded quantitative analysis of the simulation results is obviously needed to support interventional decisions. In the present work, an attempt has been made to review the most important steps for a high-quality computational fluid dynamics computation of virtually stented intracranial aneurysms. Consequently, this leads to concrete recommendations, whereby the obtained results are discussed not for their medical relevance but for the evaluation of their quality. This investigation will hopefully be helpful for further studies considering stent deployment in patient-specific geometries, in particular regarding the generation of the most appropriate computational model. PMID:23729530

  3. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    PubMed

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases). PMID:26430979

  4. Keeping the edge: an accurate numerical method to solve the stream power law

    NASA Astrophysics Data System (ADS)

    Campforts, B.; Govers, G.

    2015-12-01

    Bedrock rivers set the base level of surrounding hill slopes and mediate the dynamic interplay between mountain building and denudation. The propensity of rivers to preserve pulses of increased tectonic uplift also makes it possible to reconstruct long-term uplift histories from longitudinal river profiles. An accurate reconstruction of river profile development at different timescales is therefore essential. Long-term river development is typically modeled by means of the stream power law. Under specific conditions this equation can be solved analytically, but numerical Finite Difference Methods (FDMs) are most frequently used. Nonetheless, FDMs suffer from numerical smearing, especially at knickpoint zones, which are key to understanding transient landscapes. Here, we solve the stream power law by means of a Finite Volume Method (FVM) which is Total Variation Diminishing (TVD). TVD methods are designed to simulate sharp discontinuities, making them well suited to modeling river incision. In contrast to FDMs, the TVD_FVM is well capable of preserving knickpoints, as illustrated for the fast-propagating Niagara Falls. Moreover, we show that the TVD_FVM performs much better when reconstructing uplift at timescales exceeding 100 Myr, using Eastern Australia as an example. Finally, uncertainty associated with parameter calibration is dramatically reduced when the TVD_FVM is applied. Therefore, the use of a TVD_FVM to understand long-term landscape evolution is an important addition to the toolbox at the disposal of geomorphologists.
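
    The contrast between a smearing FDM-style scheme and a TVD scheme can be seen on a minimal 1-D linear advection problem (illustrative only: the stream power law has a nonlinear, slope-dependent celerity). A minmod-limited MUSCL finite volume update keeps a migrating step, i.e. a "knickpoint", much sharper than first-order upwind, while its total variation never grows.

```python
import numpy as np

def advect(u0, cfl, nsteps, limited):
    """Advance u_t + u_x = 0 on a periodic grid with a MUSCL finite volume
    scheme; limited=False degenerates to first-order upwind."""
    u = u0.copy()
    for _ in range(nsteps):
        if limited:
            dl = u - np.roll(u, 1)
            dr = np.roll(u, -1) - u
            slope = np.where(dl * dr > 0.0,
                             np.sign(dl) * np.minimum(np.abs(dl), np.abs(dr)),
                             0.0)                 # minmod slope limiter
        else:
            slope = np.zeros_like(u)
        face = u + 0.5 * (1.0 - cfl) * slope      # value at the right cell face
        u = u - cfl * (face - np.roll(face, 1))   # conservative update
    return u

n = 200
u0 = np.where((np.arange(n) >= 50) & (np.arange(n) < 100), 1.0, 0.0)
sharp = advect(u0, 0.5, 100, limited=True)
smeared = advect(u0, 0.5, 100, limited=False)
width = lambda u: int(np.sum((u > 0.05) & (u < 0.95)))   # cells inside the fronts
tv = float(np.sum(np.abs(sharp - np.roll(sharp, 1))))    # total variation
```

    After 100 steps the limited scheme confines each front to a few cells, whereas upwind spreads it over roughly three times as many; the total variation of the limited solution stays bounded by its initial value of 2.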

  6. PolyPole-1: An accurate numerical algorithm for intra-granular fission gas release

    NASA Astrophysics Data System (ADS)

    Pizzocri, D.; Rabiti, C.; Luzzi, L.; Barani, T.; Van Uffelen, P.; Pastore, G.

    2016-09-01

    The transport of fission gas from within the fuel grains to the grain boundaries (intra-granular fission gas release) is a fundamental controlling mechanism of fission gas release and gaseous swelling in nuclear fuel. Hence, accurate numerical solution of the corresponding mathematical problem needs to be included in fission gas behaviour models used in fuel performance codes. Under the assumption of equilibrium between trapping and resolution, the process can be described mathematically by a single diffusion equation for the gas atom concentration in a grain. In this paper, we propose a new numerical algorithm (PolyPole-1) to efficiently solve the fission gas diffusion equation in time-varying conditions. The PolyPole-1 algorithm is based on the analytic modal solution of the diffusion equation for constant conditions, combined with polynomial corrective terms that embody the information on the deviation from constant conditions. The new algorithm is verified by comparing the results to a finite difference solution over a large number of randomly generated operation histories. Furthermore, comparison to state-of-the-art algorithms used in fuel performance codes demonstrates that the accuracy of PolyPole-1 is superior to other algorithms, with similar computational effort. Finally, the concept of PolyPole-1 may be extended to the solution of the general problem of intra-granular fission gas diffusion during non-equilibrium trapping and resolution, which will be the subject of future work.
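
    For constant conditions, the analytic modal solution underlying PolyPole-1 reduces, for the fractional release from an idealized spherical grain, to a fast eigenfunction series. The sketch below evaluates that series and checks it against the classical Booth short-time asymptote; it is background for the algorithm, not PolyPole-1 itself, and the dimensionless time is an assumed value.

```python
import math

def release_modal(tau, nterms=400):
    """Fractional release from a sphere under constant conditions:
    f(tau) = 1 - (6/pi**2) * sum_n exp(-n**2 * pi**2 * tau) / n**2,
    with tau = D*t/a**2 the dimensionless time."""
    s = sum(math.exp(-(n * math.pi) ** 2 * tau) / (n * n)
            for n in range(1, nterms + 1))
    return 1.0 - (6.0 / math.pi ** 2) * s

tau = 1.0e-3                                         # assumed value of D*t/a**2
booth = 6.0 * math.sqrt(tau / math.pi) - 3.0 * tau   # short-time asymptote
f = release_modal(tau)
```

    At small tau the modal series and the asymptote agree to round-off, which is what makes modal solutions a good base onto which time-varying corrections can be grafted.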

  7. Numerical Computation of a Continuous-thrust State Transition Matrix Incorporating Accurate Hardware and Ephemeris Models

    NASA Technical Reports Server (NTRS)

    Ellison, Donald; Conway, Bruce; Englander, Jacob

    2015-01-01

    A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model is used, one that might include perturbing forces such as the gravitational effect of multiple third bodies and solar radiation pressure, then these STMs must be computed numerically. We present a method that incorporates a power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.

  8. Towards an accurate understanding of UHMWPE visco-dynamic behaviour for numerical modelling of implants.

    PubMed

    Quinci, Federico; Dressler, Matthew; Strickland, Anthony M; Limbert, Georges

    2014-04-01

    Considerable progress has been made in understanding implant wear and developing numerical models to predict wear for new orthopaedic devices. However, any model of wear could be improved through a more accurate representation of the biomaterial mechanics, including time-varying dynamic and inelastic behaviour such as viscosity and plastic deformation. In particular, most computational models of wear of UHMWPE implement a time-invariant version of Archard's law that links the volume of worn material to the contact pressure between the metal implant and the polymeric tibial insert. During in-vivo conditions, however, the contact area is a time-varying quantity and is therefore dependent upon the dynamic deformation response of the material. From this observation one can conclude that creep deformations of UHMWPE may be very important to consider when conducting computational wear analyses, in stark contrast to what can be found in the literature. In this study, different numerical modelling techniques are compared with experimental creep testing on a unicondylar knee replacement system in a physiologically representative context. Linear elastic, plastic and time-varying visco-dynamic models are benchmarked using literature data to predict contact deformations, pressures and areas. The aim of this study is to elucidate the contributions of viscoelastic and plastic effects on these surface quantities. It is concluded that creep deformations have a significant effect on the contact pressure measured (experiment) and calculated (computational models) at the surface of the UHMWPE unicondylar insert. The use of a purely elastoplastic constitutive model for UHMWPE leads to compressive deformations of the insert which are much smaller than those predicted by a creep-capturing viscoelastic model (and those measured experimentally). This again shows the importance of including creep behaviour in a constitutive model in order to predict the right level of surface deformation.


  10. Numerical solution of integral-algebraic equations for multistep methods

    NASA Astrophysics Data System (ADS)

    Budnikova, O. S.; Bulatov, M. V.

    2012-05-01

    Systems of Volterra linear integral equations with identically singular matrices in the principal part (called integral-algebraic equations) are examined. Multistep methods for the numerical solution of a selected class of such systems are proposed and justified.

  11. Accurate and efficient Nyström volume integral equation method for the Maxwell equations for multiple 3-D scatterers

    NASA Astrophysics Data System (ADS)

    Chen, Duan; Cai, Wei; Zinser, Brian; Cho, Min Hyung

    2016-09-01

    In this paper, we develop an accurate and efficient Nyström volume integral equation (VIE) method for the Maxwell equations for a large number of 3-D scatterers. The Cauchy Principal Values that arise from the VIE are computed accurately using a finite size exclusion volume together with explicit correction integrals consisting of removable singularities. Also, the hyper-singular integrals are computed using interpolated quadrature formulae with tensor-product quadrature nodes for cubes, spheres and cylinders, that are frequently encountered in the design of meta-materials. The resulting Nyström VIE method is shown to have high accuracy with a small number of collocation points and demonstrates p-convergence for computing the electromagnetic scattering of these objects. Numerical calculations of multiple scatterers of cubic, spherical, and cylindrical shapes validate the efficiency and accuracy of the proposed method.

  12. Quantum Calisthenics: Gaussians, The Path Integral and Guided Numerical Approximations

    SciTech Connect

    Weinstein, Marvin; /SLAC

    2009-02-12

    It is apparent to anyone who thinks about it that, to a large degree, the basic concepts of Newtonian physics are quite intuitive, but quantum mechanics is not. My purpose in this talk is to introduce you to a new, much more intuitive way to understand how quantum mechanics works. I begin with an incredibly easy way to derive the time evolution of a Gaussian wave-packet for the case of free and harmonic motion without any need to know the eigenstates of the Hamiltonian. This discussion is completely analytic and I will later use it to relate the solution for the behavior of the Gaussian packet to the Feynman path-integral and stationary phase approximation. It will be clear that using the information about the evolution of the Gaussian in this way goes far beyond what the stationary phase approximation tells us. Next, I introduce the concept of the bucket brigade approach to dealing with problems that cannot be handled totally analytically. This approach combines the intuition obtained in the initial discussion, as well as the intuition obtained from the path-integral, with simple numerical tools. My goal is to show that, for any specific process, there is a simple Hilbert space interpretation of the stationary phase approximation. I will then argue that, from the point of view of numerical approximations, the trajectory obtained from my generalization of the stationary phase approximation specifies that subspace of the full Hilbert space that is needed to compute the time evolution of the particular state under the full Hamiltonian. The prescription I will give is totally non-perturbative and we will see, by the grace of Maple animations computed for the case of the anharmonic oscillator Hamiltonian, that this approach allows surprisingly accurate computations to be performed with very little work. I think of this approach to the path-integral as defining what I call a guided numerical approximation scheme. After the discussion of the anharmonic oscillator I will

  13. Numerical quadrature methods for integrals of singular periodic functions and their application to singular and weakly singular integral equations

    NASA Technical Reports Server (NTRS)

    Sidi, A.; Israeli, M.

    1986-01-01

    High accuracy numerical quadrature methods for integrals of singular periodic functions are proposed. These methods are based on the appropriate Euler-Maclaurin expansions of trapezoidal rule approximations and their extrapolations. They are used to obtain accurate quadrature methods for the solution of singular and weakly singular Fredholm integral equations. Such periodic equations are used in the solution of planar elliptic boundary value problems, elasticity, potential theory, conformal mapping, boundary element methods, free surface flows, etc. The use of the quadrature methods is demonstrated with numerical examples.
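
    A one-line corollary of the Euler-Maclaurin expansion these methods start from: for a smooth periodic integrand over a full period, all correction terms cancel and the plain trapezoidal rule converges geometrically rather than at its generic O(h^2) rate. The integrand below is an assumed example with a closed-form integral.

```python
import math

def periodic_trapezoid(f, n):
    """Trapezoidal rule over one period [0, 2*pi) on n equal panels; for
    periodic f the endpoint and interior weights coincide."""
    return (2.0 * math.pi / n) * sum(f(2.0 * math.pi * i / n) for i in range(n))

f = lambda x: 1.0 / (2.0 + math.cos(x))   # smooth and 2*pi-periodic (assumed)
exact = 2.0 * math.pi / math.sqrt(3.0)    # closed form: 2*pi/sqrt(a*a - 1), a = 2
errs = [abs(periodic_trapezoid(f, n) - exact) for n in (8, 16, 32)]
```

    Doubling n roughly squares the accuracy here; the quadrature methods of the abstract recover this behaviour for singular integrands by subtracting the singular part before applying the rule.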

  14. A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction

    NASA Technical Reports Server (NTRS)

    Bockelie, Michael J.; Eiseman, Peter R.

    1990-01-01

    A time accurate, general purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction correction method that is simple to implement and ensures the time accuracy of the grid. Time accurate solutions of the 2-D Euler equations for an unsteady shock vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.

  15. Accurate numerical forward model for optimal retracking of SIRAL2 SAR echoes over open ocean

    NASA Astrophysics Data System (ADS)

    Phalippou, L.; Demeestere, F.

    2011-12-01

    The SAR mode of SIRAL-2 on board CryoSat-2 has been designed to measure primarily sea-ice and continental ice (Wingham et al. 2005). In 2005, K. Raney (KR, 2005) pointed out the improvements brought by SAR altimetry for the open ocean. KR's results were mostly based on 'rule of thumb' considerations of speckle noise reduction due to the higher PRF and to speckle decorrelation after SAR processing. In 2007, Phalippou and Enjolras (PE, 2007) provided the theoretical background for optimal retracking of SAR echoes over ocean, with a focus on the forward modelling of the power waveforms. The accuracies of geophysical parameters (range, significant wave height, and backscattering coefficient) retrieved from SAR altimeter data were derived accounting for accurate modelling of the SAR echo shape and speckle noise. The step forward to optimal retracking using a numerical forward model (NFM) was also pointed out. An NFM of the power waveform avoids analytical approximations, a guarantee of minimising geophysics-dependent biases in the retrieval. NFMs have been used for many years, in operational meteorology in particular, for retrieving temperature and humidity profiles from IR and microwave radiometers, as the radiative transfer function is complex (Eyre, 1989). So far this technique has not been used in the field of conventional ocean altimetry, as analytical models (e.g. Brown's model) were found to give sufficient accuracy. However, although an NFM seems desirable even for conventional nadir altimetry, it becomes inevitable if one wishes to process SAR altimeter data, as the transfer function is too complex to be approximated by a simple analytical function. This was clearly demonstrated in PE 2007. The paper describes the background to SAR data retracking over the open ocean. Since PE 2007, improvements have been brought to the forward model, and it is shown that the altimeter on-ground and in-flight characterisation (e.g. antenna pattern, range impulse response, azimuth impulse response

  16. Physical and Numerical Model Studies of Cross-flow Turbines Towards Accurate Parameterization in Array Simulations

    NASA Astrophysics Data System (ADS)

    Wosnik, M.; Bachant, P.

    2014-12-01

    Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today--a common model is the actuator disk concept--are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal axis devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier--Stokes simulations, to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of

  17. Numerical integration of ordinary differential equations of various orders

    NASA Technical Reports Server (NTRS)

    Gear, C. W.

    1969-01-01

    Report describes techniques for the numerical integration of differential equations of various orders. Modified multistep predictor-corrector methods for general initial-value problems are discussed and new methods are introduced.
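    The record above concerns multistep predictor-corrector methods. As a minimal illustration of the predictor-corrector idea (an explicit two-step Adams-Bashforth predictor paired with a trapezoidal Adams-Moulton corrector; this is a generic sketch of my own, not Gear's specific variable-order method):

    ```python
    # Two-step Adams-Bashforth predictor / Adams-Moulton (trapezoidal) corrector
    # for y' = f(t, y). Illustrative sketch only, not Gear's variable-order scheme.
    def predictor_corrector(f, t0, y0, h, n_steps):
        ts = [t0]
        ys = [y0]
        # Bootstrap the two-step method with one Heun (RK2) step.
        k1 = f(t0, y0)
        k2 = f(t0 + h, y0 + h * k1)
        ts.append(t0 + h)
        ys.append(y0 + h * (k1 + k2) / 2.0)
        for i in range(1, n_steps):
            t, y = ts[i], ys[i]
            f_n = f(t, y)
            f_nm1 = f(ts[i - 1], ys[i - 1])
            # Predict with the explicit two-step Adams-Bashforth formula...
            y_pred = y + h * (3.0 * f_n - f_nm1) / 2.0
            # ...then correct with the implicit trapezoidal rule, evaluating
            # the implicit term at the predicted value (PECE mode).
            ts.append(t + h)
            ys.append(y + h * (f(t + h, y_pred) + f_n) / 2.0)
        return ts, ys
    ```

    For the test problem y' = -y, y(0) = 1, this second-order scheme reproduces exp(-t) to a few digits at modest step sizes.
    
    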

  18. A Numerical Study of Hypersonic Forebody/Inlet Integration Problem

    NASA Technical Reports Server (NTRS)

    Kumar, Ajay

    1991-01-01

    A numerical study of the hypersonic forebody/inlet integration problem is presented in the form of view-graphs. The following topics are covered: physical/chemical modeling; solution procedure; flow conditions; mass flow rate at the inlet face; heating and skin friction loads; 3-D forebody/inlet integration model; and sensitivity studies.

  19. Numerical Integration of Elastoviscoplasticity Model with Stiff Hardening and Softening

    SciTech Connect

    Vorobiev, O.Y.; Lomov, I.N; Glenn, L.A.; Rubin, M.B.

    2000-02-01

    The constitutive equations for viscoplasticity typically are stiff differential equations and require special numerical methods to integrate them efficiently. The objective of this paper is to propose a class of rate-dependent viscoplastic constitutive equations which can be integrated by an efficient explicit scheme that includes the first order effect of pressure and plastic strain hardening.

  20. Third-order-accurate numerical methods for efficient, large time-step solutions of mixed linear and nonlinear problems

    SciTech Connect

    Cobb, J.W.

    1995-02-01

    There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
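    The record above focuses on third-order Runge-Kutta methods. One well-known member of that family is Kutta's classic third-order scheme, sketched here (the paper derives a general family with five examples; this particular tableau is a standard textbook choice, not necessarily one of the paper's):

    ```python
    # Kutta's classic third-order Runge-Kutta method for y' = f(t, y).
    def rk3_step(f, t, y, h):
        k1 = f(t, y)
        k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
        k3 = f(t + h, y + h * (2.0 * k2 - k1))
        # Weights 1/6, 2/3, 1/6 give third-order local accuracy.
        return y + h * (k1 + 4.0 * k2 + k3) / 6.0

    def rk3(f, t0, y0, h, n):
        t, y = t0, y0
        for _ in range(n):
            y = rk3_step(f, t, y, h)
            t += h
        return y
    ```

    On the linear test equation y' = -y this reproduces exp(-t) with global error O(h^3), which is easy to verify by halving h.
    
    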

  1. AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)

    EPA Science Inventory

    Abstract

    A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...

  2. Numerical solution of optimal control problems using multiple-interval integral Gegenbauer pseudospectral methods

    NASA Astrophysics Data System (ADS)

    Tang, Xiaojun

    2016-04-01

    The main purpose of this work is to provide multiple-interval integral Gegenbauer pseudospectral methods for solving optimal control problems. The recently developed single-interval integral Gauss/(flipped Radau) pseudospectral methods can be viewed as special cases of the proposed methods. We present an exact and efficient approach to compute the mesh pseudospectral integration matrices for the Gegenbauer-Gauss and flipped Gegenbauer-Gauss-Radau points. Numerical results on benchmark optimal control problems confirm the ability of the proposed methods to obtain highly accurate solutions.
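    The Gegenbauer points in the record above generalize the Legendre special case. As a tiny self-contained illustration of the underlying Gauss-type quadrature, here is the three-point Gauss-Legendre rule on [-1, 1] with hard-coded nodes and weights (standard tabulated values, exact for polynomials up to degree 5; this is background, not the paper's method):

    ```python
    import math

    # Three-point Gauss-Legendre rule: nodes +/- sqrt(3/5) and 0,
    # weights 5/9, 8/9, 5/9; exact for polynomials of degree <= 5.
    GL3_NODES = (-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0))
    GL3_WEIGHTS = (5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0)

    def gauss_legendre_3(f, a, b):
        # Affine map from the reference interval [-1, 1] to [a, b].
        half = 0.5 * (b - a)
        mid = 0.5 * (a + b)
        return half * sum(w * f(mid + half * x)
                          for x, w in zip(GL3_NODES, GL3_WEIGHTS))
    ```

    For example, the rule integrates x^4 over [0, 1] exactly (0.2), since the degree does not exceed 5.
    
    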

  3. Accurate path integration in continuous attractor network models of grid cells.

    PubMed

    Burak, Yoram; Fiete, Ila R

    2009-02-01

    Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of approximately 10-100 meters and approximately 1-10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other. PMID:19229307

  4. Numerical integration of discontinuities on arbitrary domains based on moment fitting

    NASA Astrophysics Data System (ADS)

    Joulaian, Meysam; Hubrich, Simeon; Düster, Alexander

    2016-06-01

    Discretization methods based on meshes that do not conform to the geometry of the problem under consideration require special treatment when it comes to the integration of finite elements that are broken by the boundary or internal interfaces. To this end, we propose a numerical approach suitable for integrating broken elements with a low number of integration points. In this method, which is based on the moment fitting approach, an individual quadrature rule is set up for each cut element. The approach requires a B-rep representation of the broken element, which can be achieved either by processing a triangulated surface obtained from a CAD software or by taking advantage of a voxel model resulting from computed tomography. The numerical examples presented in this paper reveal that the proposed method delivers very accurate results for a wide variety of geometrical situations and requires a rather low number of integration points.

  5. Numerical integration of systems of delay differential-algebraic equations

    NASA Astrophysics Data System (ADS)

    Kuznetsov, E. B.; Mikryukov, V. N.

    2007-01-01

    The numerical solution of the initial value problem for a system of delay differential-algebraic equations is examined in the framework of the parametric continuation method. Necessary and sufficient conditions are obtained for transforming this problem to the best argument, which ensures the best condition for the corresponding system of continuation equations. The best argument is the arc length along the integral curve of the problem. Algorithms and programs based on the continuous and discrete continuation methods are developed for the numerical integration of this problem. The efficiency of the suggested transformation is demonstrated using test examples.

  6. Numerical Methodology for Coupled Time-Accurate Simulations of Primary and Secondary Flowpaths in Gas Turbines

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.

    2006-01-01

    Detailed information on the flow-fields in the secondary flowpaths and their interaction with the primary flows in gas turbine engines is necessary for successful designs with optimized secondary flow streams. The present work is focused on the development of a simulation methodology for coupled time-accurate solutions of the two flowpaths. The secondary flowstream is treated using SCISEAL, an unstructured adaptive Cartesian grid code developed for secondary flows and seals, while the main-path flow is solved using TURBO, a density-based code with the capability of resolving rotor-stator interaction in multi-stage machines. An interface is being tested that links the two codes at the rim seal to allow data exchange between them for parallel, coupled execution. A description of the coupling methodology and the current status of the interface development are presented. Representative steady-state solutions of the secondary flow in the UTRC HP Rig disc cavity are also presented.

  7. Towards more accurate numerical modeling of impedance based high frequency harmonic vibration

    NASA Astrophysics Data System (ADS)

    Lim, Yee Yan; Kiong Soh, Chee

    2014-03-01

    The application of smart materials in various fields of engineering has recently become increasingly popular. For instance, the high frequency based electromechanical impedance (EMI) technique employing smart piezoelectric materials is found to be versatile in structural health monitoring (SHM). Thus far, considerable efforts have been made to study and improve the technique. Various theoretical models of the EMI technique have been proposed in an attempt to better understand its behavior. So far, the three-dimensional (3D) coupled field finite element (FE) model has proved to be the most accurate. However, large discrepancies between the results of the FE model and experimental tests, especially in terms of the slope and magnitude of the admittance signatures, continue to exist and are yet to be resolved. This paper presents a series of parametric studies using the 3D coupled field finite element method (FEM) on all properties of materials involved in the lead zirconate titanate (PZT) structure interaction of the EMI technique, to investigate their effect on the admittance signatures acquired. FE model updating is then performed by adjusting the parameters to match the experimental results. One of the main reasons for the lower accuracy, especially in terms of magnitude and slope, of previous FE models is the difficulty in determining the damping related coefficients and the stiffness of the bonding layer. In this study, using the hysteretic damping model in place of Rayleigh damping, which is used by most researchers in this field, and updated bonding stiffness, an improved and more accurate FE model is achieved. The results of this paper are expected to be useful for future study of the subject area in terms of research and application, such as modeling, design and optimization.

  8. Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo methods.
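    The classical Monte Carlo baseline referred to in the abstract, whose O(1/sqrt(N)) error the quantum approach improves upon quadratically, can be sketched in a few lines (a generic illustration of my own, not the paper's algorithm):

    ```python
    import random

    # Classical Monte Carlo estimate of the integral of f over [0, 1].
    # The standard error shrinks like 1/sqrt(n_samples); quantum amplitude
    # estimation achieves the same accuracy with quadratically fewer queries.
    def mc_integrate(f, n_samples, seed=0):
        rng = random.Random(seed)  # seeded for reproducibility
        total = sum(f(rng.random()) for _ in range(n_samples))
        return total / n_samples
    ```

    With 10^5 samples the estimate of the integral of x^2 over [0, 1] (exact value 1/3) is typically accurate to about three decimal places.
    
    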

  9. Fast and accurate numerical method for predicting gas chromatography retention time.

    PubMed

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-08-01

    Predictive modeling of gas chromatography compound retention depends on the retention factor (ki) and on the flow of the mobile phase. Thus, different approaches for determining an analyte's ki in column chromatography have been developed. The main one is based on the thermodynamic properties of the component and on the characteristics of the stationary phase. These models can be used to estimate the parameters and to optimize temperature programming in gas chromatography for the separation of compounds. Different authors have proposed numerical methods for solving these models, but those methods demand greater computational time. Hence, a new method for solving the predictive modeling of analyte retention time is presented. This algorithm is an alternative to traditional methods because it recasts the computation as root-determination problems within defined intervals. The proposed approach allows retention time (tr) calculation with accuracy determined by the user of the method and a significant reduction in computational time; it can also be used to evaluate the performance of other prediction methods.
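    The abstract's key idea, recasting the retention-time prediction as a root-determination problem on a bracketed interval, can be illustrated with the simplest such solver, bisection (the residual function used in the test is a stand-in, not the paper's chromatography model):

    ```python
    # Bisection root finder: locate x in [lo, hi] with g(x) = 0, given that
    # g changes sign on the interval. The interval halves each iteration.
    def bisect(g, lo, hi, tol=1e-10, max_iter=200):
        g_lo = g(lo)
        if g_lo * g(hi) > 0:
            raise ValueError("root not bracketed on [lo, hi]")
        for _ in range(max_iter):
            mid = 0.5 * (lo + hi)
            g_mid = g(mid)
            if abs(g_mid) < tol or hi - lo < tol:
                return mid
            if g_lo * g_mid < 0:
                hi = mid          # root lies in the left half
            else:
                lo, g_lo = mid, g_mid  # root lies in the right half
        return 0.5 * (lo + hi)
    ```

    Bisection converges unconditionally once a sign change is bracketed, which is the property that makes interval-based recasting attractive for guaranteed accuracy.
    
    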

  10. The use of experimental bending tests to more accurate numerical description of TBC damage process

    NASA Astrophysics Data System (ADS)

    Sadowski, T.; Golewski, P.

    2016-04-01

    Thermal barrier coatings (TBCs) have been extensively used in aircraft engines to protect critical engine parts such as blades and combustion chambers, which are exposed to high temperatures and a corrosive environment. The blades of turbine engines are additionally exposed to high mechanical loads, created by the high rotational speed of the rotor (30 000 rot/min) and causing tensile and bending stresses. Therefore, experimental testing of coated samples is necessary in order to determine the strength properties of TBCs. Beam samples with dimensions 50×10×2 mm were used in these studies. The TBC system consisted of a 150 μm thick bond coat (NiCoCrAlY) and a 300 μm thick top coat (YSZ) made by the APS (air plasma spray) process. Samples were tested in three-point bending under various loads. After the bending tests, the samples were subjected to microscopic observation to determine the number of cracks and their depth. These results were used to build a numerical model and calibrate material data in the Abaqus program. A brittle cracking damage model was applied to the TBC layer, which allows elements to be removed once the damage criterion is reached. Surface-based cohesive behavior was used to model the delamination which may occur at the boundary between the bond coat and the top coat.

  11. On the numerical integration of FPU-like systems

    NASA Astrophysics Data System (ADS)

    Benettin, G.; Ponno, A.

    2011-03-01

    This paper concerns the numerical integration of systems of harmonic oscillators coupled by nonlinear terms, like the common FPU models. We show that the most used integration algorithm, namely leap-frog, behaves very gently with such models, preserving in a beautiful way some peculiar features which are known to be very important in the dynamics, in particular the “selection rules” which regulate the interaction among normal modes. This explains why leap-frog, in spite of being a low order algorithm, behaves so well, as numerical experimentalists always observed. At the same time, we show how the algorithm can be improved by introducing, at a low cost, a “counterterm” which eliminates the dominant numerical error.
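    The leap-frog scheme whose good behavior on FPU-like chains the paper analyzes can be written, in its velocity-Verlet form for a general force law q'' = F(q), as follows (a generic sketch; the single-oscillator test below stands in for a full FPU chain):

    ```python
    # Leap-frog / velocity-Verlet integration of q'' = F(q).
    # Symplectic and time-reversible, so energy errors stay bounded
    # instead of drifting, despite the method's low (second) order.
    def leapfrog(F, q, p, h, n_steps):
        a = F(q)
        for _ in range(n_steps):
            p_half = p + 0.5 * h * a   # half kick
            q = q + h * p_half         # full drift
            a = F(q)
            p = p_half + 0.5 * h * a   # half kick
        return q, p
    ```

    For a unit harmonic oscillator (F(q) = -q) the energy (p^2 + q^2)/2 oscillates within O(h^2) of its initial value over arbitrarily long runs, which is the "gentle" behavior the abstract describes.
    
    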

  12. An accurate and efficient acoustic eigensolver based on a fast multipole BEM and a contour integral method

    NASA Astrophysics Data System (ADS)

    Zheng, Chang-Jun; Gao, Hai-Feng; Du, Lei; Chen, Hai-Bo; Zhang, Chuanzeng

    2016-01-01

    An accurate numerical solver is developed in this paper for eigenproblems governed by the Helmholtz equation and formulated through the boundary element method. A contour integral method is used to convert the nonlinear eigenproblem into an ordinary eigenproblem, so that eigenvalues can be extracted accurately by solving a set of standard boundary element systems of equations. In order to accelerate the solution procedure, the parameters affecting the accuracy and efficiency of the method are studied and two contour paths are compared. Moreover, a wideband fast multipole method is implemented with a block IDR(s) solver to reduce the overall solution cost of the boundary element systems of equations with multiple right-hand sides. The Burton-Miller formulation is employed to identify the fictitious eigenfrequencies of the interior acoustic problems with multiply connected domains. The actual effect of the Burton-Miller formulation on tackling the fictitious eigenfrequency problem is investigated and the optimal choice of the coupling parameter as α = i / k is confirmed through exterior sphere examples. Furthermore, the numerical eigenvalues obtained by the developed method are compared with the results obtained by the finite element method to show the accuracy and efficiency of the developed method.

  13. Wakeful rest promotes the integration of spatial memories into accurate cognitive maps.

    PubMed

    Craig, Michael; Dewar, Michaela; Harris, Mathew A; Della Sala, Sergio; Wolbers, Thomas

    2016-02-01

    Flexible spatial navigation, e.g. the ability to take novel shortcuts, is contingent upon accurate mental representations of environments-cognitive maps. These cognitive maps critically depend on hippocampal place cells. In rodents, place cells replay recently travelled routes, especially during periods of behavioural inactivity (sleep/wakeful rest). This neural replay is hypothesised to promote not only the consolidation of specific experiences, but also their wider integration, e.g. into accurate cognitive maps. In humans, rest promotes the consolidation of specific experiences, but the effect of rest on the wider integration of memories remained unknown. In the present study, we examined the hypothesis that cognitive map formation is supported by rest-related integration of new spatial memories. We predicted that if wakeful rest supports cognitive map formation, then rest should enhance knowledge of overarching spatial relations that were never experienced directly during recent navigation. Forty young participants learned a route through a virtual environment before either resting wakefully or engaging in an unrelated perceptual task for 10 min. Participants in the wakeful rest condition performed more accurately in a delayed cognitive map test, requiring the pointing to landmarks from a range of locations. Importantly, the benefit of rest could not be explained by active rehearsal, but can be attributed to the promotion of consolidation-related activity. These findings (i) resonate with the demonstration of hippocampal replay in rodents, and (ii) provide the first evidence that wakeful rest can improve the integration of new spatial memories in humans, a function that has, hitherto, been associated with sleep.

  14. Using MACSYMA to drive numerical methods to compute radiation integrals

    SciTech Connect

    Clark, B.A.

    1986-01-01

    Because the emission of thermal radiation is characterized by the Planck emission spectrum, a multigroup solution of the thermal-radiation transport equation demands the calculation of definite integrals of the Planck spectrum. In the past, many approximate methods have been used with varying degrees of accuracy and efficiency. This paper describes how a symbolic algebra package, in this case MACSYMA, is used to develop new methods for accurately and efficiently evaluating multigroup Planck integrals. The advantage of using a symbolic algebra package is that the job of developing the new methods is accomplished more efficiently.
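    The group integrals in question are definite integrals of the normalized Planck function b(x) = x^3 / (e^x - 1); over (0, inf) the integral equals pi^4/15. A brute-force quadrature baseline, against which specialized methods like those in the record can be compared, might look like this (composite Simpson with a truncated upper limit; my own illustration, not the paper's method):

    ```python
    import math

    # Normalized Planck integrand; expm1 avoids cancellation for small x,
    # and the limit at x = 0 is 0 (the integrand behaves like x^2 there).
    def planck_integrand(x):
        return x ** 3 / math.expm1(x) if x > 0 else 0.0

    # Composite Simpson's rule on [a, b] with n (even) subintervals.
    def simpson(f, a, b, n=2000):
        h = (b - a) / n
        s = f(a) + f(b)
        s += 4.0 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
        s += 2.0 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
        return s * h / 3.0
    ```

    Truncating the infinite upper limit at x = 50 introduces only an exponentially small tail error, so the result matches pi^4/15 to several digits.
    
    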

  15. Ensemble-type numerical uncertainty information from single model integrations

    SciTech Connect

    Rauser, Florian Marotzke, Jochem; Korn, Peter

    2015-07-01

    We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size as those of a stochastic physics ensemble.

  16. Stability of numerical integration techniques for transient rotor dynamics

    NASA Technical Reports Server (NTRS)

    Kascak, A. F.

    1977-01-01

    A finite element model of a rotor bearing system was analyzed to determine the stability limits of the forward, backward, and centered Euler; Runge-Kutta; Milne; and Adams numerical integration techniques. The analysis concludes that the highest frequency mode determines the maximum time step for a stable solution. Thus, the number of mass elements should be minimized. Increasing the damping can sometimes cause numerical instability. For a uniform shaft, with 10 mass elements, operating at approximately the first critical speed, the maximum time step for the Runge-Kutta, Milne, and Adams methods is that which corresponds to approximately 1 degree of shaft movement. This is independent of rotor dimensions.
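    The stability limits discussed in the record above are easy to demonstrate on the scalar stiff test equation y' = -k*y: an explicit method's step size is capped by the fastest mode, while an implicit step is not (a textbook illustration of my own, not the paper's rotor model):

    ```python
    # Forward vs. backward Euler on y' = -k*y, y(0) = y0.
    # Forward Euler is stable only for h*k < 2; backward Euler
    # is unconditionally stable for this decaying problem.
    def forward_euler(k, y0, h, n):
        y = y0
        for _ in range(n):
            y = y + h * (-k * y)   # amplification factor (1 - h*k)
        return y

    def backward_euler(k, y0, h, n):
        y = y0
        for _ in range(n):
            y = y / (1.0 + h * k)  # amplification factor 1/(1 + h*k)
        return y
    ```

    With k = 100 and h = 0.05 (so h*k = 5), forward Euler blows up while backward Euler decays monotonically, mirroring the abstract's conclusion that the highest-frequency mode fixes the maximum stable time step of explicit schemes.
    
    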

  17. Orbits of real and fictitious asteroids studied by numerical integration

    NASA Astrophysics Data System (ADS)

    Schubart, J.

    1994-05-01

    The paper starts with a review of the author's various numerical studies on asteroid orbits, ruled by the violent evolution of the computer technique, and continues with a collection of starting values of orbital elements. This collection supplements the author's numerous papers on orbits at resonances of mean motion with respect to Jupiter. Especially, it refers to work on Trojan-type motion, mainly done together with R. Bien, and to the Hilda and Hecuba cases of resonance. It will allow the extension of intervals covered by numerical integration in interesting cases. The collection contains hitherto unpublished examples of orbits and additional comments. In particular, special remarks and some new results refer to low-eccentricity motion of Hecuba type.

  18. An accurate numerical solution to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in rivers

    NASA Astrophysics Data System (ADS)

    Stecca, Guglielmo; Siviglia, Annunziato; Blom, Astrid

    2016-07-01

    We present an accurate numerical approximation to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in one space dimension. Our solution procedure originates from the fully-unsteady matrix-vector formulation developed in [54]. The principal part of the problem is solved by an explicit Finite Volume upwind method of the path-conservative type, by which all the variables are updated simultaneously in a coupled fashion. The solution to the principal part is embedded into a splitting procedure for the treatment of frictional source terms. The numerical scheme is extended to second-order accuracy and includes a bookkeeping procedure for handling the evolution of size stratification in the substrate. We develop a concept of balancedness for the vertical mass flux between the substrate and active layer under bed degradation, which prevents the occurrence of non-physical oscillations in the grainsize distribution of the substrate. We suitably modify the numerical scheme to respect this principle. We finally verify the accuracy in our solution to the equations, and its ability to reproduce one-dimensional morphodynamics due to streamwise and vertical sorting, using three test cases. In detail, (i) we empirically assess the balancedness of vertical mass fluxes under degradation; (ii) we study the convergence to the analytical linearised solution for the propagation of infinitesimal-amplitude waves [54], which is here employed for the first time to assess a mixed-sediment model; (iii) we reproduce Ribberink's E8-E9 flume experiment [46].

  19. A robust and accurate numerical method for transcritical turbulent flows at supercritical pressure with an arbitrary equation of state

    NASA Astrophysics Data System (ADS)

    Kawai, Soshi; Terashima, Hiroshi; Negishi, Hideyo

    2015-11-01

    This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method based on REFPROP database for an accurate estimation of non-linear behaviors of thermodynamic and fluid transport properties at the transcritical conditions. Based on the look-up table method we propose a numerical method that satisfies high-order spatial accuracy, spurious-oscillation-free property, and capability of capturing the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity to the continuity and momentum equations in a physically-consistent manner in order to capture the steep transcritical thermodynamic variations robustly while maintaining spurious-oscillation-free property in the velocity field. The pressure evolution equation is derived from the full compressible Navier-Stokes equations and solved instead of solving the total energy equation to achieve the spurious pressure oscillation free property with an arbitrary equation of state including the present look-up table method. Flow problems with and without physical diffusion are employed for the numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.

  20. Accurate and automatic extrinsic calibration method for blade measurement system integrated by different optical sensors

    NASA Astrophysics Data System (ADS)

    He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu

    2014-11-01

    Fast and precise 3D inspection systems are in great demand in modern manufacturing processes. At present, the available sensors have their own pros and cons, and there hardly exists an omnipotent sensor able to handle complex inspection tasks in an accurate and effective way. The prevailing solution is to integrate multiple sensors and take advantage of their strengths. To obtain a holistic 3D profile, the data from the different sensors should be registered into a coherent coordinate system. However, some complex-shaped objects, such as blades, have thin-wall features for which the ICP registration method becomes unstable. Therefore, it is very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic parameter calibration method for a blade measurement system integrating different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated into a multi-axis motion platform, and the sensors can be optimally moved to any desired position at the object's surface. In order to simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation is used to roughly align the artifact datasets obtained by the FPS and CHS without any manual operation or data pre-processing, and then the generalized Gauss-Markoff model is used to estimate the optimal transformation parameters. The experiments show the measurement result of a blade, where several sampled patches are merged into one point cloud, verifying the performance of the proposed method.

  1. Simple and Efficient Numerical Evaluation of Near-Hypersingular Integrals

    NASA Technical Reports Server (NTRS)

    Fink, Patrick W.; Wilton, Donald R.; Khayat, Michael A.

    2007-01-01

    Recently, significant progress has been made in the handling of singular and nearly-singular potential integrals that commonly arise in the Boundary Element Method (BEM). To facilitate object-oriented programming and the handling of higher-order basis functions, cancellation techniques are favored over techniques involving singularity subtraction. However, gradients of the Newton-type potentials, which produce hypersingular kernels, are also frequently required in BEM formulations. As is the case with the potentials, treatment of the near-hypersingular integrals has proven more challenging than treating the limiting case in which the observation point approaches the surface. Historically, numerical evaluation of these near-hypersingularities has often involved a two-step procedure: a singularity subtraction to reduce the order of the singularity, followed by a boundary contour integral evaluation of the extracted part. Since this evaluation necessarily links the basis function, Green's function, and the integration domain (element shape), the approach fits poorly with object-oriented programming concepts. Thus, there is a need for cancellation-type techniques for efficient numerical evaluation of the gradient of the potential. Progress in the development of efficient cancellation-type procedures for the gradient potentials was recently presented. To the extent possible, a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. However, since the gradient kernel involves singularities of different orders, we also require that the transformation leave the remaining terms analytic. The terms "normal" and "tangential" are used herein with reference to the source element. Also, since computational formulations often involve the numerical evaluation of both potentials and their gradients, it is highly desirable that a single integration procedure efficiently handle both.

  2. Fast and accurate computation of system matrix for area integral model-based algebraic reconstruction technique

    NASA Astrophysics Data System (ADS)

    Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua

    2014-11-01

    Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate for better reconstruction quality than the line integral model (LIM). However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection area into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Overall, experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computation of the system matrix. For one iteration, the reconstruction speed of our AIM-based ART is also faster than that of the LIM-based ART using the Siddon algorithm and of the DDM-based ART. The fast reconstruction speed of our method was accomplished without compromising the image quality.
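
    The paper's analytic area-intersection method is not reproduced here, but the line-integral baseline it is benchmarked against can be sketched: a Siddon-style traversal computes one system-matrix row as the lengths of the ray segments inside each pixel, by merging the parametric crossings of the grid lines. This is a minimal illustration assuming a unit-pixel grid, not the published implementation.

```python
import numpy as np

def siddon_lengths(p0, p1, nx, ny):
    """Length of the segment p0->p1 inside each pixel of the unit-pixel
    grid [0, nx] x [0, ny] (one row of a line-integral system matrix)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    alphas = [0.0, 1.0]
    # parametric values where the ray crosses vertical/horizontal grid lines
    for axis, n in ((0, nx), (1, ny)):
        if d[axis] != 0:
            a = (np.arange(n + 1) - p0[axis]) / d[axis]
            alphas.extend(a[(a > 0) & (a < 1)])
    alphas = np.unique(np.clip(alphas, 0.0, 1.0))
    lengths = {}
    for a0, a1 in zip(alphas[:-1], alphas[1:]):
        mid = p0 + 0.5 * (a0 + a1) * d      # segment midpoint locates the pixel
        i, j = int(mid[0]), int(mid[1])
        if 0 <= i < nx and 0 <= j < ny:
            lengths[(i, j)] = (a1 - a0) * np.linalg.norm(d)
    return lengths

row = siddon_lengths((0.0, 0.5), (4.0, 2.5), 4, 4)
print(sum(row.values()))   # equals the full chord length sqrt(20)
```

    A quick sanity check is that the per-pixel lengths sum to the total chord length through the grid.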

  3. Numerical Algorithms for Acoustic Integrals - The Devil is in the Details

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.

    1996-01-01

    The accurate prediction of the aeroacoustic field generated by aerospace vehicles or nonaerospace machinery is necessary for designers to control and reduce source noise. Powerful computational aeroacoustic methods, based on various acoustic analogies (primarily the Lighthill acoustic analogy) and Kirchhoff methods, have been developed for prediction of noise from complicated sources, such as rotating blades. Both methods ultimately predict the noise through a numerical evaluation of an integral formulation. In this paper, we consider three generic acoustic formulations and several numerical algorithms that have been used to compute the solutions to these formulations. Algorithms for retarded-time formulations are the most efficient and robust, but they are difficult to implement for supersonic-source motion. Collapsing-sphere and emission-surface formulations are good alternatives when supersonic-source motion is present, but the numerical implementations of these formulations are more computationally demanding. New algorithms - which utilize solution adaptation to provide a specified error level - are needed.

  4. Development of highly accurate approximate scheme for computing the charge transfer integral.

    PubMed

    Pershin, Anton; Szalay, Péter G

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent to the exact formulation for symmetrical displacements, they are less efficient when describing the transfer integral along the asymmetric alteration coordinate. Since the "exact" scheme was found to be computationally expensive, we examine the possibility of obtaining the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the "exact" calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature. PMID:26298117

  5. Development of highly accurate approximate scheme for computing the charge transfer integral

    SciTech Connect

    Pershin, Anton; Szalay, Péter G.

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent to the exact formulation for symmetrical displacements, they are less efficient when describing the transfer integral along the asymmetric alteration coordinate. Since the “exact” scheme was found to be computationally expensive, we examine the possibility of obtaining the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the “exact” calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.

  6. An Improved Numerical Integration Method for Springback Predictions

    NASA Astrophysics Data System (ADS)

    Ibrahim, R.; Smith, L. M.; Golovashchenko, Sergey F.

    2011-08-01

    In this investigation, the focus is on the springback of steel sheets in V-die air bending. A numerical integration algorithm, rigorously presented in [1] for predicting springback in air bending, was fully replicated and confirmed. Alterations and extensions to the algorithm are proposed here. The altered approach used in solving the moment equation numerically resulted in springback values much closer to the trend presented by the experimental data. Although the investigation was extended to use a more realistic work-hardening model, the differences in the springback values obtained with the two hardening models were almost negligible. The algorithm was also extended to thin sheets down to 0.8 mm. Results show that this extension is valid, as verified by FEA and by published experiments on TRIP steel sheets.
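
    The core numerical task in such springback algorithms is integrating the bending moment through the sheet thickness. As a hedged sketch (the hardening law, material constants, and quadrature below are illustrative choices, not those of [1]), a midpoint rule across the thickness reproduces the elastic closed form M = E*kappa*t^3/12:

```python
import numpy as np

# Midpoint-rule integration of the bending moment per unit width,
#   M(kappa) = integral_{-t/2}^{t/2} sigma(kappa * y) * y dy,
# here with a purely elastic stress profile sigma = E * eps so the result
# can be checked against the closed form E * kappa * t^3 / 12.
E = 210e9       # Young's modulus, Pa (illustrative steel value)
t = 0.8e-3      # sheet thickness, m (the thin-sheet case from the abstract)

def moment(kappa, n=200):
    y = (np.arange(n) + 0.5) / n * t - t / 2.0   # midpoints across thickness
    dy = t / n
    sigma = E * kappa * y                          # elastic stress profile
    return np.sum(sigma * y) * dy

kappa = 2.0   # curvature, 1/m
print(moment(kappa), E * kappa * t ** 3 / 12.0)   # agree to ~1/n^2
```

    Swapping the `sigma` line for a work-hardening law (e.g. a power law in strain) is exactly the kind of alteration the abstract describes; the quadrature itself is unchanged.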

  7. Singularity Preserving Numerical Methods for Boundary Integral Equations

    NASA Technical Reports Server (NTRS)

    Kaneko, Hideaki (Principal Investigator)

    1996-01-01

    In the past twelve months (May 8, 1995 - May 8, 1996), under the cooperative agreement with Division of Multidisciplinary Optimization at NASA Langley, we have accomplished the following five projects: a note on the finite element method with singular basis functions; numerical quadrature for weakly singular integrals; superconvergence of degenerate kernel method; superconvergence of the iterated collocation method for Hammerstein equations; and singularity preserving Galerkin method for Hammerstein equations with logarithmic kernel. This final report consists of five papers describing these projects. Each project is preceded by a brief abstract.

  8. Trigonometrically fitted two step hybrid method for the numerical integration of second order IVPs

    NASA Astrophysics Data System (ADS)

    Monovasilis, Th.; Kalogiratou, Z.; Simos, T. E.

    2016-06-01

    In this work we consider the numerical integration of second order ODEs where the first derivative is missing. We construct trigonometrically fitted two step hybrid methods. We apply the new methods on the numerical integration of several test problems.
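
    The base scheme that two-step hybrid methods extend is the classical two-step (Störmer) recurrence for y'' = f(y), which uses no first derivative. The trigonometrically fitted coefficients of the paper are not reproduced here; this sketch shows only the underlying recurrence on the standard test problem y'' = -y:

```python
import math

# Classical two-step recurrence for y'' = f(y):
#   y_{n+1} = 2 y_n - y_{n-1} + h^2 f(y_n)
# Second-order accurate; trigonometrically fitted variants tune the
# coefficients so oscillatory solutions are integrated exactly.
def stoermer(f, y0, y1, h, steps):
    prev, cur = y0, y1
    for _ in range(steps):
        prev, cur = cur, 2.0 * cur - prev + h * h * f(cur)
    return cur

h, steps = 0.001, 1000
# test problem y'' = -y with exact solution y = cos(t); exact start values
y = stoermer(lambda y: -y, 1.0, math.cos(h), h, steps)
print(abs(y - math.cos(h * (steps + 1))))   # O(h^2) global error
```

    For oscillatory problems like this one, a trigonometrically fitted method would choose coefficients so that cos(wt) and sin(wt) are reproduced exactly for the fitted frequency w, removing the phase error the plain recurrence exhibits.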

  9. Quantitative evaluation of numerical integration schemes for Lagrangian particle dispersion models

    NASA Astrophysics Data System (ADS)

    Ramli, Huda Mohd.; Esler, J. Gavin

    2016-07-01

    A rigorous methodology for the evaluation of integration schemes for Lagrangian particle dispersion models (LPDMs) is presented. A series of one-dimensional test problems are introduced, for which the Fokker-Planck equation is solved numerically using a finite-difference discretisation in physical space and a Hermite function expansion in velocity space. Numerical convergence errors in the Fokker-Planck equation solutions are shown to be much less than the statistical error associated with a practical-sized ensemble (N = 10^6) of LPDM solutions; hence, the former can be used to validate the latter. The test problems are then used to evaluate commonly used LPDM integration schemes. The results allow for optimal time-step selection for each scheme, given a required level of accuracy. The following recommendations are made for use in operational models. First, if computational constraints require the use of moderate to long time steps, it is more accurate to solve the random displacement model approximation to the LPDM rather than use existing schemes designed for long time steps. Second, useful gains in numerical accuracy can be obtained, at moderate additional computational cost, by using the relatively simple "small-noise" scheme of Honeycutt.
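
    The random displacement model recommended above for long time steps reduces the LPDM to a position-only stochastic differential equation, dX = sqrt(2K) dW for constant diffusivity K. A minimal Euler-Maruyama sketch (illustrative parameters, not the paper's test problems) checks the defining property that the ensemble variance grows as 2Kt:

```python
import numpy as np

# Random displacement model with constant diffusivity K:
#   dX = sqrt(2 K) dW
# integrated with Euler-Maruyama.  The ensemble variance after time
# t = nsteps * dt should be close to 2 * K * t.
rng = np.random.default_rng(0)
K = 0.5            # diffusivity (illustrative)
dt = 0.01
nsteps = 100       # total time t = 1
n = 200_000        # ensemble size

x = np.zeros(n)
for _ in range(nsteps):
    x += np.sqrt(2.0 * K * dt) * rng.standard_normal(n)

print(x.var())     # close to 2 * K * t = 1.0
```

    A full LPDM would evolve velocity as well and include inhomogeneous K (with a drift correction term); the diffusion limit above is precisely what makes the scheme robust at long time steps.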

  10. ACCURATE ORBITAL INTEGRATION OF THE GENERAL THREE-BODY PROBLEM BASED ON THE D'ALEMBERT-TYPE SCHEME

    SciTech Connect

    Minesaki, Yukitaka

    2013-03-15

    We propose an accurate orbital integration scheme for the general three-body problem that retains all conserved quantities except angular momentum. The scheme is provided by an extension of the d'Alembert-type scheme for constrained autonomous Hamiltonian systems. Although the proposed scheme is merely second-order accurate, it can precisely reproduce some periodic, quasiperiodic, and escape orbits. The Levi-Civita transformation plays a role in designing the scheme.
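
    The d'Alembert-type scheme itself is not public in this abstract, but the design goal it shares with other structure-preserving orbit integrators can be illustrated with the standard second-order symplectic leapfrog (velocity Verlet) on the planar Kepler problem: conserved quantities stay bounded instead of drifting. This is a generic sketch, not the paper's method:

```python
import numpy as np

# Second-order leapfrog (velocity Verlet) for the planar Kepler problem
# with GM = 1.  Symplectic schemes keep the energy error bounded over
# long integrations -- the property orbit integrators are built around.
def accel(r):
    return -r / np.linalg.norm(r) ** 3

def energy(r, v):
    return 0.5 * np.dot(v, v) - 1.0 / np.linalg.norm(r)

r = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])          # circular orbit, period 2*pi
dt = 1e-3
E0 = energy(r, v)
for _ in range(int(2 * np.pi / dt)):   # roughly one full orbit
    v += 0.5 * dt * accel(r)           # half kick
    r += dt * v                        # drift
    v += 0.5 * dt * accel(r)           # half kick
print(abs(energy(r, v) - E0))          # small, bounded energy error
```

    The paper's scheme goes further, retaining all conserved quantities except angular momentum and handling close encounters via the Levi-Civita transformation; plain leapfrog conserves energy only approximately (though without secular drift).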

  11. Accurate Orbital Integration of the General Three-body Problem Based on the d'Alembert-type Scheme

    NASA Astrophysics Data System (ADS)

    Minesaki, Yukitaka

    2013-03-01

    We propose an accurate orbital integration scheme for the general three-body problem that retains all conserved quantities except angular momentum. The scheme is provided by an extension of the d'Alembert-type scheme for constrained autonomous Hamiltonian systems. Although the proposed scheme is merely second-order accurate, it can precisely reproduce some periodic, quasiperiodic, and escape orbits. The Levi-Civita transformation plays a role in designing the scheme.

  12. A fast numerical solution of scattering by a cylinder: Spectral method for the boundary integral equations

    NASA Technical Reports Server (NTRS)

    Hu, Fang Q.

    1994-01-01

    It is known that the exact analytic solutions of wave scattering by a circular cylinder, when they exist, are not in a closed form but in infinite series which converges slowly for high frequency waves. In this paper, we present a fast numerical solution for the scattering problem in which the boundary integral equations, reformulated from the Helmholtz equation, are solved using a Fourier spectral method. It is shown that the special geometry considered here allows the implementation of the spectral method to be simple and very efficient. The present method differs from previous approaches in that the singularities of the integral kernels are removed and dealt with accurately. The proposed method preserves the spectral accuracy and is shown to have an exponential rate of convergence. Aspects of efficient implementation using FFT are discussed. Moreover, the boundary integral equations of combined single and double-layer representation are used in the present paper. This ensures the uniqueness of the numerical solution for the scattering problem at all frequencies. Although a strongly singular kernel is encountered for the Neumann boundary conditions, we show that the hypersingularity can be handled easily in the spectral method. Numerical examples that demonstrate the validity of the method are also presented.
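
    The exponential convergence claimed above comes from Fourier spectral discretisation on the periodic circular boundary. A minimal sketch of that ingredient (not the full scattering solver) is FFT-based differentiation of a smooth periodic function, which reaches near machine precision with very few points:

```python
import numpy as np

# Fourier spectral differentiation on a periodic grid -- the circle is
# the natural parameter domain for the cylinder's boundary integral.
# For smooth (analytic) data the error decays exponentially in n.
n = 32
theta = 2.0 * np.pi * np.arange(n) / n
f = np.exp(np.sin(theta))                 # smooth periodic test function
k = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers
df = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))
exact = np.cos(theta) * np.exp(np.sin(theta))
print(np.max(np.abs(df - exact)))         # near machine precision at n = 32
```

    In the boundary-integral setting the same idea applies to the layer densities; the paper's contribution is removing the kernel singularities so this spectral accuracy survives.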

  13. Carbon Dioxide Dispersion in the Combustion Integrated Rack Simulated Numerically

    NASA Technical Reports Server (NTRS)

    Wu, Ming-Shin; Ruff, Gary A.

    2004-01-01

    When discharged into an International Space Station (ISS) payload rack, a carbon dioxide (CO2) portable fire extinguisher (PFE) must extinguish a fire by decreasing the oxygen in the rack by 50 percent within 60 sec. The length of time needed for this oxygen reduction throughout the rack and the length of time that the CO2 concentration remains high enough to prevent the fire from reigniting is important when determining the effectiveness of the response and postfire procedures. Furthermore, in the absence of gravity, the local flow velocity can make the difference between a fire that spreads rapidly and one that self-extinguishes after ignition. A numerical simulation of the discharge of CO2 from PFE into the Combustion Integrated Rack (CIR) in microgravity was performed to obtain the local velocity and CO2 concentration. The complicated flow field around the PFE nozzle exits was modeled by sources of equivalent mass and momentum flux at a location downstream of the nozzle. The time for the concentration of CO2 to reach a level that would extinguish a fire anywhere in the rack was determined using the Fire Dynamics Simulator (FDS), a computational fluid dynamics code developed by the National Institute of Standards and Technology specifically to evaluate the development of a fire and smoke transport. The simulation shows that CO2, as well as any smoke and combustion gases produced by a fire, would be discharged into the ISS cabin through the resource utility panel at the bottom of the rack. These simulations will be validated by comparing the results with velocity and CO2 concentration measurements obtained during the fire suppression system verification tests conducted on the CIR in March 2003. Once these numerical simulations are validated, portions of the ISS labs and living areas will be modeled to determine the local flow conditions before, during, and after a fire event. 
These simulations can yield specific information about how long it takes for smoke and

  14. Black shale weathering: An integrated field and numerical modeling study

    NASA Astrophysics Data System (ADS)

    Bolton, E. W.; Wildman, R. A., Jr.; Berner, R. A.; Eckert, J. O., Jr.; Petsch, S. T.; Mok, U.; Evans, B.

    2003-04-01

    We present an integrated study of black shale weathering in a near surface environment. Implications of this study contribute to our understanding of organic matter oxidation in uplifted sediments, along with erosion and reburial of ancient unoxidized organic matter, as major controls on atmospheric oxygen levels over geologic time. The field study used to launch the modeling effort is based on core samples from central-eastern Kentucky near Clay City (Late Devonian New Albany/Ohio Shale), where the strata are essentially horizontal. Samples from various depth intervals (up to 12 m depth) were analyzed for texture (SEM images), porosity fraction (0.02 to 0.1), and horizontal and vertical permeability (water and air permeabilities differ due to the fine-grained nature of the sediments, but are on the order of 0.01 and 1 millidarcy, respectively). Chemical analyses were also performed for per cent C, N, S, and basic mineralogy was determined (clays, quartz, pyrite, in addition to organic matter). The samples contained from 2 to 15 per cent ancient (non-modern soil) organic matter. These results were used in the creation of a numerical model for kinetically controlled oxidation of the organic matter within the shale (based on kinetics from Chang and Berner, 1999). The one-dimensional model includes erosion, oxygen diffusion in the partially saturated vadose zone as well as water percolation and solute transport. This study extends the studies of Petsch (2000) and the weathering component of Lasaga and Ohmoto (2002) to include more reactions (e.g., pyrite oxidation to sulfuric acid and weathering of silicates due to low pH) and to resolve the near-surface boundary layer. The model provides a convenient means of exploring the influence of variable rates of erosion, oxygen level, rainfall, as well as physical and chemical characteristics of the shale on organic matter oxidation.

  15. Data Integrity: Why Aren't the Data Accurate? AIR 1989 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Gose, Frank J.

    The accuracy and reliability aspects of data integrity are discussed, with an emphasis on the need for consistency in responsibility and authority. A variety of ways in which data integrity can be compromised are discussed. The following sources of data corruption are described, and the ease or difficulty of identification and suggested actions…

  16. Integrating Numerical Computation into the Modeling Instruction Curriculum

    ERIC Educational Resources Information Center

    Caballero, Marcos D.; Burk, John B.; Aiken, John M.; Thoms, Brian D.; Douglas, Scott S.; Scanlon, Erin M.; Schatz, Michael F.

    2014-01-01

    Numerical computation (the use of a computer to solve, simulate, or visualize a physical problem) has fundamentally changed the way scientific research is done. Systems that are too difficult to solve in closed form are probed using computation. Experiments that are impossible to perform in the laboratory are studied numerically. Consequently, in…

  17. Numerical Integration with GeoGebra in High School

    ERIC Educational Resources Information Center

    Herceg, Dorde; Herceg, Dragoslav

    2010-01-01

    The concept of definite integral is almost always introduced as the Riemann integral, which is defined in terms of the Riemann sum, and its geometric interpretation. This definition is hard to understand for high school students. With the aid of mathematical software for visualisation and computation of approximate integrals, the notion of…

  18. Full Wave Simulation of Integrated Circuits Using Hybrid Numerical Methods

    NASA Astrophysics Data System (ADS)

    Tan, Jilin

    Transmission lines play an important role in digital electronics, and in microwave and millimeter-wave circuits. Analysis, modeling, and design of transmission lines are critical to the development of the circuitry in the chip, subsystem, and system levels. In the past several decades, at the EM modeling level, the quasi-static approximation has been widely used due to its great simplicity. As the clock rates increase, the interconnect effects such as signal delay, distortion, dispersion, reflection, and crosstalk, limit the performance of microwave systems. Meanwhile, the quasi-static approach loses its validity for some complex system structures. Since successful system design of the PCB, MCM, and chip packaging relies heavily on computer aided EM level modeling and simulation, many new methods have been developed, such as the full wave approach, to guarantee successful design. Many difficulties exist in the rigorous EM level analysis. Some of these include the difficulties in describing the behavior of the conductors with finite thickness and finite conductivity, the field singularity, and the arbitrary multilayered multi-transmission lines structures. This dissertation concentrates on the full wave study of the multi-conductor transmission lines with finite conductivity and finite thickness buried in an arbitrary lossy multilayered environment. Two general approaches have been developed. The first one is the integral equation method in which the dyadic Green's function for arbitrary layered media has been correctly formulated and has been tested both analytically and numerically. By applying this method, the double layered high dielectric permittivity problem and the heavy dielectric loss problem in multilayered media in the CMOS circuit design have been solved. The second approach is the edge element method. 
In this study, the correct functional for the two dimensional propagation problem has been successfully constructed in a rigorous way

  19. A simple and accurate algorithm for path integral molecular dynamics with the Langevin thermostat.

    PubMed

    Liu, Jian; Li, Dezhang; Liu, Xinzijian

    2016-07-14

    We introduce a novel simple algorithm for thermostatting path integral molecular dynamics (PIMD) with the Langevin equation. The staging transformation of path integral beads is employed for demonstration. The optimum friction coefficients for the staging modes in the free particle limit are used for all systems. In comparison to the path integral Langevin equation thermostat, the new algorithm exploits a different order of splitting for the phase space propagator associated with the Langevin equation. While the error analysis is made for both algorithms, they are also employed in the PIMD simulations of three realistic systems (the H2O molecule, liquid para-hydrogen, and liquid water) for comparison. It is shown that the new thermostat increases the time interval of PIMD by a factor of 4-6 or more for achieving the same accuracy. In addition, the supplementary material shows the error analysis made for the algorithms when the normal-mode transformation of path integral beads is used.
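
    The point that the splitting order of the Langevin propagator matters can be illustrated on a single harmonic oscillator with the well-known "BAOAB" splitting. This is an assumption-laden sketch, not the paper's staging PIMD thermostat: B is a half kick, A a half drift, and O an exact Ornstein-Uhlenbeck step; the configurational average <x^2> should match kT/k.

```python
import numpy as np

# "BAOAB" splitting of the Langevin propagator for a harmonic oscillator
# (m = k = kT = 1).  Different orderings of the B, A, O sub-steps give
# different accuracy in configurational averages -- the theme of the
# thermostat comparison above.
rng = np.random.default_rng(1)
gamma, dt, nsteps = 1.0, 0.1, 200_000
c1 = np.exp(-gamma * dt)              # exact OU damping over dt
c2 = np.sqrt(1.0 - c1 * c1)           # matching fluctuation amplitude

x, p = 0.0, 0.0
samples = []
for _ in range(nsteps):
    p -= 0.5 * dt * x                            # B: half kick (force = -x)
    x += 0.5 * dt * p                            # A: half drift
    p = c1 * p + c2 * rng.standard_normal()      # O: exact OU step
    x += 0.5 * dt * p                            # A: half drift
    p -= 0.5 * dt * x                            # B: half kick
    samples.append(x * x)

print(np.mean(samples))   # <x^2> should be close to kT/k = 1
```

    In PIMD the same structure is applied mode by mode (staging or normal modes), with per-mode friction coefficients as described in the abstract.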

  20. A simple and accurate algorithm for path integral molecular dynamics with the Langevin thermostat

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Li, Dezhang; Liu, Xinzijian

    2016-07-01

    We introduce a novel simple algorithm for thermostatting path integral molecular dynamics (PIMD) with the Langevin equation. The staging transformation of path integral beads is employed for demonstration. The optimum friction coefficients for the staging modes in the free particle limit are used for all systems. In comparison to the path integral Langevin equation thermostat, the new algorithm exploits a different order of splitting for the phase space propagator associated with the Langevin equation. While the error analysis is made for both algorithms, they are also employed in the PIMD simulations of three realistic systems (the H2O molecule, liquid para-hydrogen, and liquid water) for comparison. It is shown that the new thermostat increases the time interval of PIMD by a factor of 4-6 or more for achieving the same accuracy. In addition, the supplementary material shows the error analysis made for the algorithms when the normal-mode transformation of path integral beads is used.

  1. Numerical parameter constraints for accurate PIC-DSMC simulation of breakdown from arc initiation to stable arcs

    NASA Astrophysics Data System (ADS)

    Moore, Christopher; Hopkins, Matthew; Moore, Stan; Boerner, Jeremiah; Cartwright, Keith

    2015-09-01

    Simulation of breakdown is important for understanding and designing a variety of applications such as mitigating undesirable discharge events. Such simulations need to be accurate through early time arc initiation to late time stable arc behavior. Here we examine constraints on the timestep and mesh size required for arc simulations using the particle-in-cell (PIC) method with direct simulation Monte Carlo (DSMC) collisions. Accurate simulation of electron avalanche across a fixed voltage drop and constant neutral density (reduced field of 1000 Td) was found to require a timestep ~ 1/100 of the mean time between collisions and a mesh size ~ 1/25 the mean free path. These constraints are much smaller than the typical PIC-DSMC requirements for timestep and mesh size. Both constraints are related to the fact that charged particles are accelerated by the external field. Thus, gradients in the electron energy distribution function can exist at scales smaller than the mean free path, and these must be resolved by the mesh size for accurate collision rates. Additionally, the timestep must be small enough that the particle energy change due to the fields is small, in order to capture gradients in the cross sections versus energy. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
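
    The stated constraints translate directly into resolution requirements once the collision parameters are known. The sketch below uses illustrative inputs (the cross section, density, and electron speed are assumptions, not the paper's values) to turn lambda = 1/(n*sigma) and tau = lambda/v into a maximum mesh size and timestep:

```python
# PIC-DSMC resolution estimate from the constraints in the abstract:
#   mesh size  <= lambda / 25   (lambda = mean free path = 1/(n * sigma))
#   timestep   <= tau / 100     (tau = mean free time = lambda / v)
# Input values are illustrative, not taken from the paper.
n_neutral = 3.3e22        # neutral density, m^-3
sigma = 1.0e-19           # collision cross section, m^2
v_th = 1.0e6              # characteristic electron speed, m/s

lam = 1.0 / (n_neutral * sigma)   # mean free path, m
tau = lam / v_th                  # mean time between collisions, s
dx_max = lam / 25.0
dt_max = tau / 100.0
print(lam, dx_max, dt_max)        # ~3e-4 m, ~1e-5 m, ~3e-12 s
```

    The resulting picosecond-scale timestep and tens-of-microns mesh illustrate why these constraints are much more demanding than standard PIC-DSMC practice.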

  2. An accurate method for evaluating the kernel of the integral equation relating lift to downwash in unsteady potential flow

    NASA Technical Reports Server (NTRS)

    Desmarais, R. N.

    1982-01-01

    The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. The method can be used to generate approximations to attain any desired trade-off between accuracy and computing cost.
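
    The key structural idea is that once the exponents are fixed as a geometric sequence, the coefficients follow from a *linear* least-squares problem, so the fit is fully automated. The sketch below uses 1/sqrt(1 + u^2) as a stand-in for the kernel's algebraic part (the actual kernel function, interval, and term count of the paper are not reproduced here):

```python
import numpy as np

# Approximate f(u) = 1/sqrt(1 + u^2) on [0, 10] by sum_k c_k exp(-a_k u),
# with exponents a_k fixed as a geometric sequence.  With the a_k fixed,
# solving for the coefficients c_k is an ordinary linear least-squares
# problem -- no nonlinear fitting is required.
u = np.linspace(0.0, 10.0, 400)
f = 1.0 / np.sqrt(1.0 + u * u)           # stand-in for the kernel's algebraic part

a = 0.05 * 2.0 ** np.arange(10)          # geometric exponent sequence
A = np.exp(-np.outer(u, a))              # design matrix, shape (400, 10)
c, *_ = np.linalg.lstsq(A, f, rcond=None)

err = np.max(np.abs(A @ c - f))
print(err)                               # small uniform error on the grid
```

    Because each term is a pure exponential, the downstream kernel integrals can then be evaluated termwise in closed form, which is what makes this representation attractive.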

  3. Implicit numerical integration for periodic solutions of autonomous nonlinear systems

    NASA Technical Reports Server (NTRS)

    Thurston, G. A.

    1982-01-01

    A change of variables that stabilizes numerical computations for periodic solutions of autonomous systems is derived. Computation of the period is decoupled from the rest of the problem for conservative systems of any order and for any second-order system. Numerical results are included for a second-order conservative system under a suddenly applied constant load. Near the critical load for the system, a small increment in load amplitude results in a large increase in amplitude of the response.

  4. Multidimensional Genome-wide Analyses Show Accurate FVIII Integration by ZFN in Primary Human Cells

    PubMed Central

    Sivalingam, Jaichandran; Kenanov, Dimitar; Han, Hao; Nirmal, Ajit Johnson; Ng, Wai Har; Lee, Sze Sing; Masilamani, Jeyakumar; Phan, Toan Thang; Maurer-Stroh, Sebastian; Kon, Oi Lian

    2016-01-01

    Costly coagulation factor VIII (FVIII) replacement therapy is a barrier to optimal clinical management of hemophilia A. Therapy using FVIII-secreting autologous primary cells is potentially efficacious and more affordable. Zinc finger nucleases (ZFN) mediate transgene integration into the AAVS1 locus but comprehensive evaluation of off-target genome effects is currently lacking. In light of serious adverse effects in clinical trials which employed genome-integrating viral vectors, this study evaluated potential genotoxicity of ZFN-mediated transgenesis using different techniques. We employed deep sequencing of predicted off-target sites, copy number analysis, whole-genome sequencing, and RNA-seq in primary human umbilical cord-lining epithelial cells (CLECs) with AAVS1 ZFN-mediated FVIII transgene integration. We combined molecular features to enhance the accuracy and activity of ZFN-mediated transgenesis. Our data showed a low frequency of ZFN-associated indels, no detectable off-target transgene integrations or chromosomal rearrangements. ZFN-modified CLECs had very few dysregulated transcripts and no evidence of activated oncogenic pathways. We also showed AAVS1 ZFN activity and durable FVIII transgene secretion in primary human dermal fibroblasts, bone marrow- and adipose tissue-derived stromal cells. Our study suggests that, with close attention to the molecular design of genome-modifying constructs, AAVS1 ZFN-mediated FVIII integration in several primary human cell types may be safe and efficacious. PMID:26689265

  5. Accurate integration of segmented x-ray optics using interfacing ribs

    NASA Astrophysics Data System (ADS)

    Civitani, Marta Maria; Basso, Stefano; Citterio, Oberto; Conconi, Paolo; Ghigo, Mauro; Pareschi, Giovanni; Proserpio, Laura; Salmaso, Bianca; Sironi, Giorgia; Spiga, Daniele; Tagliaferri, Gianpiero; Zambra, Alberto; Martelli, Francesco; Parodi, Giancarlo; Fumi, Pierluigi; Gallieni, Daniele; Tintori, Matteo; Bavdaz, Marcos; Wille, Eric

    2013-09-01

    Future lightweight and long-focal-length x-ray telescopes must guarantee a good angular resolution (e.g., 5 arc sec HEW) and reach an unprecedented large effective area. This goal can be reached with the slumping of borosilicate glass sheets that allow the fabrication of lightweight and low-cost x-ray optical units (XOU). These XOUs, based on mirror segments, have to be assembled together to form complete multishell Wolter-I optics. The technology for the fabrication and the integration of these XOUs is under development in Europe, funded by European Space Agency, and led by the Brera Observatory (INAF-OAB). While the achievement of the required surface accuracy on the glass segments by means of a hot slumping technique is a challenging aspect, adequate attention must be given to the correct integration and coalignment of the mirror segments into the XOUs. To this aim, an innovative assembly concept has been investigated, based on glass reinforcing ribs. The ribs connect pairs of consecutive foils, stacked into a XOU, with both structural and functional roles, providing robust monolithic stacks of mirror plates. Moreover, this integration concept allows the correction of residual low-frequency errors still present on the mirror foil profile after slumping. We present the integration concept, the related error budget, and the results achieved so far with a semi-robotic integration machine especially designed and realized to assemble slumped glass foils into XOUs.

  6. Experimental analysis and numerical modeling of mollusk shells as a three dimensional integrated volume.

    PubMed

    Faghih Shojaei, M; Mohammadi, V; Rajabi, H; Darvizeh, A

    2012-12-01

    In this paper, a new numerical technique is presented to accurately model the geometrical and mechanical features of mollusk shells as a three dimensional (3D) integrated volume. For this purpose, the Newton method is used to solve the nonlinear equations of shell surfaces. The points of intersection on the shell surface are identified and the extra interior parts are removed. The meshing process is carried out with respect to the coordinates of each point of intersection. The final 3D generated mesh models perfectly describe the spatial configuration of the mollusk shells. Moreover, the computational model perfectly matches the actual interior geometry of the shells as well as their exterior architecture. The direct generation technique is employed to generate a 3D finite element (FE) model in ANSYS 11. X-ray images are taken to show the close similarity of the interior geometry of the models and the actual samples. A scanning electron microscope (SEM) is used to provide information on the microstructure of the shells. In addition, a set of compression tests were performed on gastropod shell specimens to obtain their ultimate compressive strength. A close agreement between experimental data and the relevant numerical results is demonstrated. PMID:23137621

  7. Switched integration amplifier-based photocurrent meter for accurate spectral responsivity measurement of photometers.

    PubMed

    Park, Seongchong; Hong, Kee-Suk; Kim, Wan-Seop

    2016-03-20

    This work introduces a switched integration amplifier (SIA)-based photocurrent meter for femtoampere (fA)-level current measurement, which enables us to measure a 10^7 dynamic range of spectral responsivity of photometers even with a common lamp-based monochromatic light source. We describe in detail the design considerations and practices regarding operational amplifiers (op-amps), switches, readout methods, etc., for composing a stable SIA with low offset current in terms of leakage current and gain peaking. Following this design, we made six SIAs with different integration capacitances and different op-amps and evaluated their offset currents. They showed an offset current of (1.5-85) fA with a slow variation of (0.5-10) fA over an hour under open input. When a detector was connected to the SIA input, the offset current and its variation increased and the SIA readout became noisier due to the finite shunt resistance and nonzero shunt capacitance of the detector. One of the SIAs, with 10 pF nominal capacitance, was calibrated using a calibrated current source at current levels of 10 nA to 1 fA and integration times of 2 to 65,536 ms. As a result, we obtained a calibration formula for the integration capacitance as a function of integration time, rather than a single capacitance value, because the SIA readout showed a distinct dependence on integration time at a given current level. Finally, we applied it to the spectral responsivity measurement of a photometer. It is demonstrated that the home-made SIA of 10 pF was capable of measuring a 10^7 dynamic range of spectral responsivity of a photometer. PMID:27140564
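The charge-integration principle behind an SIA can be illustrated with a back-of-the-envelope calculation: the output of an integrator with capacitance C obeys V(t) = (1/C)∫I dt, so a constant input current is recovered from the output slope. The numbers below are hypothetical, chosen only to match the 10 pF capacitance mentioned above:

```python
# Hypothetical readout: a 10 pF integration capacitor whose output ramps
# by 0.2 V over a 2 s integration time.
C = 10e-12      # integration capacitance in farads (10 pF)
delta_v = 0.2   # output voltage change in volts
t_int = 2.0     # integration time in seconds

# An integrator obeys V(t) = (1/C) * integral of I dt, so a constant
# input current follows from the output slope: I = C * dV/dt.
current = C * delta_v / t_int   # 1e-12 A, i.e. 1 pA
```

This also shows why the capacitance calibration matters: any error in C maps directly into the recovered current.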

  8. Towards more accurate life cycle risk management through integration of DDP and PRA

    NASA Technical Reports Server (NTRS)

    Cornford, Steven L.; Paulos, Todd; Meshkat, Leila; Feather, Martin

    2003-01-01

    The focus of this paper is on the integration of PRA and DDP. The intent is twofold: to extend risk-based decision making through more of the lifecycle, and to lead to improved risk modeling (and hence better informed decision making) wherever it is applied, most especially in the early phases as designs begin to mature.

  9. Multi-omics integration accurately predicts cellular state in unexplored conditions for Escherichia coli

    PubMed Central

    Kim, Minseung; Rai, Navneet; Zorraquino, Violeta; Tagkopoulos, Ilias

    2016-01-01

    A significant obstacle in training predictive cell models is the lack of integrated data sources. We develop semi-supervised normalization pipelines and perform experimental characterization (growth, transcriptional, proteome) to create Ecomics, a consistent, quality-controlled multi-omics compendium for Escherichia coli with cohesive meta-data information. We then use this resource to train a multi-scale model that integrates four omics layers to predict genome-wide concentrations and growth dynamics. The genetic and environmental ontology reconstructed from the omics data is substantially different and complementary to the genetic and chemical ontologies. The integration of different layers confers an incremental increase in the prediction performance, as does the information about the known gene regulatory and protein-protein interactions. The predictive performance of the model ranges from 0.54 to 0.87 for the various omics layers, which far exceeds various baselines. This work provides an integrative framework of omics-driven predictive modelling that is broadly applicable to guide biological discovery. PMID:27713404

  10. Multi-Sensor Data Integration for an Accurate 3D Model Generation

    NASA Astrophysics Data System (ADS)

    Chhatkuli, S.; Satoh, T.; Tachibana, K.

    2015-05-01

    The aim of this paper is to introduce a novel technique of data integration between two different data sets, i.e. a laser-scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more detail and better accuracy. In general, aerial imagery is used to create a 3D city model. Aerial imagery produces an overall decent 3D city model and is generally suited to generating 3D models of building roofs and some non-complex terrain. However, the 3D model automatically generated from aerial imagery generally suffers from a lack of accuracy in deriving the 3D model of roads under bridges, details under tree canopy, isolated trees, etc. Moreover, in many cases it also suffers from undulated road surfaces, non-conforming building shapes, and loss of minute details like street furniture. On the other hand, laser-scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, details under bridges, etc. However, laser-scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. Our proposed approach to integrating multi-sensor data compensates for each sensor's weaknesses and helps to create a very detailed 3D model with better accuracy. Moreover, additional details like isolated trees, street furniture, etc., which were missing in the original 3D model derived from aerial imagery, could also be integrated into the final model automatically. During the process, the noise in the laser-scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two datasets were acquired in different time periods, the integrated dataset, i.e. the final 3D model, was generally noise-free and without unnecessary details.

  11. Study of time-accurate integration of the variable-density Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoyi; Pantano, Carlos

    2015-11-01

    We present several theoretical elements that affect time-consistent integration of the low-Mach number approximation of the variable-density Navier-Stokes equations. The goal is for velocity, pressure, density, and scalars to achieve uniform order of accuracy, consistent with the time integrator being used. We show examples of second-order (using Crank-Nicolson and Adams-Bashforth) and third-order (using additive semi-implicit Runge-Kutta) uniform convergence with the proposed conceptual framework. Furthermore, the consistent approach can be extended to other time integrators. In addition, the method is formulated using approximate/incomplete factorization methods for easy incorporation in existing solvers. One of the observed benefits of the proposed approach is improved stability, even for large density differences, in comparison with other existing formulations. A linearized stability analysis is also carried out for some test problems to better understand the behavior of the approach. This work was supported in part by the Department of Energy, National Nuclear Security Administration, under award no. DE-NA0002382 and the California Institute of Technology.
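As a minimal illustration of one of the time integrators named above (not the paper's variable-density formulation), Crank-Nicolson applied to the scalar decay equation y' = -λy exhibits the expected second-order convergence:

```python
import math

def crank_nicolson_decay(lam, y0, t_end, n_steps):
    """Crank-Nicolson for y' = -lam*y:
    (y_{n+1} - y_n)/dt = -lam*(y_{n+1} + y_n)/2."""
    dt = t_end / n_steps
    factor = (1.0 - lam * dt / 2.0) / (1.0 + lam * dt / 2.0)
    y = y0
    for _ in range(n_steps):
        y *= factor
    return y

exact = math.exp(-1.0)
err_coarse = abs(crank_nicolson_decay(1.0, 1.0, 1.0, 50) - exact)
err_fine = abs(crank_nicolson_decay(1.0, 1.0, 1.0, 100) - exact)
ratio = err_coarse / err_fine   # about 4: halving dt quarters the error
```

Uniform order of accuracy across all variables, as the paper targets, means every field shows this same convergence rate.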

  12. The numerical integration and 3-D finite element formulation of a viscoelastic model of glass

    SciTech Connect

    Chambers, R.S.

    1994-08-01

    The use of glasses is widespread in making hermetic, insulating seals for many electronic components. Flat panel displays and fiber optic connectors are other products utilizing glass as a structural element. When glass is cooled from sealing temperatures, residual stresses are generated due to mismatches in thermal shrinkage created by the dissimilar material properties of the adjoining materials. Because glass is such a brittle material at room temperature, tensile residual stresses must be kept small to ensure durability and avoid cracking. Although production designs and the required manufacturing process development can be deduced empirically, this is an expensive and time consuming process that does not necessarily lead to an optimal design. Agile manufacturing demands that analyses be used to reduce development costs and schedules by providing insight and guiding the design process through the development cycle. To make these gains, however, viscoelastic models of glass must be available along with the right tool to use them. A viscoelastic model of glass can be used to simulate the stress and volume relaxation that occurs at elevated temperatures as the molecular structure of the glass seeks to equilibrate to the state of the supercooled liquid. The substance of the numerical treatment needed to support the implementation of the model in a 3-D finite element program is presented herein. An accurate second-order, central difference integrator is proposed for the constitutive equations, and numerical solutions are compared to those obtained with other integrators. Inherent convergence problems are reviewed and fixes are described. The resulting algorithms are generally applicable to the broad class of viscoelastic material models. First-order error estimates are used as a basis for developing a scheme for automatic time step controls, and several demonstration problems are presented to illustrate the performance of the methodology.

  13. Enabling fast, stable and accurate peridynamic computations using multi-time-step integration

    DOE PAGES

    Lindsay, P.; Parks, M. L.; Prakash, A.

    2016-04-13

    Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, the performance of the proposed method in terms of stability, accuracy, and computational cost is examined, and several numerical examples are presented to corroborate the findings.
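The multi-time-step idea (small steps in a refined subdomain, large steps elsewhere) can be sketched on a toy problem. The example below uses two decoupled decay equations and forward-Euler subcycling; it is only a schematic stand-in for the peridynamic formulation, and all parameter values are illustrative:

```python
def subcycled_decay(lam_fast, lam_slow, dt, n_steps, m):
    """Advance two decoupled decay ODEs with different time steps:
    the 'fast' subdomain takes m forward-Euler substeps of size dt/m
    per single step of size dt taken by the 'slow' subdomain."""
    y_fast, y_slow = 1.0, 1.0
    for _ in range(n_steps):
        for _ in range(m):                 # subcycle the refined region
            y_fast *= (1.0 - lam_fast * dt / m)
        y_slow *= (1.0 - lam_slow * dt)    # one large step elsewhere
    return y_fast, y_slow

# lam_fast*dt = 2.5 would be unstable for a single forward-Euler step
# (amplification factor -1.5), but m = 4 substeps keep it stable.
y_fast, y_slow = subcycled_decay(lam_fast=25.0, lam_slow=1.0, dt=0.1,
                                 n_steps=10, m=4)
```

The slow region pays only one function evaluation per large step, which is the source of the computational savings claimed above.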

  14. Accurate Detection of Interaural Time Differences by a Population of Slowly Integrating Neurons

    NASA Astrophysics Data System (ADS)

    Vasilkov, Viacheslav A.; Tikidji-Hamburyan, Ruben A.

    2012-03-01

    For localization of a sound source, animals and humans process the microsecond interaural time differences of arriving sound waves. How nervous systems, consisting of elements with time constants of about 1 ms and more, can reach such high precision is still an open question. In this Letter we present a hypothesis and show theoretical and computational evidence that a rather large population of slowly integrating neurons with inhibitory and excitatory inputs (EI neurons) can detect minute temporal disparities in input signals, significantly smaller than any time constant in the system.

  15. Numerical integration of population models satisfying conservation laws: NSFD methods.

    PubMed

    Mickens, Ronald E

    2007-10-01

    Population models arising in ecology, epidemiology and mathematical biology may involve a conservation law, i.e. the total population is constant. In addition to these cases, other situations may occur in which the total population asymptotically approaches a constant value in time. Since it is rarely the case that the equations of motion can be analytically solved to obtain exact solutions, numerical techniques are needed to provide solutions. However, numerical procedures are only valid if they can reproduce fundamental properties of the differential equations modeling the phenomena of interest. We show that for population models involving a dynamical conservation law, the use of nonstandard finite difference (NSFD) methods allows the construction of discretization schemes that are dynamically consistent (DC) with the original differential equations. The paper briefly discusses the NSFD methodology and the concept of DC, and illustrates their application to specific problems for population models.
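A classic NSFD construction (a standard textbook example in Mickens' framework, shown here as a sketch rather than one of the paper's specific schemes) is the logistic equation u' = u(1 - u), discretized so that positivity and the fixed points survive for any step size:

```python
def nsfd_logistic(u0, phi, n_steps):
    """Mickens-style NSFD scheme for u' = u(1 - u):
    (u_{k+1} - u_k)/phi = u_k - u_{k+1}*u_k,
    which solves to u_{k+1} = (1 + phi)*u_k / (1 + phi*u_k).
    Positivity and the fixed points u = 0 and u = 1 are preserved
    for any step size phi > 0."""
    u = u0
    for _ in range(n_steps):
        u = (1.0 + phi) * u / (1.0 + phi * u)
    return u

# Even with a large step size the iterates stay positive and approach
# u = 1, where a standard forward-Euler discretization with the same
# step would oscillate or diverge.
u = nsfd_logistic(u0=0.01, phi=5.0, n_steps=20)
```

The nonlocal treatment of the nonlinear term (evaluating u² as u_{k+1}·u_k) is what makes the scheme dynamically consistent.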

  16. Numerical simulation of pharyngeal airflow applied to obstructive sleep apnea: effect of the nasal cavity in anatomically accurate airway models.

    PubMed

    Cisonni, Julien; Lucey, Anthony D; King, Andrew J C; Islam, Syed Mohammed Shamsul; Lewis, Richard; Goonewardene, Mithran S

    2015-11-01

    Repetitive brief episodes of soft-tissue collapse within the upper airway during sleep characterize obstructive sleep apnea (OSA), an extremely common and disabling disorder. Failure to maintain the patency of the upper airway is caused by the combination of sleep-related loss of compensatory dilator muscle activity and aerodynamic forces promoting closure. The prediction of soft-tissue movement in patient-specific airway 3D mechanical models is emerging as a useful contribution to clinical understanding and decision making. Such modeling requires reliable estimations of the pharyngeal wall pressure forces. While nasal obstruction has been recognized as a risk factor for OSA, the need to include the nasal cavity in upper-airway models for OSA studies requires consideration, as it is most often omitted because of its complex shape. A quantitative analysis of the flow conditions generated by the nasal cavity and the sinuses during inspiration upstream of the pharynx is presented. Results show that adequate velocity boundary conditions and simple artificial extensions of the flow domain can reproduce the essential effects of the nasal cavity on the pharyngeal flow field. Therefore, the overall complexity and computational cost of accurate flow predictions can be reduced.

  17. An integrative variant analysis pipeline for accurate genotype/haplotype inference in population NGS data

    PubMed Central

    Wang, Yi; Lu, James; Yu, Jin; Gibbs, Richard A.; Yu, Fuli

    2013-01-01

    Next-generation sequencing is a powerful approach for discovering genetic variation. Sensitive variant calling and haplotype inference from population sequencing data remain challenging. We describe methods for high-quality discovery, genotyping, and phasing of SNPs for low-coverage (approximately 5×) sequencing of populations, implemented in a pipeline called SNPTools. Our pipeline contains several innovations that specifically address challenges caused by low-coverage population sequencing: (1) effective base depth (EBD), a nonparametric statistic that enables more accurate statistical modeling of sequencing data; (2) variance ratio scoring, a variance-based statistic that discovers polymorphic loci with high sensitivity and specificity; and (3) BAM-specific binomial mixture modeling (BBMM), a clustering algorithm that generates robust genotype likelihoods from heterogeneous sequencing data. Last, we develop an imputation engine that refines raw genotype likelihoods to produce high-quality phased genotypes/haplotypes. Designed for large population studies, SNPTools' input/output (I/O) and storage aware design leads to improved computing performance on large sequencing data sets. We apply SNPTools to the International 1000 Genomes Project (1000G) Phase 1 low-coverage data set and obtain genotyping accuracy comparable to that of SNP microarray. PMID:23296920

  18. PSI: A Comprehensive and Integrative Approach for Accurate Plant Subcellular Localization Prediction

    PubMed Central

    Chen, Ming

    2013-01-01

    Predicting the subcellular localization of proteins conquers the major drawbacks of high-throughput localization experiments that are costly and time-consuming. However, current subcellular localization predictors are limited in scope and accuracy. In particular, most predictors perform well on certain locations or with certain data sets while poorly on others. Here, we present PSI, a novel high accuracy web server for plant subcellular localization prediction. PSI derives the wisdom of multiple specialized predictors via a joint-approach of group decision making strategy and machine learning methods to give an integrated best result. The overall accuracy obtained (up to 93.4%) was higher than best individual (CELLO) by ∼10.7%. The precision of each predicable subcellular location (more than 80%) far exceeds that of the individual predictors. It can also deal with multi-localization proteins. PSI is expected to be a powerful tool in protein location engineering as well as in plant sciences, while the strategy employed could be applied to other integrative problems. A user-friendly web server, PSI, has been developed for free access at http://bis.zju.edu.cn/psi/. PMID:24194827
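The group decision making strategy can be reduced, in its simplest form, to majority voting over the individual predictors' calls. The sketch below is a minimal stand-in, not PSI's actual combination rule (which also involves machine learning), and the labels are hypothetical:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine subcellular location calls from several predictors by
    simple majority; ties go to the earliest-seen label."""
    return Counter(predictions).most_common(1)[0][0]

# Three hypothetical predictors voting on one protein:
call = majority_vote(["chloroplast", "cytosol", "chloroplast"])
```

Weighting each predictor by its per-location accuracy, as an integrative server can do, is the natural refinement of this baseline.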

  19. Accurate and efficient integration for molecular dynamics simulations at constant temperature and pressure.

    PubMed

    Lippert, Ross A; Predescu, Cristian; Ierardi, Douglas J; Mackenzie, Kenneth M; Eastwood, Michael P; Dror, Ron O; Shaw, David E

    2013-10-28

    In molecular dynamics simulations, control over temperature and pressure is typically achieved by augmenting the original system with additional dynamical variables to create a thermostat and a barostat, respectively. These variables generally evolve on timescales much longer than those of particle motion, but typical integrator implementations update the additional variables along with the particle positions and momenta at each time step. We present a framework that replaces the traditional integration procedure with separate barostat, thermostat, and Newtonian particle motion updates, allowing thermostat and barostat updates to be applied infrequently. Such infrequent updates provide a particularly substantial performance advantage for simulations parallelized across many computer processors, because thermostat and barostat updates typically require communication among all processors. Infrequent updates can also improve accuracy by alleviating certain sources of error associated with limited-precision arithmetic. In addition, separating the barostat, thermostat, and particle motion update steps reduces certain truncation errors, bringing the time-average pressure closer to its target value. Finally, this framework, which we have implemented on both general-purpose and special-purpose hardware, reduces software complexity and improves software modularity.
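The idea of applying thermostat updates only infrequently can be sketched on a toy system. The example below is not the authors' scheme: it uses simple velocity rescaling on independent unit-mass harmonic oscillators, applying the thermostat every 50 steps rather than at every step, yet the ensemble still holds its target temperature:

```python
import math
import random

def simulate(n_particles=200, n_steps=1000, dt=0.01, t_target=1.0,
             thermo_interval=50):
    """Velocity-Verlet for independent unit-mass harmonic oscillators,
    with a velocity-rescaling thermostat applied only every
    `thermo_interval` steps instead of at every step."""
    random.seed(0)
    x = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    v = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    for step in range(n_steps):
        for i in range(n_particles):        # velocity Verlet, force = -x
            v[i] += 0.5 * dt * (-x[i])
            x[i] += dt * v[i]
            v[i] += 0.5 * dt * (-x[i])
        if step % thermo_interval == 0:     # infrequent thermostat update
            ke = 0.5 * sum(vi * vi for vi in v) / n_particles
            scale = math.sqrt(t_target / (2.0 * ke))
            v = [vi * scale for vi in v]
    return 0.5 * sum(vi * vi for vi in v) / n_particles

mean_ke = simulate()   # stays near the 1D equipartition value kT/2 = 0.5
```

In a parallel MD code the rescaling step is the one requiring global communication, which is why making it infrequent pays off.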

  20. Numerical simulation of scattering of acoustic waves by inelastic bodies using hypersingular boundary integral equation

    SciTech Connect

    Daeva, S.G.; Setukha, A.V.

    2015-03-10

    A numerical method is proposed for solving a problem of diffraction of acoustic waves by a system of solid and thin objects, based on reducing the problem to a boundary integral equation in which the integral is understood in the sense of the Hadamard finite part. To solve this equation we applied a numerical scheme based on piecewise-constant approximations and the collocation method. The difference between the constructed scheme and earlier known ones lies in obtaining approximate analytical expressions for the coefficients of the resulting system of linear equations by separating the main part of the kernel of the integral operator. The proposed numerical scheme is tested on the solution of the model problem of diffraction of an acoustic wave by an inelastic sphere.

  1. Integrated numerical methods for hypersonic aircraft cooling systems analysis

    NASA Technical Reports Server (NTRS)

    Petley, Dennis H.; Jones, Stuart C.; Dziedzic, William M.

    1992-01-01

    Numerical methods have been developed for the analysis of hypersonic aircraft cooling systems. A general purpose finite difference thermal analysis code is used to determine areas which must be cooled. Complex cooling networks of series and parallel flow can be analyzed using a finite difference computer program. Both internal fluid flow and heat transfer are analyzed, because increased heat flow causes a decrease in the flow of the coolant. The steady-state solution is obtained by a successive point iterative method. The transient analysis uses implicit forward-backward differencing. Several examples of the use of the program in studies of hypersonic aircraft and rockets are provided.
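The successive point iteration for the steady state can be illustrated on the simplest thermal analogue: 1D steady heat conduction with fixed end temperatures, where each node is repeatedly replaced by the average of its neighbours (a Gauss-Seidel sweep). This is a sketch of the iteration style, not the NASA code, and all values are illustrative:

```python
def steady_temperature(n=21, t_left=300.0, t_right=400.0, tol=1e-10):
    """Successive point iteration (Gauss-Seidel) for the steady 1D heat
    equation d^2 T/dx^2 = 0 with fixed end temperatures: each interior
    node is repeatedly replaced by the average of its neighbours."""
    t = [t_left] + [0.0] * (n - 2) + [t_right]
    while True:
        max_change = 0.0
        for i in range(1, n - 1):
            new = 0.5 * (t[i - 1] + t[i + 1])
            max_change = max(max_change, abs(new - t[i]))
            t[i] = new
        if max_change < tol:
            return t

t = steady_temperature()
# The converged profile is linear between the two boundary temperatures.
```

A cooling-network solver iterates the same way over coupled flow and temperature unknowns instead of a single scalar field.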

  2. iCut: an Integrative Cut Algorithm Enables Accurate Segmentation of Touching Cells

    PubMed Central

    He, Yong; Gong, Hui; Xiong, Benyi; Xu, Xiaofeng; Li, Anan; Jiang, Tao; Sun, Qingtao; Wang, Simin; Luo, Qingming; Chen, Shangbin

    2015-01-01

    Individual cells play essential roles in the biological processes of the brain. The number of neurons changes during both normal development and disease progression. High-resolution imaging has made it possible to directly count cells. However, the automatic and precise segmentation of touching cells continues to be a major challenge for massive and highly complex datasets. Thus, an integrative cut (iCut) algorithm, which combines information regarding spatial location and intervening and concave contours with the established normalized cut, has been developed. iCut involves two key steps: (1) a weighting matrix is first constructed with the abovementioned information regarding the touching cells and (2) a normalized cut algorithm that uses the weighting matrix is implemented to separate the touching cells into isolated cells. This novel algorithm was evaluated using two types of data: the open SIMCEP benchmark dataset and our micro-optical imaging dataset from a Nissl-stained mouse brain. It has achieved a promising recall/precision of 91.2 ± 2.1%/94.1 ± 1.8% and 86.8 ± 4.1%/87.5 ± 5.7%, respectively, for the two datasets. As quantified using the harmonic mean of recall and precision, the accuracy of iCut is higher than that of some state-of-the-art algorithms. The better performance of this fully automated algorithm can benefit studies of brain cytoarchitecture. PMID:26168908

  3. Accurate Prediction of Transposon-Derived piRNAs by Integrating Various Sequential and Physicochemical Features

    PubMed Central

    Luo, Longqiang; Li, Dingfang; Zhang, Wen; Tu, Shikui; Zhu, Xiaopeng; Tian, Gang

    2016-01-01

    Background: Piwi-interacting RNA (piRNA) is the largest class of small non-coding RNA molecules. The transposon-derived piRNA prediction can enrich the research contents of small ncRNAs as well as help to further understand the generation mechanism of gametes. Methods: In this paper, we attempt to differentiate transposon-derived piRNAs from non-piRNAs based on their sequential and physicochemical features by using machine learning methods. We explore six sequence-derived features, i.e. spectrum profile, mismatch profile, subsequence profile, position-specific scoring matrix, pseudo dinucleotide composition and local structure-sequence triplet elements, and systematically evaluate their performances for transposon-derived piRNA prediction. Finally, we consider two approaches: direct combination and ensemble learning to integrate useful features and achieve high-accuracy prediction models. Results: We construct three datasets, covering three species: Human, Mouse and Drosophila, and evaluate the performances of prediction models by 10-fold cross validation. In the computational experiments, direct combination models achieve AUC of 0.917, 0.922 and 0.992 on Human, Mouse and Drosophila, respectively; ensemble learning models achieve AUC of 0.922, 0.926 and 0.994 on the three datasets. Conclusions: Compared with other state-of-the-art methods, our methods can lead to better performances. In conclusion, the proposed methods are promising for the transposon-derived piRNA prediction. The source codes and datasets are available in S1 File. PMID:27074043
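Of the sequence-derived features listed above, the spectrum profile is the simplest: a normalized k-mer frequency vector. A minimal sketch follows (the RNA alphabet and the normalization are assumptions for illustration, not details taken from the paper):

```python
from itertools import product

def spectrum_profile(seq, k=2):
    """Normalized k-mer frequency vector ('spectrum profile') of an RNA
    sequence over the alphabet ACGU."""
    kmers = ["".join(p) for p in product("ACGU", repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    total = max(len(seq) - k + 1, 1)
    return [counts[km] / total for km in kmers]

profile = spectrum_profile("ACGUACGU")   # a 16-dimensional feature vector
```

Each sequence becomes a fixed-length vector this way, which is what allows standard classifiers to be trained on it.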

  4. AN ACCURATE ORBITAL INTEGRATOR FOR THE RESTRICTED THREE-BODY PROBLEM AS A SPECIAL CASE OF THE DISCRETE-TIME GENERAL THREE-BODY PROBLEM

    SciTech Connect

    Minesaki, Yukitaka

    2013-08-01

    For the restricted three-body problem, we propose an accurate orbital integration scheme that retains all conserved quantities of the two-body problem with two primaries and approximately preserves the Jacobi integral. The scheme is obtained by taking the limit as mass approaches zero in the discrete-time general three-body problem. For a long time interval, the proposed scheme precisely reproduces various periodic orbits that cannot be accurately computed by other generic integrators.
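Although the paper's scheme is specific to the discrete-time three-body problem, the behavior it targets, retaining conserved quantities over long integrations, is easy to demonstrate with a generic symplectic integrator. The sketch below (not the paper's method) applies kick-drift-kick leapfrog to planar two-body motion with GM = 1; the energy error stays bounded instead of drifting:

```python
import math

def leapfrog_kepler(x, y, vx, vy, dt, n_steps):
    """Kick-drift-kick leapfrog for planar two-body motion (GM = 1).
    Being symplectic, it keeps the energy error bounded over long runs."""
    def acc(x, y):
        r3 = (x * x + y * y) ** 1.5
        return -x / r3, -y / r3
    ax, ay = acc(x, y)
    for _ in range(n_steps):
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        x += dt * vx
        y += dt * vy
        ax, ay = acc(x, y)
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
    return x, y, vx, vy

def energy(x, y, vx, vy):
    return 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y)

# Circular orbit of radius 1; integrate for roughly ten revolutions.
e0 = energy(1.0, 0.0, 0.0, 1.0)
x, y, vx, vy = leapfrog_kepler(1.0, 0.0, 0.0, 1.0, dt=0.01, n_steps=6283)
```

The paper's integrator goes further, preserving all two-body conserved quantities exactly rather than merely bounding their errors.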

  5. Many Is Better Than One: An Integration of Multiple Simple Strategies for Accurate Lung Segmentation in CT Images.

    PubMed

    Shi, Zhenghao; Ma, Jiejue; Zhao, Minghua; Liu, Yonghong; Feng, Yaning; Zhang, Ming; He, Lifeng; Suzuki, Kenji

    2016-01-01

    Accurate lung segmentation is an essential step in developing a computer-aided lung disease diagnosis system. However, because of the high variability of computerized tomography (CT) images, it remains a difficult task to accurately segment lung tissue in CT slices using a simple strategy. Motivated by the aforementioned, a novel CT lung segmentation method based on the integration of multiple strategies was proposed in this paper. Firstly, in order to avoid noise, the input CT slice was smoothed using the guided filter. Then, the smoothed slice was transformed into a binary image using an optimized threshold. Next, a region growing strategy was employed to extract thorax regions. Then, lung regions were segmented from the thorax regions using a seed-based random walk algorithm. The segmented lung contour was then smoothed and corrected with a curvature-based correction method on each axis slice. Finally, with the lung masks, the lung region was automatically segmented from a CT slice. The proposed method was validated on a CT database consisting of 23 scans, including a number of 883 2D slices (the number of slices per scan is 38 slices), by comparing it to the commonly used lung segmentation method. Experimental results show that the proposed method accurately segmented lung regions in CT slices. PMID:27635395

  7. Theoretical study of the partial derivatives produced by numerical integration of satellite orbits.

    NASA Astrophysics Data System (ADS)

    Hadjifotinou, K. G.; Ichtiaroglou, S.

    1997-06-01

    For the two-body system Saturn-Mimas and the theoretical three-body non-resonant system Saturn-Mimas-Tethys we present a theoretical analysis of the behaviour of the partial derivatives of the satellites' coordinates with respect to the parameters of the system, namely the satellites' initial conditions and their mass ratios over Saturn. With the use of Floquet theory for the stability of periodic orbits we prove that all the partial derivatives have amplitudes that increase linearly with time. Their motion is a combination of periodic motions whose periods can also be accurately predicted by the theory. This theoretical model can be used for checking the accuracy of the results of the different numerical integration methods used on satellite systems with the purpose of fitting the results to observations or analytical theories. On this basis, in the last part of the paper we extend the investigation of Hadjifotinou & Harper (1995A&A...303..940H) on the stability and efficiency of the 10th-order Gauss-Jackson backward difference and the Runge-Kutta-Nystroem RKN12(10)17M methods by applying them to the above-mentioned three-body system.

  8. An Integrated Numerical Hydrodynamic Shallow Flow-Solute Transport Model for Urban Area

    NASA Astrophysics Data System (ADS)

    Alias, N. A.; Mohd Sidek, L.

    2016-03-01

    The rapidly changing land profiles in some urban areas in Malaysia have led to increasing flood risk. Extensive development in densely populated areas and urbanization worsen the flood scenario. An early warning system is very important, and a popular method is to numerically simulate the river and flood flows. There are many two-dimensional (2D) flood models for predicting the flood level, but in some circumstances it is still difficult to resolve the river reach in a 2D manner. A systematic early warning system requires a precise prediction of flow depth. Hence a reliable one-dimensional (1D) model that provides an accurate description of the flow is essential. The research also aims to resolve some raised issues, such as the fate of pollutants in the river reach, by developing an integrated hydrodynamic shallow flow-solute transport model. Presented in this paper are results on flow prediction for Sungai Penchala and the convection-diffusion of solute transport simulated by the developed model.
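The solute-transport part of such a 1D model is a convection-diffusion equation. A minimal explicit sketch follows (first-order upwind convection plus central diffusion; the grid, velocity, and diffusivity are illustrative, not values from the paper):

```python
def advect_diffuse(c, u, d, dx, dt, n_steps):
    """Explicit 1D solute transport, c_t + u*c_x = d*c_xx (u > 0 assumed):
    first-order upwind convection plus central-difference diffusion.
    Stability requires u*dt/dx <= 1 and d*dt/dx**2 <= 0.5."""
    c = list(c)
    for _ in range(n_steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            conv = -u * (c[i] - c[i - 1]) / dx
            diff = d * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx ** 2
            new[i] = c[i] + dt * (conv + diff)
        c = new
    return c

# A unit concentration pulse is carried downstream and smeared out.
c0 = [0.0] * 50
c0[10] = 1.0
c = advect_diffuse(c0, u=1.0, d=0.01, dx=1.0, dt=0.5, n_steps=20)
```

In an integrated model these transport equations are solved alongside the shallow-water hydrodynamics that supply the velocity field u.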

  9. iPE-MMR: An integrated approach to accurately assign monoisotopic precursor masses to tandem mass spectrometric data

    PubMed Central

    Jung, Hee-Jung; Purvine, Samuel O.; Kim, Hokeun; Petyuk, Vladislav A.; Hyung, Seok-Won; Monroe, Matthew E.; Mun, Dong-Gi; Kim, Kyong-Chul; Park, Jong-Moon; Kim, Su-Jin; Tolic, Nikola; Slysz, Gordon W.; Moore, Ronald J.; Zhao, Rui; Adkins, Joshua N.; Anderson, Gordon A.; Lee, Hookeun; Camp, David G.; Yu, Myeong-Hee; Smith, Richard D.; Lee, Sang-Won

    2010-01-01

    Accurate assignment of monoisotopic precursor masses to tandem mass spectrometric (MS/MS) data is a fundamental and critically important step for successful peptide identifications in mass spectrometry based proteomics. Here we describe an integrated approach that combines three previously reported methods of treating MS/MS data for precursor mass refinement. This combined method, “integrated Post-Experiment Monoisotopic Mass Refinement” (iPE-MMR), integrates the following steps: 1) generation of refined MS/MS data by DeconMSn; 2) additional refinement of the resultant MS/MS data by a modified version of PE-MMR; 3) elimination of systematic errors of precursor masses using DtaRefinery. iPE-MMR is the first method that utilizes all MS information from multiple MS scans of a precursor ion, including multiple charge states in an MS scan, to determine precursor mass. By combining these methods, iPE-MMR increases sensitivity in peptide identification and provides increased accuracy when applied to complex high-throughput proteomics data. PMID:20863060
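Steps 1)-3) above refine the measured precursor m/z; the neutral monoisotopic mass then follows from the elementary charge-state relation M = z·(m/z) - z·m_proton. A sketch with a hypothetical doubly charged precursor (the m/z value is invented for illustration):

```python
PROTON_MASS = 1.00727646688  # mass of a proton in daltons

def precursor_mass(mz, charge):
    """Neutral monoisotopic mass from a precursor m/z and charge state:
    M = (m/z)*z - z*m_proton."""
    return mz * charge - charge * PROTON_MASS

# Hypothetical doubly charged precursor observed at m/z 785.8421:
m = precursor_mass(785.8421, 2)   # about 1569.6696 Da
```

A ppm-level error in the refined m/z scales by the charge state, which is why systematic-error elimination matters for high charge states.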

  10. Numerical solution of a class of integral equations arising in two-dimensional aerodynamics

    NASA Technical Reports Server (NTRS)

    Fromme, J.; Golberg, M. A.

    1978-01-01

    We consider the numerical solution of a class of integral equations arising in the determination of the compressible flow about a thin airfoil in a ventilated wind tunnel. The integral equations are of the first kind with kernels having a Cauchy singularity. Using appropriately chosen Hilbert spaces, it is shown that the kernel gives rise to a mapping which is the sum of a unitary operator and a compact operator. This allows the problem to be studied in terms of an equivalent integral equation of the second kind. A convergent numerical algorithm for its solution is derived by using Galerkin's method. It is shown that this algorithm is numerically equivalent to Bland's collocation method, which is then used as the method of computation. Extensive numerical calculations are presented establishing the validity of the theory.
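
    The reduction to a second-kind equation is what makes a stable numerical treatment possible. As an illustrative sketch (with a made-up smooth kernel and manufactured data, not the Cauchy-singular aerodynamic kernel of the paper), a Nyström discretization of a Fredholm equation of the second kind looks like:

```python
import numpy as np

# Nystrom solution of f(x) - int_0^1 k(x,t) f(t) dt = g(x),
# using Gauss-Legendre quadrature mapped to [0, 1].
k = lambda x, t: 0.5 * np.exp(-np.abs(x - t))   # illustrative kernel
f_true = lambda x: np.cos(np.pi * x)

n = 40
t, w = np.polynomial.legendre.leggauss(n)
t = 0.5 * (t + 1.0)
w = 0.5 * w

# manufacture g so that the exact solution is f_true
K = k(t[:, None], t[None, :]) * w               # kernel times weights
g = f_true(t) - K @ f_true(t)

# solve the dense linear system (I - K W) f = g at the nodes
f = np.linalg.solve(np.eye(n) - K, g)
print(np.max(np.abs(f - f_true(t))))  # machine-precision recovery
```

The Galerkin method of the paper projects onto basis functions instead of collocating at quadrature nodes, but the resulting linear-algebra structure is the same.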

  11. Integrated numeric and symbolic signal processing using a heterogeneous design environment

    NASA Astrophysics Data System (ADS)

    Mani, Ramamurthy; Nawab, S. Hamid; Winograd, Joseph M.; Evans, Brian L.

    1996-10-01

    We present a solution to a complex multi-tone transient detection problem to illustrate the integrated use of symbolic and numeric processing techniques which are supported by well-established underlying models. Examples of such models include synchronous dataflow for numeric processing and the blackboard paradigm for symbolic heuristic search. Our transient detection solution serves to emphasize the importance of developing system design methods and tools which can support the integrated use of well-established symbolic and numerical models of computation. Recently, we incorporated a blackboard-based model of computation underlying the Integrated Processing and Understanding of Signals (IPUS) paradigm into a system-level design environment for numeric processing called Ptolemy. Using the IPUS/Ptolemy environment, we are implementing our solution to the multi-tone transient detection problem.

  12. A novel, integrated PET-guided MRS technique resulting in more accurate initial diagnosis of high-grade glioma.

    PubMed

    Kim, Ellen S; Satter, Martin; Reed, Marilyn; Fadell, Ronald; Kardan, Arash

    2016-06-01

    Glioblastoma multiforme (GBM) is the most common and lethal malignant glioma in adults. Currently, the modality of choice for diagnosing brain tumor is high-resolution magnetic resonance imaging (MRI) with contrast, which provides anatomic detail and localization. Studies have demonstrated, however, that MRI may have limited utility in delineating the full tumor extent precisely. Studies suggest that MR spectroscopy (MRS) can also be used to distinguish high-grade from low-grade gliomas. However, due to operator dependent variables and the heterogeneous nature of gliomas, the potential for error in diagnostic accuracy with MRS is a concern. Positron emission tomography (PET) imaging with (11)C-methionine (MET) and (18)F-fluorodeoxyglucose (FDG) has been shown to add additional information with respect to tumor grade, extent, and prognosis based on the premise of biochemical changes preceding anatomic changes. Combined PET/MRS is a technique that integrates information from PET in guiding the location for the most accurate metabolic characterization of a lesion via MRS. We describe a case of glioblastoma multiforme in which MRS was initially non-diagnostic for malignancy, but when MRS was repeated with PET guidance, demonstrated elevated choline/N-acetylaspartate (Cho/NAA) ratio in the right parietal mass consistent with a high-grade malignancy. Stereotactic biopsy, followed by PET image-guided resection, confirmed the diagnosis of grade IV GBM. To our knowledge, this is the first reported case of an integrated PET/MRS technique for the voxel placement of MRS. Our findings suggest that integrated PET/MRS may potentially improve diagnostic accuracy in high-grade gliomas.

  13. Controlled time integration for the numerical simulation of meteor radar reflections

    NASA Astrophysics Data System (ADS)

    Räbinä, Jukka; Mönkölä, Sanna; Rossi, Tuomo; Markkanen, Johannes; Gritsevich, Maria; Muinonen, Karri

    2016-07-01

    We model meteoroids entering the Earth's atmosphere as objects surrounded by non-magnetized plasma, and consider efficient numerical simulation of radar reflections from meteors in the time domain. Instead of the widely used finite difference time domain method (FDTD), we use more generalized finite differences by applying the discrete exterior calculus (DEC) and non-uniform leapfrog-style time discretization. The computational domain is represented by convex polyhedral elements. The convergence of the time integration is accelerated by the exact controllability method. The numerical experiments show that our code is efficiently parallelized. The DEC approach is compared to the volume integral equation (VIE) method by numerical experiments. The result is that both methods are competitive in modelling non-magnetized plasma scattering. For demonstrating the simulation capabilities of the DEC approach, we present numerical experiments of radar reflections and vary parameters in a wide range.

  14. A comparison of the efficiency of numerical methods for integrating chemical kinetic rate equations

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    1984-01-01

    The efficiency of several algorithms used for the numerical integration of stiff ordinary differential equations was compared. The methods examined included two general-purpose codes, EPISODE and LSODE, and three codes (CHEMEQ, CREK1D and GCKP84) developed specifically to integrate chemical kinetic rate equations. The codes were applied to two test problems drawn from combustion kinetics. The comparisons show that LSODE is the fastest code available for the integration of combustion kinetic rate equations. It is shown that an iterative solution of the algebraic energy conservation equation to compute the temperature can be more efficient than evaluating the temperature by integrating its time-derivative.

  15. Orbit determination based on meteor observations using numerical integration of equations of motion

    NASA Astrophysics Data System (ADS)

    Dmitriev, Vasily; Lupovka, Valery; Gritsevich, Maria

    2015-11-01

    Recently, there has been a worldwide proliferation of instruments and networks dedicated to observing meteors, including airborne and future space-based monitoring systems. There has been a corresponding rapid rise in the volume of high-quality data accumulating annually. In this paper, we present a method embodied in the open-source software program "Meteor Toolkit", which can effectively and accurately process these data in an automated mode and discover the pre-impact orbit and possibly the origin or parent body of a meteoroid or asteroid. The required input parameters are the topocentric pre-atmospheric velocity vector and the coordinates of the atmospheric entry point of the meteoroid, i.e. the beginning point of the visual path of a meteor, in an Earth centered-Earth fixed coordinate system, the International Terrestrial Reference Frame (ITRF). Our method is based on strict coordinate transformation from the ITRF to an inertial reference frame and on numerical integration of the equations of motion for a perturbed two-body problem. Basic accelerations perturbing a meteoroid's orbit and their influence on the orbital elements are also studied and demonstrated. Our method is then compared with several published studies that utilized variations of a traditional analytical technique, the zenith attraction method, which corrects for the direction of the meteor's trajectory and its apparent velocity due to Earth's gravity. We then demonstrate the proposed technique on new observational data obtained from the Finnish Fireball Network (FFN) as well as on simulated data. In addition, we propose a method of analysis of error propagation, based on the general rule of covariance transformation.
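
    The propagation step of such a method can be sketched with a standard ODE solver. The fragment below integrates the unperturbed two-body problem for one circular orbit; in the paper's setting the perturbing accelerations (J2, luni-solar, drag, etc.) would be summed inside the right-hand side. The initial state is illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

mu = 398600.4418  # km^3 s^-2, Earth's gravitational parameter

def rhs(t, y):
    r, v = y[:3], y[3:]
    a = -mu * r / np.linalg.norm(r) ** 3   # central gravity
    # (perturbing accelerations would be added to a here)
    return np.concatenate([v, a])

r0 = np.array([7000.0, 0.0, 0.0])                 # km
v0 = np.array([0.0, np.sqrt(mu / 7000.0), 0.0])   # circular velocity
T = 2 * np.pi * np.sqrt(7000.0 ** 3 / mu)         # one orbital period

sol = solve_ivp(rhs, (0.0, T), np.concatenate([r0, v0]),
                method="DOP853", rtol=1e-12, atol=1e-12)
closure = np.linalg.norm(sol.y[:3, -1] - r0)
print(closure)  # km; the orbit closes to high precision after one period
```

A tight-tolerance, high-order integrator is used here because orbit determination error budgets are dominated by the propagation step.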

  16. Feasibility study of the numerical integration of shell equations using the field method

    NASA Technical Reports Server (NTRS)

    Cohen, G. A.

    1973-01-01

    The field method is developed for arbitrary open branch domains subjected to general linear boundary conditions. Although closed branches are within the scope of the method, they are not treated here. The numerical feasibility of the method has been demonstrated by implementing it in a computer program for the linear static analysis of open branch shells of revolution under asymmetric loads. For such problems the field method eliminates the well-known numerical problem of long subintervals associated with the rapid growth of extraneous solutions. Also, the method appears to execute significantly faster than other numerical integration methods.

  17. Integrated Post-Experiment Monoisotopic Mass Refinement: An Integrated Approach to Accurately Assign Monoisotopic Precursor Masses to Tandem Mass Spectrometric Data

    SciTech Connect

    Jung, Hee-Jung; Purvine, Samuel O.; Kim, Hokeun; Petyuk, Vladislav A.; Hyung, Seok-Won; Monroe, Matthew E.; Mun, Dong-Gi; Kim, Kyong-Chul; Park, Jong-Moon; Kim, Su-Jin; Tolic, Nikola; Slysz, Gordon W.; Moore, Ronald J.; Zhao, Rui; Adkins, Joshua N.; Anderson, Gordon A.; Lee, Hookeun; Camp, David G.; Yu, Myeong-Hee; Smith, Richard D.; Lee, Sang-Won

    2010-10-15

    Accurate assignment of monoisotopic precursor masses to tandem mass spectrometric (MS/MS) data is a fundamental and critically important step for successful peptide identifications in mass spectrometry-based proteomics. Here we describe an integrated approach that combines three previously reported methods of treating MS/MS data for precursor mass refinement. This combined method, “integrated Post-Experiment Monoisotopic Mass Refinement” (iPE-MMR), integrates three steps: 1) generation of refined MS/MS data by DeconMSn, 2) additional refinement of the resultant MS/MS data by a modified version of PE-MMR, and 3) elimination of systematic errors of precursor masses using DtaRefinery. iPE-MMR is the first method that utilizes all MS information from multiple MS scans of a precursor ion, including its multiple charge states within an MS scan, to determine the precursor mass. By combining the synergistic features of each method, iPE-MMR increases sensitivity in peptide identification and provides increased accuracy when applied to complex high-throughput proteomics data. iPE-MMR also allows incorporating additional data processing step(s), or skipping step(s) if necessary, to enable new developments or applications of the tools, as each step of iPE-MMR produces output data in a common and conventional format used in proteomics data processing.

  18. Integration of multi-modality imaging for accurate 3D reconstruction of human coronary arteries in vivo

    NASA Astrophysics Data System (ADS)

    Giannoglou, George D.; Chatzizisis, Yiannis S.; Sianos, George; Tsikaderis, Dimitrios; Matakos, Antonis; Koutkias, Vassilios; Diamantopoulos, Panagiotis; Maglaveras, Nicos; Parcharidis, George E.; Louridas, George E.

    2006-12-01

    In conventional intravascular ultrasound (IVUS)-based three-dimensional (3D) reconstruction of human coronary arteries, IVUS images are arranged linearly generating a straight vessel volume. However, with this approach real vessel curvature is neglected. To overcome this limitation an imaging method was developed based on integration of IVUS and biplane coronary angiography (BCA). In 17 coronary arteries from nine patients, IVUS and BCA were performed. From each angiographic projection, a single end-diastolic frame was selected and in each frame the IVUS catheter was interactively detected for the extraction of 3D catheter path. Ultrasound data was obtained with a sheath-based catheter and recorded on S-VHS videotape. S-VHS data was digitized and lumen and media-adventitia contours were semi-automatically detected in end-diastolic IVUS images. Each pair of contours was aligned perpendicularly to the catheter path and rotated in space by implementing an algorithm based on Frenet-Serret rules. Lumen and media-adventitia contours were interpolated through generation of intermediate contours creating a real 3D lumen and vessel volume, respectively. The absolute orientation of the reconstructed lumen was determined by back-projecting it onto both angiographic planes and comparing the projected lumen with the actual angiographic lumen. In conclusion, our method is capable of performing rapid and accurate 3D reconstruction of human coronary arteries in vivo. This technique can be utilized for reliable plaque morphometric, geometrical and hemodynamic analyses.

  19. Accurate path integral molecular dynamics simulation of ab-initio water at near-zero added cost

    NASA Astrophysics Data System (ADS)

    Elton, Daniel; Fritz, Michelle; Soler, José; Fernandez-Serra, Marivi

    It is now established that nuclear quantum motion plays an important role in determining water's structure and dynamics. These effects are important to consider when evaluating DFT functionals and attempting to develop better ones for water. The standard way of treating nuclear quantum effects, path integral molecular dynamics (PIMD), multiplies the number of energy/force calculations by the number of beads, which is typically 32. Here we introduce a method whereby PIMD can be incorporated into a DFT molecular dynamics simulation at virtually zero cost. The method is based on the cluster (many body) expansion of the energy. We first subtract the DFT monomer energies, using a custom DFT-based monomer potential energy surface. The evolution of the PIMD beads is then performed using only the more accurate Partridge-Schwenke monomer energy surface. The DFT calculations are done using the centroid positions. Various bead thermostats can be employed to speed up the sampling of the quantum ensemble. The method bears some resemblance to multiple timestep algorithms and other schemes used to speed up PIMD with classical force fields. We show that our method correctly captures some of the key effects of nuclear quantum motion on both the structure and dynamics of water. We acknowledge support from DOE Award No. DE-FG02-09ER16052 (D.E.) and DOE Early Career Award No. DE-SC0003871 (M.V.F.S.).

  20. Abstract Applets: A Method for Integrating Numerical Problem Solving into the Undergraduate Physics Curriculum

    SciTech Connect

    Peskin, Michael E

    2003-02-13

    In upper-division undergraduate physics courses, it is desirable to give numerical problem-solving exercises integrated naturally into weekly problem sets. I explain a method for doing this that makes use of the built-in class structure of the Java programming language. I also supply a Java class library that can assist instructors in writing programs of this type.

  1. An efficient exponential time integration method for the numerical solution of the shallow water equations on the sphere

    NASA Astrophysics Data System (ADS)

    Gaudreault, Stéphane; Pudykiewicz, Janusz A.

    2016-10-01

    The exponential propagation methods were applied in the past for accurate integration of the shallow water equations on the sphere. Despite obvious advantages related to the exact solution of the linear part of the system, their use for the solution of practical problems in geophysics has been limited because efficiency of the traditional algorithm for evaluating the exponential of Jacobian matrix is inadequate. In order to circumvent this limitation, we modify the existing scheme by using the Incomplete Orthogonalization Method instead of the Arnoldi iteration. We also propose a simple strategy to determine the initial size of the Krylov space using information from previous time instants. This strategy is ideally suited for the integration of fluid equations where the structure of the system Jacobian does not change rapidly between the subsequent time steps. A series of standard numerical tests performed with the shallow water model on a geodesic icosahedral grid shows that the new scheme achieves efficiency comparable to the semi-implicit methods. This fact, combined with the accuracy and the mass conservation of the exponential propagation scheme, makes the presented method a good candidate for solving many practical problems, including numerical weather prediction.
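
    The kernel operation of any exponential propagation scheme is approximating exp(A)b on a Krylov subspace. The sketch below uses the plain Arnoldi process (the paper replaces this with the Incomplete Orthogonalization Method); the test matrix is illustrative, standing in for a discretized Jacobian with a dissipative spectrum:

```python
import numpy as np
from scipy.linalg import expm

# Arnoldi/Krylov approximation of exp(A) b via a small projected matrix H.
def krylov_expm(A, b, m):
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = w @ V[:, i]
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    # exp(A) b ~= beta * V_m exp(H_m) e_1
    return beta * V[:, :m] @ expm(H[:m, :m])[:, 0]

rng = np.random.default_rng(0)
A = -np.diag(np.linspace(0.1, 10.0, 100)) + 0.01 * rng.standard_normal((100, 100))
b = rng.standard_normal(100)
err = np.linalg.norm(krylov_expm(A, b, 30) - expm(A) @ b)
print(err)  # a 30-dimensional Krylov space already captures exp(A) b
```

The point exploited by the paper is that the cost lives in building the Krylov basis, which is why cheaper orthogonalization and a well-chosen initial subspace size pay off.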

  2. A novel stress-accurate FE technology for highly non-linear analysis with incompressibility constraint. Application to the numerical simulation of the FSW process

    NASA Astrophysics Data System (ADS)

    Chiumenti, M.; Cervera, M.; Agelet de Saracibar, C.; Dialami, N.

    2013-05-01

    In this work a novel finite element technology based on a three-field mixed formulation is presented. The Variational Multi Scale (VMS) method is used to circumvent the LBB stability condition, allowing the use of linear piece-wise interpolations for the displacement, stress and pressure fields, respectively. The result is an enhanced stress field approximation which enables stress-accurate results in nonlinear computational mechanics. The use of an independent nodal variable for the pressure field allows for an ad hoc treatment of the incompressibility constraint. This is a mandatory requirement due to the isochoric nature of the plastic strain in metal forming processes. The highly non-linear stress field typically encountered in the Friction Stir Welding (FSW) process is used as an example to show the performance of this new FE technology. The numerical simulation of the FSW process is tackled by means of an Arbitrary-Lagrangian-Eulerian (ALE) formulation. The computational domain is split into three different zones: the workpiece (defined by a rigid visco-plastic behaviour in the Eulerian framework), the pin (within the Lagrangian framework) and finally the stir zone (ALE formulation). A fully coupled thermo-mechanical analysis is introduced showing the heat fluxes generated by the plastic dissipation in the stir zone (Sheppard rigid-viscoplastic constitutive model) as well as the frictional dissipation at the contact interface (Norton frictional contact model). Finally, tracers have been implemented to show the material flow around the pin, allowing a better understanding of the welding mechanism. Numerical results are compared with experimental evidence.

  3. The Fourier transform method and the SD-bar approach for the analytical and numerical treatment of multicenter overlap-like quantum similarity integrals

    SciTech Connect

    Safouhi, Hassan . E-mail: hassan.safouhi@ualberta.ca; Berlu, Lilian

    2006-07-20

    Molecular overlap-like quantum similarity measurements imply the evaluation of overlap integrals of two molecular electronic densities related by the Dirac delta function. When the electronic densities are expanded over atomic orbitals using the usual LCAO-MO approach (linear combination of atomic orbitals), overlap-like quantum similarity integrals can be expressed in terms of four-center overlap integrals. It is shown that by introducing the Fourier transform of the Dirac delta function in the integrals and using the Fourier transform approach combined with the so-called B functions, one can obtain analytic expressions of the integrals under consideration. These analytic expressions involve highly oscillatory semi-infinite spherical Bessel functions, which are the principal source of severe numerical and computational difficulties. In this work, we present a highly efficient algorithm for a fast and accurate numerical evaluation of these multicenter overlap-like quantum similarity integrals over Slater type functions. This algorithm is based on the SD-bar approach due to Safouhi. Recurrence formulae are used for a better control of the degree of accuracy and for a better stability of the algorithm. The numerical result section shows the efficiency of our algorithm, compared with the alternatives using the one-center two-range expansion method, which led to very complicated analytic expressions, the epsilon algorithm and the nonlinear D-bar transformation.

  4. Conservation properties of numerical integration methods for systems of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Rosenbaum, J. S.

    1976-01-01

    If a system of ordinary differential equations represents a property conserving system that can be expressed linearly (e.g., conservation of mass), it is then desirable that the numerical integration method used conserve the same quantity. It is shown that both linear multistep methods and Runge-Kutta methods are 'conservative' and that Newton-type methods used to solve the implicit equations preserve the inherent conservation of the numerical method. It is further shown that a method used by several authors is not conservative.
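
    The conservation property is easy to verify numerically. A sketch with classical RK4 on a toy reaction chain A -> B -> C whose total mass is a linear invariant (the rate constants are illustrative):

```python
import numpy as np

# Runge-Kutta methods conserve linear invariants exactly (up to round-off).
def f(y):
    k1, k2 = 1.0, 0.5
    return np.array([-k1 * y[0],
                     k1 * y[0] - k2 * y[1],
                     k2 * y[1]])           # rates sum to zero: mass invariant

def rk4_step(y, h):
    k1 = f(y)
    k2 = f(y + 0.5 * h * k1)
    k3 = f(y + 0.5 * h * k2)
    k4 = f(y + h * k3)
    return y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

y = np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    y = rk4_step(y, 0.01)
print(abs(y.sum() - 1.0))  # linear invariant held to round-off
```

Because each stage is a linear combination of derivative evaluations whose components sum to zero, the invariant survives every step exactly, which is the mechanism behind the paper's result.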

  5. Melt-rock reaction in the asthenospheric mantle: Perspectives from high-order accurate numerical simulations in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Tirupathi, S.; Schiemenz, A. R.; Liang, Y.; Parmentier, E.; Hesthaven, J.

    2013-12-01

    The style and mode of melt migration in the mantle are important to the interpretation of basalts erupted on the surface. Both grain-scale diffuse porous flow and channelized melt migration have been proposed. To better understand the mechanisms and consequences of melt migration in a heterogeneous mantle, we have undertaken a numerical study of reactive dissolution in an upwelling and viscously deformable mantle where the solubility of pyroxene increases upwards. Our setup is similar to that described in [1], except we use a larger domain size in 2D and 3D and a new numerical method. To enable efficient simulations in 3D through parallel computing, we developed a high-order accurate numerical method for the magma dynamics problem using discontinuous Galerkin methods and constructed the problem using the numerical library deal.II [2]. Linear stability analyses of the reactive dissolution problem reveal three dynamically distinct regimes [3], and the simulations reported in this study were run in the stable regime and the unstable wave regime, where small perturbations in porosity grow periodically. The wave regime is more relevant to melt migration beneath the mid-ocean ridges but computationally more challenging. Extending the 2D simulations in the stable regime in [1] to 3D using various combinations of sustained perturbations in porosity at the base of the upwelling column (which may result from a veined mantle), we show that the geometry and distribution of dunite channels and high-porosity melt channels are highly correlated with the inflow perturbation through superposition. Strong nonlinear interactions among compaction, dissolution, and upwelling give rise to porosity waves and high-porosity melt channels in the wave regime. These compaction-dissolution waves have well organized but time-dependent structures in the lower part of the simulation domain. High-porosity melt channels nucleate along nodal lines of the porosity waves, growing downwards. The wavelength scales

  6. A comparison of the efficiency of numerical methods for integrating chemical kinetic rate equations

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    1984-01-01

    A comparison of the efficiency of several recently developed algorithms for the numerical integration of stiff ordinary differential equations is presented. The methods examined include two general-purpose codes, EPISODE and LSODE, and three codes (CHEMEQ, CREK1D, and GCKP84) developed specifically to integrate chemical kinetic rate equations. The codes are applied to two test problems drawn from combustion kinetics. The comparisons show that LSODE is the fastest code currently available for the integration of combustion kinetic rate equations. An important finding is that an iterative solution of the algebraic energy conservation equation to compute the temperature can be more efficient than evaluating the temperature by integrating its time-derivative.
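
    The stiffness effect behind such comparisons can be reproduced with modern solvers. The sketch below runs Robertson's classic stiff kinetics problem (not one of the paper's two test problems) with an implicit BDF method, the family LSODE uses for stiff systems, and with an explicit Runge-Kutta method over a short window:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Robertson's stiff chemical kinetics test problem.
def robertson(t, y):
    return [-0.04 * y[0] + 1e4 * y[1] * y[2],
            0.04 * y[0] - 1e4 * y[1] * y[2] - 3e7 * y[1] ** 2,
            3e7 * y[1] ** 2]

y0 = [1.0, 0.0, 0.0]
bdf = solve_ivp(robertson, (0.0, 10.0), y0, method="BDF",
                rtol=1e-6, atol=1e-10)
rk = solve_ivp(robertson, (0.0, 10.0), y0, method="RK45",
               rtol=1e-6, atol=1e-10)
# stiffness forces the explicit method to take far more steps
print(bdf.t.size, rk.t.size)
```

The explicit method's step size is capped by stability rather than accuracy, which is exactly why the implicit codes dominate in combustion kinetics.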

  7. Comparison of symbolic and numerical integration methods for an assumed-stress hybrid shell element

    NASA Technical Reports Server (NTRS)

    Rengarajan, Govind; Knight, Norman F., Jr.; Aminpour, Mohammad A.

    1993-01-01

    Hybrid shell elements have long been regarded with reserve by the commercial finite element developers despite the high degree of reliability and accuracy associated with such formulations. The fundamental reason is the inherent higher computational cost of the hybrid approach as compared to the displacement-based formulations. However, a noteworthy factor in favor of hybrid elements is that numerical integration to generate element matrices can be entirely avoided by the use of symbolic integration. In this paper, the use of the symbolic computational approach is presented for an assumed-stress hybrid shell element with drilling degrees of freedom, and the significant time savings achieved are demonstrated through an example.

  8. Numerical evaluation of two-center integrals over Slater type orbitals

    NASA Astrophysics Data System (ADS)

    Kurt, S. A.; Yükçü, N.

    2016-03-01

    Slater Type Orbitals (STOs), one of the types of exponential type orbitals (ETOs), are usually used as basis functions in multicenter molecular integrals to better understand the physical and chemical properties of matter. In this work, we develop algorithms for two-center overlap and two-center two-electron hybrid and Coulomb integrals, which are calculated with the help of the translation method for STOs and some auxiliary functions by V. Magnasco's group. We use the Mathematica programming language to produce algorithms for these calculations. Numerical results for some quantum numbers are presented in the tables. Finally, we compare our numerical results with other known literature results, and other details of the evaluation method are discussed.
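
    For the simplest special case, the overlap of two 1s STOs with equal exponent zeta at separation R, reduction to prolate spheroidal coordinates collapses the 3D integral to one dimension and can be checked against the textbook closed form S = e^(-p)(1 + p + p^2/3), p = zeta*R. This is a sketch of that special case, not the authors' general algorithm:

```python
import numpy as np
from scipy.integrate import quad

# In prolate spheroidal coordinates (lambda = (rA+rB)/R, mu = (rA-rB)/R):
#   S = (zeta^3 R^3 / 4) * int_1^inf exp(-p*l) * (2 l^2 - 2/3) dl,  p = zeta R
zeta, R = 1.2, 1.5
p = zeta * R

num, _ = quad(lambda l: np.exp(-p * l) * (2 * l ** 2 - 2.0 / 3.0), 1.0, np.inf)
S_num = zeta ** 3 * R ** 3 / 4.0 * num
S_exact = np.exp(-p) * (1.0 + p + p ** 2 / 3.0)
print(abs(S_num - S_exact))  # quadrature matches the closed form
```

The general two-center integrals of the paper (unequal exponents, higher quantum numbers) no longer reduce this cleanly, which is what motivates translation methods and auxiliary functions.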

  9. Fourth-Order Method for Numerical Integration of Age- and Size-Structured Population Models

    SciTech Connect

    Iannelli, M; Kostova, T; Milner, F A

    2008-01-08

    In many applications of age- and size-structured population models, there is an interest in obtaining good approximations of total population numbers rather than of their densities. Therefore, it is reasonable in such cases to solve numerically not the PDE model equations themselves, but rather their integral equivalents. For this purpose quadrature formulae are used in place of the integrals. Because quadratures can be designed with any order of accuracy, one can obtain numerical approximations of the solutions with very fast convergence. In this article, we present a general framework and a specific example of a fourth-order method based on composite Newton-Cotes quadratures for a size-structured population model.
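
    As a minimal illustration of the quadrature side of such a scheme, composite Simpson's rule (a closed Newton-Cotes formula) applied to a density yields the total population with fourth-order accuracy. The exponential age profile here is illustrative, not the paper's model:

```python
import numpy as np

# Composite Simpson's rule on n (even) subintervals of [a, b].
def composite_simpson(f, a, b, n):
    x = np.linspace(a, b, n + 1)
    w = np.ones(n + 1)
    w[1:-1:2] = 4.0   # odd interior nodes
    w[2:-1:2] = 2.0   # even interior nodes
    return (b - a) / (3.0 * n) * (w @ f(x))

density = lambda a: np.exp(-0.05 * a)          # individuals per unit age
total = composite_simpson(density, 0.0, 100.0, 200)
exact = (1.0 - np.exp(-5.0)) / 0.05            # analytic integral
print(abs(total - exact))
```

Because the quadrature error is O(h^4), refining the age grid converges much faster than solving the PDE for the density and summing afterwards, which is the paper's motivation for working with the integral equivalents.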

  10. Two step hybrid methods of 7th and 8th order for the numerical integration of second order IVPs

    NASA Astrophysics Data System (ADS)

    Kalogiratou, Z.; Monovasilis, Th.; Simos, T. E.

    2016-06-01

    In this work we consider the numerical integration of second order ODEs where the first derivative is missing. We construct two step hybrid methods with six and seven stages and seventh and eighth algebraic order. We apply the new methods on the numerical integration of several test problems.
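
    To see the structure such methods build on, the simplest two-step scheme for y'' = f(x, y) is the classical Stormer method, y_{n+1} = 2 y_n - y_{n-1} + h^2 f_n; the paper's hybrid methods add internal stages to lift this to seventh and eighth order. A sketch on the standard test problem y'' = -y:

```python
import numpy as np

# Stormer's two-step method for y'' = f(x, y) (no first derivative).
f = lambda x, y: -y
h, N = 0.001, 2000
ys = [1.0, np.cos(h)]               # exact start-up values y(0), y(h)
for n in range(1, N):
    ys.append(2 * ys[n] - ys[n - 1] + h * h * f(n * h, ys[n]))
err = abs(ys[-1] - np.cos(N * h))   # compare with exact solution cos(t)
print(err)
```

Note that a two-step method needs a start-up value in addition to the initial condition; here it is supplied exactly, while in practice a one-step method of matching order is used.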

  11. Physical and numerical sources of computational inefficiency in integration of chemical kinetic rate equations: Etiology, treatment and prognosis

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.; Radhakrishnan, K.

    1986-01-01

    The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODE's) and its detection, choice of appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponential-fitted integration algorithms are inherently more accurate for kinetics modeling than classical, polynomial-interpolant methods for the same computational work. But current codes using the exponential-fitted method lack the sophisticated stepsize-control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.

  12. Modified Chebyshev Picard Iteration for Efficient Numerical Integration of Ordinary Differential Equations

    NASA Astrophysics Data System (ADS)

    Macomber, B.; Woollands, R. M.; Probe, A.; Younes, A.; Bai, X.; Junkins, J.

    2013-09-01

    Modified Chebyshev Picard Iteration (MCPI) is an iterative numerical method for approximating solutions of linear or non-linear Ordinary Differential Equations (ODEs) to obtain time histories of system state trajectories. Unlike other step-by-step differential equation solvers, the Runge-Kutta family of numerical integrators for example, MCPI approximates long arcs of the state trajectory with an iterative path approximation approach, and is ideally suited to parallel computation. Orthogonal Chebyshev Polynomials are used as basis functions during each path iteration; the integrations of the Picard iteration are then done analytically. Due to the orthogonality of the Chebyshev basis functions, the least square approximations are computed without matrix inversion; the coefficients are computed robustly from discrete inner products. As a consequence of discrete sampling and weighting adopted for the inner product definition, Runge phenomena errors are minimized near the ends of the approximation intervals. The MCPI algorithm utilizes a vector-matrix framework for computational efficiency. Additionally, all Chebyshev coefficients and integrand function evaluations are independent, meaning they can be simultaneously computed in parallel for further decreased computational cost. Over an order of magnitude speedup from traditional methods is achieved in serial processing, and an additional order of magnitude is achievable in parallel architectures. This paper presents a new MCPI library, a modular toolset designed to allow MCPI to be easily applied to a wide variety of ODE systems. Library users will not have to concern themselves with the underlying mathematics behind the MCPI method. Inputs are the boundary conditions of the dynamical system, the integrand function governing system behavior, and the desired time interval of integration, and the output is a time history of the system states over the interval of interest. Examples from the field of astrodynamics are
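
    A scalar sketch of the Chebyshev-Picard idea using NumPy's Chebyshev utilities (this is not the MCPI library described above; node count and iteration count are illustrative):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Picard iteration x_{k+1}(t) = x0 + int_{t0}^t f(s, x_k(s)) ds, with the
# integrand fitted by a Chebyshev series and integrated analytically.
def chebyshev_picard(f, x0, t0, t1, n=32, iters=30):
    tau = np.cos(np.pi * np.arange(n + 1) / n)    # Chebyshev-Lobatto nodes
    t = 0.5 * (t1 - t0) * tau + 0.5 * (t1 + t0)   # mapped to [t0, t1]
    x = np.full_like(t, float(x0))
    for _ in range(iters):
        c = C.chebfit(tau, f(t, x), n)            # fit the integrand
        ci = C.chebint(c) * 0.5 * (t1 - t0)       # analytic antiderivative
        x = x0 + C.chebval(tau, ci) - C.chebval(-1.0, ci)
    return t, x

t, x = chebyshev_picard(lambda t, x: x, 1.0, 0.0, 1.0)
print(abs(x[0] - np.e))   # node tau = 1 corresponds to t = 1, where x = e
```

Each iteration updates the whole arc at once, and all integrand evaluations within an iteration are independent, which is the property MCPI exploits for parallelism.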

  13. Numerically stable approach for high-precision orbit integration using Encke's method and equinoctial elements

    NASA Astrophysics Data System (ADS)

    Ellmer, Matthias; Mayer-Gürr, Torsten

    2016-04-01

    Future gravity missions like GRACE-FO and beyond will deliver low-low satellite-to-satellite (ll-sst) ranging measurements of much increased precision. This necessitates a re-evaluation of the processes used in gravity field determination with an eye to numerical stability. When computing gravity fields from ll-sst data, precise positions of both satellites are needed in the setup of the observation equations. These positions thus have an immediate effect on the sought-after gravity field parameters. We use reduced-dynamic orbits which are computed through integration of all accelerations experienced by the satellite, as determined through a priori models and observed through the accelerometer. Our simulations showed that computing the orbit of the satellite through complete integration of all acting forces leads to numerical instabilities orders of magnitude larger than the expected ranging accuracy. We introduce a numerically stable approach employing a best-fit Keplerian reference orbit based on Encke's method. Our investigations revealed that using canonical formulations for the evaluation of the reference Keplerian orbit and accelerations leads to insufficient precision, necessitating an alternative formulation like the equinoctial elements.
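
    The numerical advantage of Encke's approach can be seen even in a drastically simplified 1D analog: integrate only the small deviation from an analytically known reference, so round-off in the integrator acts on the perturbation scale rather than the full state. Here a harmonic reference x_ref = cos(t) stands in for the Keplerian one, and the perturbation is illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Full problem: x'' = -x + eps*cos(3t).  Encke-style trick: with the
# reference x_ref = cos(t) satisfying x_ref'' = -x_ref exactly, the
# deviation d = x - x_ref obeys d'' = -d + eps*cos(3t), and only d,
# which stays at the perturbation scale, is integrated numerically.
eps = 1e-6

def deviation_rhs(t, y):
    d, dd = y
    return [dd, -d + eps * np.cos(3 * t)]

sol = solve_ivp(deviation_rhs, (0.0, 20.0), [0.0, 0.0],
                rtol=1e-10, atol=1e-14)
d_final = sol.y[0, -1]
x_final = np.cos(20.0) + d_final      # reconstruct the full state
print(abs(d_final))                   # deviation stays ~eps throughout
```

In the real Keplerian setting the cancellation is not exact, so the deviation equation retains a small difference of gravitational accelerations, and the deviation is periodically "rectified" back into the reference orbit.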

  14. Numerical integration of the extended variable generalized Langevin equation with a positive Prony representable memory kernel.

    PubMed

    Baczewski, Andrew D; Bond, Stephen D

    2013-07-28

    Generalized Langevin dynamics (GLD) arise in the modeling of a number of systems, ranging from structured fluids that exhibit a viscoelastic mechanical response, to biological systems, and other media that exhibit anomalous diffusive phenomena. Molecular dynamics (MD) simulations that include GLD in conjunction with external and/or pairwise forces require the development of numerical integrators that are efficient, stable, and have known convergence properties. In this article, we derive a family of extended variable integrators for the Generalized Langevin equation with a positive Prony series memory kernel. Using stability and error analysis, we identify a superlative choice of parameters and implement the corresponding numerical algorithm in the LAMMPS MD software package. Salient features of the algorithm include exact conservation of the first and second moments of the equilibrium velocity distribution in some important cases, stable behavior in the limit of conventional Langevin dynamics, and the use of a convolution-free formalism that obviates the need for explicit storage of the time history of particle velocities. Capability is demonstrated with respect to accuracy in numerous canonical examples, stability in certain limits, and an exemplary application in which the effect of a harmonic confining potential is mapped onto a memory kernel.
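The convolution-free formalism mentioned above can be sketched deterministically (i.e. at zero temperature, with no random force): a one-term exponential ("Prony") memory kernel is replaced by an auxiliary variable obeying a local ODE, so the velocity history never needs to be stored. This is an illustration of the extended-variable idea only, not the authors' LAMMPS integrator; the constants are hypothetical.

```python
import math

# One-term "Prony series" kernel: K(t) = C_K * exp(-t / TAU).
C_K, TAU = 2.0, 0.5

def rhs(t, s):
    # The memory force -int K(t-s) v(s) ds is replaced by -z, where the
    # auxiliary variable z obeys z' = -z/TAU + C_K * v  (z(0) = 0).
    x, v, z = s
    return [v, -z, -z / TAU + C_K * v]

def rk4(f, s, t, h):
    k1 = f(t, s)
    k2 = f(t + h/2, [si + h/2*ki for si, ki in zip(s, k1)])
    k3 = f(t + h/2, [si + h/2*ki for si, ki in zip(s, k2)])
    k4 = f(t + h,   [si + h*ki  for si, ki in zip(s, k3)])
    return [si + h/6*(a + 2*b + 2*c + d)
            for si, a, b, c, d in zip(s, k1, k2, k3, k4)]

def simulate(T=2.0, n=4000, x0=0.0, v0=1.0):
    h, s = T / n, [x0, v0, 0.0]
    vs = [v0]                      # history kept ONLY to verify the shortcut
    for i in range(n):
        s = rk4(rhs, s, i * h, h)
        vs.append(s[1])
    # direct trapezoidal convolution z(T) = int_0^T C_K e^{-(T-s)/TAU} v(s) ds
    w = [C_K * math.exp(-(T - j * h) / TAU) * vs[j] for j in range(n + 1)]
    conv = h * (sum(w) - 0.5 * (w[0] + w[-1]))
    return s[2], conv
```

The auxiliary variable reproduces the explicit convolution to discretization accuracy, at O(1) memory per particle instead of O(number of past steps).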

  15. Numerical integration of the extended variable generalized Langevin equation with a positive Prony representable memory kernel

    NASA Astrophysics Data System (ADS)

    Baczewski, Andrew D.; Bond, Stephen D.

    2013-07-01

    Generalized Langevin dynamics (GLD) arise in the modeling of a number of systems, ranging from structured fluids that exhibit a viscoelastic mechanical response, to biological systems, and other media that exhibit anomalous diffusive phenomena. Molecular dynamics (MD) simulations that include GLD in conjunction with external and/or pairwise forces require the development of numerical integrators that are efficient, stable, and have known convergence properties. In this article, we derive a family of extended variable integrators for the Generalized Langevin equation with a positive Prony series memory kernel. Using stability and error analysis, we identify a superlative choice of parameters and implement the corresponding numerical algorithm in the LAMMPS MD software package. Salient features of the algorithm include exact conservation of the first and second moments of the equilibrium velocity distribution in some important cases, stable behavior in the limit of conventional Langevin dynamics, and the use of a convolution-free formalism that obviates the need for explicit storage of the time history of particle velocities. Capability is demonstrated with respect to accuracy in numerous canonical examples, stability in certain limits, and an exemplary application in which the effect of a harmonic confining potential is mapped onto a memory kernel.

  16. Numerical solution of random singular integral equation appearing in crack problems

    NASA Technical Reports Server (NTRS)

    Sambandham, M.; Srivatsan, T. S.; Bharucha-Reid, A. T.

    1986-01-01

    The solution of several elasticity problems, and particularly crack problems, can be reduced to the solution of one-dimensional singular integral equations with a Cauchy-type kernel or to a system of uncoupled singular integral equations. Here a method for the numerical solution of random singular integral equations of Cauchy type is presented. The solution technique involves a Chebyshev series approximation, the coefficients of which are the solutions of a system of random linear equations. This method is applied to the problem of a periodic array of straight cracks inside an infinite isotropic elastic medium subjected to a nonuniform pressure distribution along the crack edges. The statistical properties of the random solution are evaluated numerically, and the random solution is used to determine the values of the stress-intensity factors at the crack tips. The error, expressed as the difference between the mean of the random solution and the deterministic solution, is established. Values of stress-intensity factors at the crack tip for different random input functions are presented.

  17. Comparing numerical integration schemes for time-continuous car-following models

    NASA Astrophysics Data System (ADS)

    Treiber, Martin; Kanagaraj, Venkatesan

    2015-02-01

    When simulating trajectories by integrating time-continuous car-following models, standard integration schemes such as the fourth-order Runge-Kutta method (RK4) are rarely used while the simple Euler method is popular among researchers. We compare four explicit methods both analytically and numerically: Euler's method, ballistic update, Heun's method (trapezoidal rule), and the standard RK4. As performance metrics, we plot the global discretization error as a function of the numerical complexity. We tested the methods on several time-continuous car-following models in several multi-vehicle simulation scenarios with and without discontinuities such as stops or a discontinuous behavior of an external leader. We find that the theoretical advantage of RK4 (consistency order 4) only plays a role if both the acceleration function of the model and the trajectory of the leader are sufficiently often differentiable. Otherwise, we obtain lower (and often fractional) consistency orders. Although, to our knowledge, Heun's method has never been used for integrating car-following models, it turns out to be the best scheme for many practical situations. The ballistic update always prevails over Euler's method although both are of first order.
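The four schemes compared above are easy to reproduce on a smooth test ODE, where their consistency orders are visible directly in the global error. The sketch below (not the car-following benchmark itself) implements Euler, Heun, and RK4 for y' = -y and measures the error at the endpoint.

```python
import math

def integrate(scheme, f, y0, T, n):
    """Advance y' = f(t, y) from y(0) = y0 to t = T in n steps."""
    h, y = T / n, y0
    for i in range(n):
        t = i * h
        if scheme == "euler":
            y = y + h * f(t, y)
        elif scheme == "heun":
            # trapezoidal predictor-corrector (second order)
            yp = y + h * f(t, y)
            y = y + h / 2 * (f(t, y) + f(t + h, yp))
        else:
            # classical fourth-order Runge-Kutta
            k1 = f(t, y)
            k2 = f(t + h/2, y + h/2 * k1)
            k3 = f(t + h/2, y + h/2 * k2)
            k4 = f(t + h, y + h * k3)
            y = y + h / 6 * (k1 + 2*k2 + 2*k3 + k4)
    return y

f = lambda t, y: -y
# global error at t = 5 against the exact solution e^{-t}, same step count for all
errs = {s: abs(integrate(s, f, 1.0, 5.0, 200) - math.exp(-5.0))
        for s in ("euler", "heun", "rk4")}
```

On this smooth problem the error ordering RK4 < Heun < Euler holds as the theory predicts; the abstract's point is that discontinuities in the leader's trajectory erode RK4's advantage, which this smooth example does not exhibit.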

  18. Sensitivity of inelastic response to numerical integration of strain energy. [for cantilever beam

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.

    1976-01-01

    The exact solution to the quasi-static, inelastic response of a cantilever beam of rectangular cross section subjected to a bending moment at the tip is obtained. The material of the beam is assumed to be linearly elastic-linearly strain-hardening. This solution is then compared with three different numerical solutions of the same problem obtained by minimizing the total potential energy using Gaussian quadratures of two different orders and a Newton-Cotes scheme for integrating the strain energy of deformation. Significant differences between the exact dissipative strain energy and its numerical counterpart are emphasized. The consequence of this on the nonlinear transient responses of a beam with solid cross section and that of a thin-walled beam on elastic supports under impulsive loads are examined.
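The two quadrature families compared in this study can be contrasted on a simple integrand. An n-point Gauss-Legendre rule is exact for polynomials up to degree 2n-1, whereas a closed Newton-Cotes rule of comparable cost (Simpson) is exact only to degree 3; the sketch below is a generic illustration, not the beam strain-energy computation itself.

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    # n-point Gauss-Legendre rule on [a, b]: exact for degree <= 2n-1
    x, w = np.polynomial.legendre.leggauss(n)     # nodes/weights on [-1, 1]
    xm = 0.5 * (b - a) * x + 0.5 * (a + b)        # affine map to [a, b]
    return 0.5 * (b - a) * np.dot(w, f(xm))

def simpson(f, a, b):
    # 3-point closed Newton-Cotes (Simpson) rule: exact only to degree 3
    return (b - a) / 6.0 * (f(a) + 4.0 * f((a + b) / 2.0) + f(b))
```

A 3-point Gauss rule integrates x^5 on [0, 1] exactly (1/6), while Simpson's rule with the same number of function evaluations does not; this gap is why the choice and order of the strain-energy quadrature matters for the inelastic response.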

  19. CALL FOR PAPERS: Special Issue on `Geometric Numerical Integration of Differential Equations'

    NASA Astrophysics Data System (ADS)

    Quispel, G. R. W.; McLachlan, R. I.

    2005-02-01

    This is a call for contributions to a special issue of Journal of Physics A: Mathematical and General entitled `Geometric Numerical Integration of Differential Equations'. This issue should be a repository for high quality original work. We are interested in having the topic interpreted broadly, that is, to include contributions dealing with symplectic or multisymplectic integration; volume-preserving integration; symmetry-preserving integration; integrators that preserve first integrals, Lyapunov functions, or dissipation; exponential integrators; integrators for highly oscillatory systems; Lie-group integrators, etc. Papers on geometric integration of both ODEs and PDEs will be considered, as well as application to molecular-scale integration, celestial mechanics, particle accelerators, fluid flows, population models, epidemiological models and/or any other areas of science. We believe that this issue is timely, and hope that it will stimulate further development of this new and exciting field. The Editorial Board has invited G R W Quispel and R I McLachlan to serve as Guest Editors for the special issue. Their criteria for acceptance of contributions are the following: • The subject of the paper should relate to geometric numerical integration in the sense described above. • Contributions will be refereed and processed according to the usual procedure of the journal. • Papers should be original; reviews of a work published elsewhere will not be accepted. The guidelines for the preparation of contributions are as follows: • The DEADLINE for submission of contributions is 1 September 2005. This deadline will allow the special issue to appear in late 2005 or early 2006. • There is a strict page limit of 16 printed pages (approximately 9600 words) per contribution. For papers exceeding this limit, the Guest Editors reserve the right to request a reduction in length. Further advice on publishing your work in Journal of Physics A: Mathematical and General

  20. Numerical solution of two-dimensional integral-algebraic systems using Legendre functions

    NASA Astrophysics Data System (ADS)

    Nemati, S.; Lima, P.; Ordokhani, Y.

    2012-09-01

    We consider a method for computing approximate solutions to systems of two-dimensional Volterra integral equations. The approximate solution is sought in the form of a linear combination of two-variable shifted Legendre functions. The operational matrices technique is used to reduce the problem to a system of linear algebraic equations. Some numerical tests have been carried out and the results show that this method has a good performance, even in the case when the system matrix is singular in the whole considered domain.

  1. Direct numerical solution of the transonic perturbation integral equation for lifting and nonlifting airfoils

    NASA Technical Reports Server (NTRS)

    Nixon, D.

    1978-01-01

    The linear transonic perturbation integral equation previously derived for nonlifting airfoils is formulated for lifting cases. In order to treat shock wave motions, a strained coordinate system is used in which the shock location is invariant. The tangency boundary conditions are either formulated using the thin airfoil approximation or by using the analytic continuation concept. A direct numerical solution to this equation is derived in contrast to the iterative scheme initially used, and results of both lifting and nonlifting examples indicate that the method is satisfactory.

  2. Numerical Modeling of Pressurization of Cryogenic Propellant Tank for Integrated Vehicle Fluid System

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok K.; LeClair, Andre C.; Hedayat, Ali

    2016-01-01

    This paper presents a numerical model of pressurization of a cryogenic propellant tank for the Integrated Vehicle Fluid (IVF) system using the Generalized Fluid System Simulation Program (GFSSP). The IVF propulsion system, being developed by United Launch Alliance, uses boiloff propellants to drive thrusters for the reaction control system as well as to run internal combustion engines to develop power and drive compressors to pressurize propellant tanks. NASA Marshall Space Flight Center (MSFC) has been running tests to verify the functioning of the IVF system using a flight tank. GFSSP, a finite volume based flow network analysis software developed at MSFC, has been used to develop an integrated model of the tank and the pressurization system. This paper presents an iterative algorithm for converging the interface boundary conditions between different component models of a large system model. The model results have been compared with test data.
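The interface-convergence idea described above, iterating the boundary conditions exchanged between component models until they agree, can be sketched with a toy two-component flow network. This is a generic under-relaxed fixed-point coupling, not GFSSP's actual algorithm; all names and numbers are hypothetical.

```python
# Toy coupling of two component models through an interface pressure p_i:
# a supply line (resistance R1) feeding a tank line (resistance R2).
P_SUPPLY, R1, R2 = 100.0, 2.0, 3.0

def supply_model(p_i):
    # component 1: mass flow for a given interface pressure
    return (P_SUPPLY - p_i) / R1

def tank_model(mdot):
    # component 2: interface pressure required to pass a given mass flow
    return R2 * mdot

def couple(relax=0.5, tol=1e-10, max_iter=200):
    """Iterate the interface boundary condition until both models agree."""
    p_i = 0.0
    for _ in range(max_iter):
        p_new = tank_model(supply_model(p_i))     # one pass through both models
        if abs(p_new - p_i) < tol:
            break
        p_i = (1 - relax) * p_i + relax * p_new   # under-relaxed update
    return p_i
```

Here the unrelaxed iteration has multiplier -R2/R1 = -1.5 and diverges; under-relaxation with factor 0.5 shrinks the multiplier to -0.25 and the interface pressure converges to the exact coupled solution P_SUPPLY*R2/(R1+R2) = 60. This is the generic reason coupled system models use relaxed interface updates.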

  3. The strategy for numerical solving of PIES without explicit calculation of singular integrals in 2D potential problems

    NASA Astrophysics Data System (ADS)

    Szerszeń, Krzysztof; Zieniuk, Eugeniusz

    2016-06-01

    The paper presents a strategy for numerical solving of the parametric integral equation system (PIES) for 2D potential problems without explicit calculation of singular integrals. The values of these integrals will be expressed indirectly in terms of easy-to-compute non-singular integrals. The effectiveness of the proposed strategy is investigated with the example of a potential problem modeled by the Laplace equation. The strategy simplifies the structure of the program while maintaining good accuracy of the obtained solutions.

  4. Numerical evaluation of a fixed-amplitude variable-phase integral.

    SciTech Connect

    Lyness, J. N.; Mathematics and Computer Science

    2008-01-01

    We treat the evaluation of a fixed-amplitude variable-phase integral of the form ∫_a^b exp[ikG(x)] dx, where G'(x) ≥ 0 and has moderate differentiability in the integration interval. In particular, we treat in detail the case in which G'(a) = G'(b) = 0, but G''(a)G''(b) < 0. For this, we re-derive a standard asymptotic expansion in inverse half-integer powers of k. This derivation is direct, making no explicit appeal to the theories of stationary phase or steepest descent. It provides straightforward expressions for the coefficients in the expansion in terms of derivatives of G at the end-points. Thus it can be used to evaluate the integrals numerically in cases where k is large. We indicate the generalizations to the theory required to cover cases where the oscillator function G has higher order zeros at either or both end-points, but this is not treated in detail. In the simpler case in which G'(a)G'(b) > 0, the same approach would recover a special case of a recent result due to Iserles and Nørsett.

  5. Comparison of numerical techniques for integration of stiff ordinary differential equations arising in combustion chemistry

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    1984-01-01

    The efficiency and accuracy of several algorithms recently developed for the efficient numerical integration of stiff ordinary differential equations are compared. The methods examined include two general-purpose codes, EPISODE and LSODE, and three codes (CHEMEQ, CREK1D, and GCKP84) developed specifically to integrate chemical kinetic rate equations. The codes are applied to two test problems drawn from combustion kinetics. The comparisons show that LSODE is the fastest code currently available for the integration of combustion kinetic rate equations. An important finding is that an iterative solution of the algebraic energy conservation equation to compute the temperature does not result in significant errors. In addition, this method is more efficient than evaluating the temperature by integrating its time derivative. Significant reductions in computational work are realized by updating the rate constants (k = A T^N exp(-E/RT)) only when the temperature change exceeds an amount delta T that is problem dependent. An approximate expression for the automatic evaluation of delta T is derived and is shown to result in increased efficiency.
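Why stiff kinetics need implicit codes like LSODE can be seen on a one-equation model. The sketch below (a generic illustration, not any of the codes compared in the paper) contrasts forward Euler, which is unstable when the step exceeds the stability limit set by the fast time scale, with backward Euler, which remains stable at the same step size.

```python
import math

# Stiff model problem: y' = -K*(y - cos t) - sin t, exact solution y(t) = cos t,
# with a fast relaxation rate K standing in for a fast chemical time scale.
K = 1000.0

def rhs(t, y):
    return -K * (y - math.cos(t)) - math.sin(t)

def forward_euler(y0, T, n):
    h, y = T / n, y0
    for i in range(n):
        y += h * rhs(i * h, y)        # explicit: stable only for h*K < 2
    return y

def backward_euler(y0, T, n):
    # the problem is linear in y, so the implicit step solves in closed form:
    # y1 = y0 + h*(-K*(y1 - cos t1) - sin t1)
    h, y = T / n, y0
    for i in range(n):
        t1 = (i + 1) * h
        y = (y + h * (K * math.cos(t1) - math.sin(t1))) / (1 + h * K)
    return y
```

With h = 0.01 (so h*K = 10), forward Euler's errors are amplified by a factor of 9 per step and the solution blows up, while backward Euler tracks cos t closely; this stability gap is what the stiff-ODE codes compared in the abstract are built to handle efficiently.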

  6. Integrated computation of finite-time Lyapunov exponent fields during direct numerical simulation of unsteady flows.

    PubMed

    Finn, Justin; Apte, Sourabh V

    2013-03-01

    The computation of Lagrangian coherent structures typically involves post-processing of experimentally or numerically obtained fluid velocity fields to obtain the largest finite-time Lyapunov exponent (FTLE) field. However, this procedure can be tedious for large-scale complex flows of general interest. In this work, an alternative approach involving computation of the FTLE on-the-fly during direct numerical simulation of the full three dimensional Navier-Stokes equations is developed. The implementation relies on Lagrangian particle tracking to compose forward time flow maps, and an Eulerian treatment of the backward time flow map [S. Leung, J. Comput. Phys. 230, 3500-3524 (2011)] coupled with a semi-Lagrangian advection scheme. The flow maps are accurately constructed from a sequence of smaller sub-steps stored on disk [S. Brunton and C. Rowley, Chaos 20, 017503 (2010)], resulting in low CPU and memory requirements to compute evolving FTLE fields. Several examples are presented to demonstrate the capability and parallel scalability of the approach for a variety of two and three dimensional flows.
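The basic FTLE computation underlying both the post-processing and on-the-fly approaches can be sketched in a few lines: track particles to build the flow map, take its finite-difference gradient, and evaluate the largest eigenvalue of the Cauchy-Green tensor. The example uses a linear saddle field, for which the FTLE is exactly 1; it illustrates only the Lagrangian definition, not the paper's Eulerian backward-time machinery.

```python
import math

def velocity(t, p):
    # linear saddle flow: trajectories x(t) = x0*e^t, y(t) = y0*e^(-t)
    return [p[0], -p[1]]

def advect(p, t0, t1, n=200):
    # RK4 particle tracking to evaluate the flow map at time t1
    h = (t1 - t0) / n
    for i in range(n):
        t = t0 + i * h
        k1 = velocity(t, p)
        k2 = velocity(t + h/2, [p[0] + h/2*k1[0], p[1] + h/2*k1[1]])
        k3 = velocity(t + h/2, [p[0] + h/2*k2[0], p[1] + h/2*k2[1]])
        k4 = velocity(t + h,   [p[0] + h*k3[0],  p[1] + h*k3[1]])
        p = [p[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
             p[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])]
    return p

def ftle(x, y, T, d=1e-4):
    # central-difference gradient of the flow map around (x, y)
    px = [advect([x + d, y], 0, T), advect([x - d, y], 0, T)]
    py = [advect([x, y + d], 0, T), advect([x, y - d], 0, T)]
    a = (px[0][0] - px[1][0]) / (2*d); b = (py[0][0] - py[1][0]) / (2*d)
    c = (px[0][1] - px[1][1]) / (2*d); e = (py[0][1] - py[1][1]) / (2*d)
    # largest eigenvalue of the 2x2 symmetric Cauchy-Green tensor F^T F
    g11, g12, g22 = a*a + c*c, a*b + c*e, b*b + e*e
    lam = 0.5*(g11 + g22) + math.sqrt(0.25*(g11 - g22)**2 + g12**2)
    return math.log(lam) / (2.0 * abs(T))
```

For the saddle, the flow-map gradient is diag(e^T, e^(-T)), so the largest Cauchy-Green eigenvalue is e^(2T) and the FTLE evaluates to exactly 1, independent of T; performing this per grid point over a 3D simulation is the expensive step the on-the-fly approach amortizes.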

  7. Integrated computation of finite-time Lyapunov exponent fields during direct numerical simulation of unsteady flows

    NASA Astrophysics Data System (ADS)

    Finn, Justin; Apte, Sourabh V.

    2013-03-01

    The computation of Lagrangian coherent structures typically involves post-processing of experimentally or numerically obtained fluid velocity fields to obtain the largest finite-time Lyapunov exponent (FTLE) field. However, this procedure can be tedious for large-scale complex flows of general interest. In this work, an alternative approach involving computation of the FTLE on-the-fly during direct numerical simulation of the full three dimensional Navier-Stokes equations is developed. The implementation relies on Lagrangian particle tracking to compose forward time flow maps, and an Eulerian treatment of the backward time flow map [S. Leung, J. Comput. Phys. 230, 3500-3524 (2011)] coupled with a semi-Lagrangian advection scheme. The flow maps are accurately constructed from a sequence of smaller sub-steps stored on disk [S. Brunton and C. Rowley, Chaos 20, 017503 (2010)], resulting in low CPU and memory requirements to compute evolving FTLE fields. Several examples are presented to demonstrate the capability and parallel scalability of the approach for a variety of two and three dimensional flows.

  8. Families of third and fourth algebraic order trigonometrically fitted symplectic methods for the numerical integration of Hamiltonian systems

    NASA Astrophysics Data System (ADS)

    Monovasilis, Th.; Kalogiratou, Z.; Simos, T. E.

    2007-11-01

    The numerical integration of Hamiltonian systems by symplectic and trigonometrically fitted (TF) symplectic method is considered in this work. We construct new trigonometrically fitted symplectic methods of third and fourth order. We apply our new methods as well as other existing methods to the numerical integration of the harmonic oscillator, the 2D harmonic oscillator with an integer frequency ratio and an orbit problem studied by Stiefel and Bettis.
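What symplecticity buys for Hamiltonian systems can be seen on the harmonic oscillator, one of the test problems above. The sketch compares explicit Euler, whose energy grows without bound, with first-order symplectic Euler, whose energy stays bounded near its initial value; it illustrates the symplectic property only, not the paper's trigonometrically fitted higher-order methods.

```python
# Harmonic oscillator, H = (p^2 + q^2)/2.
def explicit_euler(q, p, h, n):
    for _ in range(n):
        q, p = q + h * p, p - h * q    # both updates use the old state
    return q, p

def symplectic_euler(q, p, h, n):
    for _ in range(n):
        p = p - h * q                  # kick with the current q ...
        q = q + h * p                  # ... then drift with the updated p
    return q, p

def energy(q, p):
    return 0.5 * (q * q + p * p)
```

Explicit Euler multiplies the energy by (1 + h^2) every step, so over 10,000 steps at h = 0.01 the energy nearly triples; symplectic Euler exactly conserves the modified quantity q^2 + p^2 - h*q*p, so its energy oscillates within O(h) of the true value forever. The same mechanism underlies the long-term fidelity of the symplectic orbit integrations in the abstract.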

  9. Numerical simulation of Stokes flow around particles via a hybrid Finite Difference-Boundary Integral scheme

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Amitabh

    2013-11-01

    An efficient algorithm for simulating Stokes flow around particles is presented here, in which a second order Finite Difference method (FDM) is coupled to a Boundary Integral method (BIM). This method utilizes the strong points of FDM (i.e. localized stencil) and BIM (i.e. accurate representation of the particle surface). Specifically, in each iteration, the flow field away from the particles is solved on a Cartesian FDM grid, while the traction on the particle surface (given the velocity of the particle) is solved using BIM. The two schemes are coupled by matching the solution in an intermediate region between the particle and the surrounding fluid. We validate this method by solving for flow around an array of cylinders, and find good agreement with Hasimoto's (J. Fluid Mech., 1959) analytical results.

  10. Numerically stable formulas for a particle-based explicit exponential integrator

    NASA Astrophysics Data System (ADS)

    Nadukandi, Prashanth

    2015-05-01

    Numerically stable formulas are presented for the closed-form analytical solution of the X-IVAS scheme in 3D. This scheme is a state-of-the-art particle-based explicit exponential integrator developed for the particle finite element method. Algebraically, this scheme involves two steps: (1) the solution of tangent curves for piecewise linear vector fields defined on simplicial meshes and (2) the solution of line integrals of piecewise linear vector-valued functions along these tangent curves. Hence, the stable formulas presented here have general applicability, e.g. exact integration of trajectories in particle-based (Lagrangian-type) methods, flow visualization and computer graphics. The Newton form of the polynomial interpolation definition is used to express exponential functions of matrices which appear in the analytical solution of the X-IVAS scheme. The divided difference coefficients in these expressions are defined in a piecewise manner, i.e. in a prescribed neighbourhood of removable singularities their series approximations are computed. An optimal series approximation of divided differences is presented which plays a critical role in this methodology. At least ten significant decimal digits in the formula computations are guaranteed to be exact using double-precision floating-point arithmetic. The worst case scenarios occur in the neighbourhood of removable singularities found in fourth-order divided differences of the exponential function.
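The removable-singularity problem described above is easy to demonstrate with the first divided difference of the exponential, phi1(x) = (e^x - 1)/x: the naive formula suffers catastrophic cancellation near x = 0, while a truncated series in a prescribed neighbourhood of the singularity stays accurate. This is a minimal sketch of the piecewise series idea, not the X-IVAS formulas themselves.

```python
import math

def phi1_naive(x):
    # divided difference (exp(x) - exp(0)) / (x - 0); cancels badly near 0
    return (math.exp(x) - 1.0) / x

def phi1_stable(x):
    if abs(x) < 1e-2:
        # truncated Taylor series in a prescribed neighbourhood of the
        # removable singularity at x = 0 (next term is x^6/5040, negligible)
        return 1.0 + x/2 + x*x/6 + x**3/24 + x**4/120 + x**5/720
    return (math.exp(x) - 1.0) / x
```

At x = 1e-12 the naive formula is wrong in roughly the fifth significant digit, while the series branch agrees with the accurate `math.expm1`-based value to machine precision, which is exactly the kind of digit guarantee the abstract targets.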

  11. Numerical integration of periodic orbits in the main problem of artificial satellite theory

    NASA Astrophysics Data System (ADS)

    Broucke, R. A.

    1994-02-01

    We describe a collection of results obtained by numerical integration of orbits in the main problem of artificial satellite theory (the J2 problem). The periodic orbits have been classified according to their stability, and the Poincaré surfaces of section computed for different values of J2 and H (where H is the z-component of angular momentum). The problem was scaled down to a fixed value (-1/2) of the energy constant. It is found that the pseudo-circular periodic solutions play a fundamental role. They are the equivalent of the Poincaré first-kind solutions in the three-body problem. The integration of the variational equations shows that these pseudo-circular solutions are stable, except in a very narrow band near the critical inclination. This results in a sequence of bifurcations near the critical inclination, thereby refining some known results on the critical inclination, for instance by Izsak (1963), Jupp (1975, 1980) and Cushman (1983). We also verify that the double pitchfork bifurcation around the critical inclination exists for large values of J2, as large as |J2| = 0.2. Other secondary (higher-order) bifurcations are also described. The equations of motion were integrated in rotating meridian coordinates.

  12. Algebraic stabilization of explicit numerical integration for extremely stiff reaction networks

    NASA Astrophysics Data System (ADS)

    Guidry, Mike

    2012-06-01

    In contrast to the prevailing view in the literature, it is shown that even extremely stiff sets of ordinary differential equations may be solved efficiently by explicit methods if limiting algebraic solutions are used to stabilize the numerical integration. The stabilizing algebra differs essentially for systems well-removed from equilibrium and those near equilibrium. Explicit asymptotic and quasi-steady-state methods that are appropriate when the system is only weakly equilibrated are examined first. These methods are then extended to the case of close approach to equilibrium through a new implementation of partial equilibrium approximations. Using stringent tests with astrophysical thermonuclear networks, evidence is provided that these methods can deal with the stiffest networks, even in the approach to equilibrium, with accuracy and integration timestepping comparable to that of implicit methods. Because explicit methods can execute a timestep faster and scale more favorably with network size than implicit algorithms, our results suggest that algebraically-stabilized explicit methods might enable integration of larger reaction networks coupled to fluid dynamics than has been feasible previously for a variety of disciplines.
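The core stabilization idea, replacing the unstable explicit update with an algebraically rearranged one, can be sketched on a single stiff rate equation y' = F - k*y. The quasi-steady-state (QSS) update below illustrates only this linear stabilization; it is not the paper's full asymptotic/partial-equilibrium machinery, and the rates are hypothetical.

```python
def qss_integrate(F, k, y0, T, n):
    # quasi-steady-state update for y' = F - k*y: the destruction term is
    # handled algebraically, so the step is stable even when dt*k >> 1,
    # while the cost per step remains that of an explicit method.
    dt, y = T / n, y0
    for _ in range(n):
        y = (y + dt * F) / (1.0 + dt * k)
    return y

def euler_integrate(F, k, y0, T, n):
    # plain forward Euler for comparison: unstable once dt*k > 2
    dt, y = T / n, y0
    for _ in range(n):
        y = y + dt * (F - k * y)
    return y
```

With F = k = 1000 (equilibrium y = 1) and only 10 steps over a unit interval (dt*k = 100), the QSS update converges rapidly to the equilibrium while forward Euler diverges; scaled up to a full reaction network, this is the stabilization that lets explicit timesteps rival implicit ones.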

  13. Numerical analysis of composite STEEL-CONCRETE SECTIONS using integral equation of Volterra

    NASA Astrophysics Data System (ADS)

    Partov, Doncho; Kantchev, Vesselin

    2011-09-01

    The paper presents analysis of the stress and deflection changes due to creep in a statically determinate composite steel-concrete beam. The mathematical model involves the equations of equilibrium, compatibility and constitutive relationships, i.e. an elastic law for the steel part and an integral-type creep law of Boltzmann — Volterra for the concrete part. On the basis of the theory of the viscoelastic body of Arutyunian-Trost-Bažant for determining the redistribution of stresses in the beam section between the concrete plate and the steel beam with respect to time "t", two independent Volterra integral equations of the second kind have been derived. A numerical method based on linear approximation of the singular kernel function in the integral equation is presented. An example with the proposed model is investigated. The creep functions are suggested by the CEB MC90-99 and ACI 209R-92 models. The elastic modulus of concrete E c (t) is assumed to be constant in time "t". The obtained results from both models are compared.

  15. Regularization of Motion Equations with L-Transformation and Numerical Integration of the Regular Equations

    NASA Astrophysics Data System (ADS)

    Poleshchikov, Sergei M.

    2003-04-01

    The sets of L-matrices of the second, fourth and eighth orders are constructed axiomatically. The defining relations are taken from the regularization of motion equations for the Keplerian problem. In particular, the Levi-Civita matrix and KS-matrix are L-matrices of second and fourth order, respectively. A theorem on the ranks of L-transformations of different orders is proved. The notion of L-similarity transformation is introduced, certain sets of L-matrices are constructed, and their classification is given. An application of fourth order L-matrices for N-body problem regularization is given. A method of correction for regular coordinates in the Runge-Kutta-Fehlberg integration method for regular motion equations of a perturbed two-body problem is suggested. Comparison is given for the results of numerical integration in the problem of determining the orbit of a satellite, with and without the above correction method. The comparison is carried out with respect to the number of calls to the subroutine evaluating the perturbational accelerations vector. The results of integration using the correction compare favorably.

  16. Integration of a silicon-based microprobe into a gear measuring instrument for accurate measurement of micro gears

    NASA Astrophysics Data System (ADS)

    Ferreira, N.; Krah, T.; Jeong, D. C.; Metz, D.; Kniel, K.; Dietzel, A.; Büttgenbach, S.; Härtig, F.

    2014-06-01

    The integration of silicon micro probing systems into conventional gear measuring instruments (GMIs) allows fully automated measurements of external involute micro spur gears of normal modules smaller than 1 mm. This system, based on a silicon microprobe, has been developed and manufactured at the Institute for Microtechnology of the Technische Universität Braunschweig. The microprobe consists of a silicon sensor element and a stylus which is oriented perpendicularly to the sensor. The sensor is fabricated by means of silicon bulk micromachining. Its small dimensions of 6.5 mm × 6.5 mm allow compact mounting in a cartridge to facilitate the integration into a GMI. In this way, tactile measurements of 3D microstructures can be realized. To enable three-dimensional measurements with marginal forces, four Wheatstone bridges are built with diffused piezoresistors on the membrane of the sensor. On the reverse of the membrane, the stylus is glued perpendicularly to the sensor on a boss to transmit the probing forces to the sensor element during measurements. Sphere diameters smaller than 300 µm and shaft lengths of 5 mm as well as measurement forces from 10 µN enable the measurements of 3D microstructures. Such micro probing systems can be integrated into universal coordinate measuring machines and also into GMIs to extend their field of application. Practical measurements were carried out at the Physikalisch-Technische Bundesanstalt by qualifying the microprobes on a calibrated reference sphere to determine their sensitivity and their physical dimensions in volume. Following that, profile and helix measurements were carried out on a gear measurement standard with a module of 1 mm. The comparison of the measurements shows good agreement between the measurement values and the calibrated values. This result is a promising basis for the realization of smaller probe diameters for the tactile measurement of micro gears with smaller modules.

  17. Direct hot slumping and accurate integration process to manufacture prototypal x-ray optical units made of glass

    NASA Astrophysics Data System (ADS)

    Civitani, M.; Ghigo, M.; Basso, S.; Proserpio, L.; Spiga, D.; Salmaso, B.; Pareschi, G.; Tagliaferri, G.; Burwitz, V.; Hartner, G.; Menz, B.; Bavdaz, M.; Wille, E.

    2013-09-01

    X-ray telescopes with very large collecting area, like the proposed International X-ray Observatory (IXO, with around 3 m2 at 1 keV), need to be composed of a large number of high-quality mirror segments, aiming at achieving an angular resolution better than 5 arcsec HEW (Half-Energy-Width). A possible technology to manufacture the modular elements that will compose the entire optical module, named X-ray Optical Units (XOUs), consists of stacking in Wolter-I configuration several layers of thin foils of borosilicate glass, previously formed by hot slumping. The XOUs are subsequently assembled to form complete multi-shell optics with Wolter-I geometry. The achievable global angular resolution of the optic relies on the required surface shape accuracy of slumped foils, on the smoothness of the mirror surfaces and on the correct integration and co-alignment of the mirror segments. The Brera Astronomical Observatory (INAF-OAB) is leading a study, supported by ESA, concerning the implementation of the IXO telescopes based on thin slumped glass foils. In addition to the opto-mechanical design, the study foresees the development of a direct hot slumping thin glass foils production technology. Moreover, an innovative assembly concept making use of Wolter-I counter-form moulds and glass reinforcing ribs is under development. The ribs connect pairs of consecutive foils in an XOU stack, playing a structural and a functional role. In fact, as the ribs constrain the foil profile to the correct shape during the bonding, they damp the low-frequency profile errors still present on the foil after slumping. A dedicated semirobotic Integration MAchine (IMA) has been realized to this scope and used to build a few integrated prototypes made of several layers of slumped plates. In this paper we provide an overview of the project, we report the results achieved so far, including full illumination intra-focus X-ray tests of the last integrated prototype that are compliant with a HEW of

  18. Inelastic, nonlinear analysis of stiffened shells of revolution by numerical integration

    NASA Technical Reports Server (NTRS)

    Levine, H. S.; Svalbonas, V.

    1974-01-01

    This paper describes the latest addition to the STARS system of computer programs, STARS-2P, for the plastic, large deflection analysis of axisymmetrically loaded shells of revolution. The STARS system uses a numerical integration scheme to solve the governing differential equations. Several unique features for shell of revolution programs that are included in the STARS-2P program are described. These include orthotropic nonlinear kinematic hardening theory, a variety of shell wall cross sections and discrete ring stiffeners, cyclic and nonproportional mechanical and thermal loading capability, the coupled axisymmetric large deflection elasto-plastic torsion problem, an extensive restart option, arbitrary branching capability, and the provision for the inelastic treatment of smeared stiffeners, isogrid, and waffle wall constructions. To affirm the validity of the results, comparisons with available theoretical and experimental data are presented.

  19. An integrated data-directed numerical method for estimating the undiscovered mineral endowment in a region

    USGS Publications Warehouse

    McCammon, R.B.; Finch, W.I.; Kork, J.O.; Bridges, N.J.

    1994-01-01

    An integrated data-directed numerical method has been developed to estimate the undiscovered mineral endowment within a given area. The method has been used to estimate the undiscovered uranium endowment in the San Juan Basin, New Mexico, U.S.A. The favorability of uranium concentration was evaluated in each of 2,068 cells defined within the Basin. Favorability was based on the correlated similarity of the geologic characteristics of each cell to the geologic characteristics of five area-related deposit models. Estimates of the undiscovered endowment for each cell were categorized according to deposit type, depth, and cutoff grade. The method can be applied to any mineral or energy commodity provided that the data collected reflect discovered endowment. © 1994 Oxford University Press.

  20. An integrated numerical and physical modeling system for an enhanced in situ bioremediation process.

    PubMed

    Huang, Y F; Huang, G H; Wang, G Q; Lin, Q G; Chakma, A

    2006-12-01

    Groundwater contamination due to releases of petroleum products is a major environmental concern in many urban districts and industrial zones. Over the past years, a few studies were undertaken to address in situ bioremediation processes coupled with contaminant transport in two- or three-dimensional domains. However, they were concentrated on natural attenuation processes for petroleum contaminants or enhanced in situ bioremediation processes in laboratory columns. In this study, an integrated numerical and physical modeling system is developed for simulating an enhanced in situ biodegradation (EISB) process coupled with three-dimensional multiphase multicomponent flow and transport simulation in a multi-dimensional pilot-scale physical model. The designed pilot-scale physical model is effective in tackling natural attenuation and EISB processes for site remediation. The simulation results demonstrate that the developed system is effective in modeling the EISB process, and can thus be used for investigating the effects of various uncertainties.

  1. A Numerical Method for Integrating the Kinetic Equation of Coalescence and Breakup of Cloud Droplets.

    NASA Astrophysics Data System (ADS)

    Enukashvily, Isaac M.

    1980-11-01

    An extension of Bleck's method and of the method of moments is developed for the numerical integration of the kinetic equation of coalescence and breakup of cloud droplets. The number density function nk(x,t) in each separate cloud droplet packet between droplet mass grid points (xk,xk+1) is represented by an expansion in orthogonal polynomials with a given weighting function wk(x,k). The expansion coefficients describe the deviations of nk(x,t) from wk(x,k). In this way droplet number concentrations, liquid water contents and other moments in each droplet packet are conserved, and the problem of solving the kinetic equation is replaced by one of solving a set of coupled differential equations for the moments of the number density function nk(x,t). Equations for these moments in each droplet packet are derived. The method is tested against existing solutions of the coalescence equation. Numerical results are obtained when Bleck's uniform distribution hypothesis for nk(x,t) and Golovin's asymptotic solution of the coalescence equation are chosen for the weighting function wk(x,k). A comparison between numerical results computed by Bleck's method and by the method of this study is made. It is shown that for the correct computation of the coalescence and breakup interactions between cloud droplet packets it is very important that the approximation of nk(x,t) between grid points (xk,xk+1) satisfies the conservation conditions for the number concentration, liquid water content and other moments of the cloud droplet packets. If these conservation conditions are satisfied, even the quasi-linear approximation of nk(x,t), in comparison with Berry's six-point interpolation, will give reasonable results which are very close to the existing analytic solutions.
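The conservation idea at the heart of such methods can be illustrated with a minimal sketch (the linear basis and the function names are my own illustration, not taken from the paper): on each mass bin, fit a linear density whose number and mass moments reproduce the packet's moments exactly.

```python
import numpy as np

def linear_density(x_lo, x_hi, M0, M1):
    """Fit n(x) = a + b*x on a droplet-mass bin [x_lo, x_hi] so that the
    bin's number moment (M0) and mass moment (M1) are conserved exactly.
    Returns the coefficients (a, b)."""
    # Moment conditions:
    #   integral of (a + b*x)   over the bin = M0
    #   integral of x*(a + b*x) over the bin = M1
    A = np.array([
        [x_hi - x_lo,                 (x_hi**2 - x_lo**2) / 2.0],
        [(x_hi**2 - x_lo**2) / 2.0,   (x_hi**3 - x_lo**3) / 3.0],
    ])
    a, b = np.linalg.solve(A, np.array([M0, M1]))
    return a, b

# Example: a bin [1, 2] holding 10 droplets with total mass 16
a, b = linear_density(1.0, 2.0, 10.0, 16.0)

# Verify the two moments analytically
num  = a * (2.0 - 1.0) + b * (2.0**2 - 1.0**2) / 2.0
mass = a * (2.0**2 - 1.0**2) / 2.0 + b * (2.0**3 - 1.0**3) / 3.0
print(num, mass)   # ≈ 10.0 and 16.0: both moments conserved
```

A piecewise-constant (Bleck-style) representation conserves only one moment per bin; each extra basis function allows one more moment to be matched, which is the point the abstract makes about conserving number concentration and liquid water content simultaneously.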

  2. A multiple hypotheses uncertainty analysis in hydrological modelling: about model structure, landscape parameterization, and numerical integration

    NASA Astrophysics Data System (ADS)

    Pilz, Tobias; Francke, Till; Bronstert, Axel

    2016-04-01

    Until today a large number of competing computer models have been developed to understand hydrological processes and to simulate and predict the streamflow dynamics of rivers. This is primarily the result of the lack of a unified theory in catchment hydrology, due to insufficient process understanding and uncertainties related to model development and application. Therefore, the goal of this study is to analyze the uncertainty structure of a process-based hydrological catchment model employing a multiple hypotheses approach. The study focuses on three major problems that have received only little attention in previous investigations. First, the impact of model structural uncertainty is estimated by employing several alternative representations for each simulated process. Second, the influence of landscape discretization and of parameterization from multiple datasets and user decisions is explored. Third, several numerical solvers are employed for the integration of the governing ordinary differential equations to study their effect on simulation results. The generated ensemble of model hypotheses is then analyzed and the three sources of uncertainty are compared against each other. To ensure consistency and comparability, all model structures and numerical solvers are implemented within a single simulation environment. First results suggest that the selection of a sophisticated numerical solver for the differential equations positively affects simulation outcomes. However, some simple and easy-to-implement explicit methods already perform surprisingly well and require less computational effort than more advanced but time-consuming implicit techniques. There is general evidence that ambiguous and subjective user decisions form a major source of uncertainty and can greatly influence model development and application at all stages.
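The explicit-versus-implicit trade-off described above can be illustrated on a toy linear-reservoir ODE, dS/dt = P - k*S (a hypothetical example, not the study's model): explicit Euler is cheap per step but unstable for large steps, while implicit Euler remains stable.

```python
import numpy as np

def explicit_euler(S0, P, k, h, n):
    S = S0
    for _ in range(n):
        S = S + h * (P - k * S)
    return S

def implicit_euler(S0, P, k, h, n):
    # For the linear reservoir the implicit update solves exactly:
    # S_next = S + h*(P - k*S_next)  =>  S_next = (S + h*P) / (1 + h*k)
    S = S0
    for _ in range(n):
        S = (S + h * P) / (1.0 + h * k)
    return S

k, P, S0, T = 2.0, 1.0, 10.0, 5.0
exact = P / k + (S0 - P / k) * np.exp(-k * T)   # analytic solution at t = T

# Small step: both methods track the exact solution closely
h = 0.01
n = int(T / h)
print(abs(explicit_euler(S0, P, k, h, n) - exact))  # small
print(abs(implicit_euler(S0, P, k, h, n) - exact))  # small

# Large step (h*k > 2): explicit Euler oscillates and diverges,
# implicit Euler stays near the exact value
h = 1.5
n = int(T / h)
print(explicit_euler(S0, P, k, h, n))   # -> -75.5 (unstable)
print(implicit_euler(S0, P, k, h, n))   # -> 0.6484375 (stable, near exact)
```

This is the essence of the finding that simple explicit methods can suffice when step sizes are small relative to the process time scales, while stiff components reward implicit (but more expensive) solvers.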

  3. Robust numerical method for integration of point-vortex trajectories in two dimensions.

    PubMed

    Smith, Spencer A; Boghosian, Bruce M

    2011-05-01

    The venerable two-dimensional (2D) point-vortex model plays an important role as a simplified version of many disparate physical systems, including superfluids, Bose-Einstein condensates, certain plasma configurations, and inviscid turbulence. This system is also a veritable mathematical playground, touching upon many different disciplines from topology to dynamic systems theory. Point-vortex dynamics are described by a relatively simple system of nonlinear ordinary differential equations which can easily be integrated numerically using an appropriate adaptive time stepping method. As the separation between a pair of vortices relative to all other intervortex length scales decreases, however, the computational time required diverges. Accuracy is usually the most discouraging casualty when trying to account for such vortex motion, though the varying energy of this ostensibly Hamiltonian system is a potentially more serious problem. We solve these problems by a series of coordinate transformations: We first transform to action-angle coordinates, which, to lowest order, treat the close pair as a single vortex amongst all others with an internal degree of freedom. We next, and most importantly, apply Lie transform perturbation theory to remove the higher-order correction terms in succession. The overall transformation drastically increases the numerical efficiency and ensures that the total energy remains constant to high accuracy.
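A minimal sketch of the baseline approach the paper improves upon, i.e. direct numerical integration of the point-vortex ODEs, assuming only the standard 2D Biot-Savart velocity law and a fixed-step RK4 integrator (the paper itself uses adaptive stepping plus action-angle and Lie-transform coordinates):

```python
import numpy as np

def velocities(pos, gamma):
    """2D point-vortex velocities: each vortex j induces a velocity
    gamma_j/(2*pi*r^2) * (-dy, dx) at every other vortex i."""
    vel = np.zeros_like(pos)
    for i in range(len(gamma)):
        for j in range(len(gamma)):
            if i == j:
                continue
            dx, dy = pos[i] - pos[j]
            r2 = dx * dx + dy * dy
            vel[i] += gamma[j] / (2 * np.pi * r2) * np.array([-dy, dx])
    return vel

def rk4_step(pos, gamma, h):
    k1 = velocities(pos, gamma)
    k2 = velocities(pos + 0.5 * h * k1, gamma)
    k3 = velocities(pos + 0.5 * h * k2, gamma)
    k4 = velocities(pos + h * k3, gamma)
    return pos + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def hamiltonian(pos, gamma):
    """Conserved interaction energy H = -(1/4pi) sum_{i<j} G_i G_j ln r_ij."""
    H = 0.0
    for i in range(len(gamma)):
        for j in range(i):
            r = np.linalg.norm(pos[i] - pos[j])
            H -= gamma[i] * gamma[j] * np.log(r) / (4 * np.pi)
    return H

# Two equal vortices co-rotate about their centroid at constant separation
pos = np.array([[1.0, 0.0], [-1.0, 0.0]])
gamma = np.array([1.0, 1.0])
H0 = hamiltonian(pos, gamma)
for _ in range(2000):
    pos = rk4_step(pos, gamma, 0.01)
print(abs(hamiltonian(pos, gamma) - H0))  # energy drift: small but nonzero
```

For well-separated vortices this works fine; the paper's point is that as a pair becomes close, the required step size (and hence cost) diverges and the energy drift of such non-symplectic integrators becomes the dominant error, motivating the coordinate-transformation approach.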

  4. Accurate determination of the Gibbs energy of Cu-Zr melts using the thermodynamic integration method in Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Harvey, J.-P.; Gheribi, A. E.; Chartrand, P.

    2011-08-01

    The design of multicomponent alloys used in different applications based on specific thermo-physical properties determined experimentally or predicted from theoretical calculations is of major importance in many engineering applications. A procedure based on Monte Carlo simulations (MCS) and the thermodynamic integration (TI) method to improve the quality of the predicted thermodynamic properties calculated from classical thermodynamic calculations is presented in this study. The Gibbs energy function of the liquid phase of the Cu-Zr system at 1800 K has been determined based on this approach. The internal structure of Cu-Zr melts and amorphous alloys at different temperatures, as well as other physical properties were also obtained from MCS in which the phase trajectory was modeled by the modified embedded atom model formalism. A rigorous comparison between available experimental data and simulated thermo-physical properties obtained from our MCS is presented in this work. The modified quasichemical model in the pair approximation was parameterized using the internal structure data obtained from our MCS and the precise Gibbs energy function calculated at 1800 K from the TI method. The predicted activity of copper in Cu-Zr melts at 1499 K obtained from our thermodynamic optimization was corroborated by experimental data found in the literature. The validity of the amplitude of the entropy of mixing obtained from the in silico procedure presented in this work was analyzed based on the thermodynamic description of hard sphere mixtures.
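Thermodynamic integration itself can be sketched on a toy system with a known answer: a harmonic oscillator whose spring constant is switched from k0 to k1 along a coupling parameter lambda. This is purely an illustration of the TI quadrature, not the paper's Cu-Zr MEAM Monte Carlo simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0
k0, k1 = 1.0, 4.0   # initial and final spring constants

def dU_dlambda_avg(lam, nsamples=200_000):
    """Ensemble average of dU/dlambda at coupling lam, where
    U_lam(x) = 0.5*(k0 + lam*(k1 - k0))*x^2, so dU/dlambda = 0.5*(k1-k0)*x^2.
    Here x is sampled exactly from the Boltzmann (Gaussian) distribution;
    a real application would use Metropolis Monte Carlo at this point."""
    k_lam = k0 + lam * (k1 - k0)
    x = rng.normal(0.0, np.sqrt(kT / k_lam), nsamples)
    return 0.5 * (k1 - k0) * np.mean(x * x)

# TI: Delta F = integral over lambda in [0,1] of <dU/dlambda>_lambda,
# evaluated by Gauss-Legendre quadrature
nodes, weights = np.polynomial.legendre.leggauss(8)
lams = 0.5 * (nodes + 1.0)          # map [-1, 1] -> [0, 1]
dF = 0.5 * sum(w * dU_dlambda_avg(l) for w, l in zip(weights, lams))

exact = 0.5 * kT * np.log(k1 / k0)  # analytic free-energy difference
print(dF, exact)                     # the two agree to sampling accuracy
```

The same structure, i.e. sampled ensemble averages of a coupling derivative fed into a quadrature over the switching path, underlies the Gibbs energy determination described in the abstract.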

  5. Numerous Numerals.

    ERIC Educational Resources Information Center

    Henle, James M.

    This pamphlet consists of 17 brief chapters, each containing a discussion of a numeration system and a set of problems on the use of that system. The numeration systems used include Egyptian fractions, ordinary continued fractions and variants of that method, and systems using positive and negative bases. The book is informal and addressed to…

  6. Evidence that bisphenol A (BPA) can be accurately measured without contamination in human serum and urine, and that BPA causes numerous hazards from multiple routes of exposure.

    PubMed

    vom Saal, Frederick S; Welshons, Wade V

    2014-12-01

    There is extensive evidence that bisphenol A (BPA) is related to a wide range of adverse health effects based on both human and experimental animal studies. However, a number of regulatory agencies have ignored all hazard findings. Reports of high levels of unconjugated (bioactive) serum BPA in dozens of human biomonitoring studies have also been rejected based on the prediction that the findings are due to assay contamination and that virtually all ingested BPA is rapidly converted to inactive metabolites. NIH and industry-sponsored round robin studies have demonstrated that serum BPA can be accurately assayed without contamination, while the FDA lab has acknowledged uncontrolled assay contamination. In reviewing the published BPA biomonitoring data, we find that assay contamination is, in fact, well controlled in most labs, and cannot be used as the basis for discounting evidence that significant and virtually continuous exposure to BPA must be occurring from multiple sources.

  7. Evidence that bisphenol A (BPA) can be accurately measured without contamination in human serum and urine, and that BPA causes numerous hazards from multiple routes of exposure

    PubMed Central

    vom Saal, Frederick S.; Welshons, Wade V.

    2016-01-01

    There is extensive evidence that bisphenol A (BPA) is related to a wide range of adverse health effects based on both human and experimental animal studies. However, a number of regulatory agencies have ignored all hazard findings. Reports of high levels of unconjugated (bioactive) serum BPA in dozens of human biomonitoring studies have also been rejected based on the prediction that the findings are due to assay contamination and that virtually all ingested BPA is rapidly converted to inactive metabolites. NIH and industry-sponsored round robin studies have demonstrated that serum BPA can be accurately assayed without contamination, while the FDA lab has acknowledged uncontrolled assay contamination. In reviewing the published BPA biomonitoring data, we find that assay contamination is, in fact, well controlled in most labs, and cannot be used as the basis for discounting evidence that significant and virtually continuous exposure to BPA must be occurring from multiple sources. PMID:25304273

  8. Application of variational principles and adjoint integrating factors for constructing numerical GFD models

    NASA Astrophysics Data System (ADS)

    Penenko, Vladimir; Tsvetova, Elena; Penenko, Alexey

    2015-04-01

    The proposed method is considered on an example of hydrothermodynamics and atmospheric chemistry models [1,2]. In the development of the existing methods for constructing numerical schemes possessing the properties of total approximation for operators of multiscale process models, we have developed a new variational technique, which uses the concept of adjoint integrating factors. The technique is as follows. First, a basic functional of the variational principle (the integral identity that unites the model equations, initial and boundary conditions) is transformed using Lagrange's identity and the second Green's formula. As a result, the action of the operators of main problem in the space of state functions is transferred to the adjoint operators defined in the space of sufficiently smooth adjoint functions. By the choice of adjoint functions the order of the derivatives becomes lower by one than those in the original equations. We obtain a set of new balance relationships that take into account the sources and boundary conditions. Next, we introduce the decomposition of the model domain into a set of finite volumes. For multi-dimensional non-stationary problems, this technique is applied in the framework of the variational principle and schemes of decomposition and splitting on the set of physical processes for each coordinate directions successively at each time step. For each direction within the finite volume, the analytical solutions of one-dimensional homogeneous adjoint equations are constructed. In this case, the solutions of adjoint equations serve as integrating factors. The results are the hybrid discrete-analytical schemes. They have the properties of stability, approximation and unconditional monotony for convection-diffusion operators. These schemes are discrete in time and analytic in the spatial variables. They are exact in case of piecewise-constant coefficients within the finite volume and along the coordinate lines of the grid area in each
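The key transformation step, transferring the action of the model operator to adjoint functions via Lagrange's identity and the second Green's formula, has the generic form (the notation here is illustrative, not taken from the cited works): for a model operator L acting on the state function phi and a sufficiently smooth adjoint function psi,

```latex
\int_{\Omega}\psi\,(L\varphi)\,d\Omega
  = \int_{\Omega}\varphi\,(L^{*}\psi)\,d\Omega
  + \oint_{\partial\Omega} B(\varphi,\psi)\,dS ,
```

where L* is the adjoint operator and the bilinear boundary form B collects the terms generated by integration by parts. Choosing psi appropriately lowers the derivative order on phi by one and yields the balance relationships, incorporating sources and boundary conditions, on which the hybrid discrete-analytical schemes are built.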

  9. Integrative structural annotation of de novo RNA-Seq provides an accurate reference gene set of the enormous genome of the onion (Allium cepa L.).

    PubMed

    Kim, Seungill; Kim, Myung-Shin; Kim, Yong-Min; Yeom, Seon-In; Cheong, Kyeongchae; Kim, Ki-Tae; Jeon, Jongbum; Kim, Sunggil; Kim, Do-Sun; Sohn, Seong-Han; Lee, Yong-Hwan; Choi, Doil

    2015-02-01

    The onion (Allium cepa L.) is one of the most widely cultivated and consumed vegetable crops in the world. Although a considerable amount of onion transcriptome data has been deposited into public databases, the sequences of the protein-coding genes are not accurate enough to be used, owing to non-coding sequences intermixed with the coding sequences. We generated a high-quality, annotated onion transcriptome from de novo sequence assembly and intensive structural annotation using the integrated structural gene annotation pipeline (ISGAP), which identified 54,165 protein-coding genes among 165,179 assembled transcripts totalling 203.0 Mb by eliminating the intron sequences. ISGAP performed reliable annotation, recognizing accurate gene structures based on reference proteins, and ab initio gene models of the assembled transcripts. Integrative functional annotation and gene-based SNP analysis revealed a whole biological repertoire of genes and transcriptomic variation in the onion. The method developed in this study provides a powerful tool for the construction of reference gene sets for organisms based solely on de novo transcriptome data. Furthermore, the reference genes and their variation described here for the onion represent essential tools for molecular breeding and gene cloning in Allium spp.

  10. Integrating GPS, GYRO, vehicle speed sensor, and digital map to provide accurate and real-time position in an intelligent navigation system

    NASA Astrophysics Data System (ADS)

    Li, Qingquan; Fang, Zhixiang; Li, Hanwu; Xiao, Hui

    2005-10-01

    The global positioning system (GPS) has become the most extensively used positioning and navigation tool in the world. Applications of GPS abound in surveying, mapping, transportation, agriculture, military planning, GIS, and the geosciences. However, the positional and elevation accuracy of any given GPS location is prone to error, due to a number of factors. GPS positioning applications are becoming more and more popular; in particular, intelligent navigation systems relying on GPS and dead reckoning (DR) technology are developing quickly for a large future market in China. In this paper a practical combined GPS/DR/MM positioning model is put forward, which integrates GPS, a gyroscope, a vehicle speed sensor (VSS) and digital navigation maps to provide accurate and real-time position for an intelligent navigation system. The model is designed for automotive navigation systems and makes use of a Kalman filter to improve position and map-matching accuracy by filtering raw GPS and DR signals; map-matching technology is then used to provide map coordinates for map display. To illustrate the validity of the model, several experiments on integrated GPS/DR positioning in an intelligent navigation system are presented, supporting the conclusion that a Kalman-filter-based GPS/DR integrated positioning approach is necessary, feasible and efficient for intelligent navigation applications. Like other models, this combined positioning model cannot resolve every situation. Finally, some suggestions are given for further improving the integrated GPS/DR/MM application.
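The GPS/DR fusion idea can be sketched with a 1D constant-velocity Kalman filter; the noise levels and the model below are made-up illustrative values, and the paper's actual filter and map-matching stage are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1D constant-velocity vehicle: state = [position, velocity]
dt, n_steps = 1.0, 100
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (dead reckoning)
Q = np.diag([0.01, 0.01])               # process noise (DR drift)
H = np.array([[1.0, 0.0]])              # GPS observes position only
R = np.array([[25.0]])                  # GPS noise variance (5 m sigma)

x_true = np.array([0.0, 2.0])
x_est = np.array([0.0, 0.0])
P = np.eye(2) * 10.0

gps_errs, fused_errs = [], []
for _ in range(n_steps):
    x_true = F @ x_true + rng.multivariate_normal([0.0, 0.0], Q)
    z = H @ x_true + rng.normal(0.0, 5.0, 1)   # raw GPS fix
    # Predict from the motion model (the DR role)
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    # Update with the GPS measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(2) - K @ H) @ P
    gps_errs.append(abs(z[0] - x_true[0]))
    fused_errs.append(abs(x_est[0] - x_true[0]))

print(np.mean(gps_errs), np.mean(fused_errs))  # fused error < raw GPS error
```

The filtered position is markedly less noisy than the raw GPS fixes, which is what makes the subsequent map-matching step more reliable.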

  11. Integrating metabolic performance, thermal tolerance, and plasticity enables for more accurate predictions on species vulnerability to acute and chronic effects of global warming.

    PubMed

    Magozzi, Sarah; Calosi, Piero

    2015-01-01

    Predicting species vulnerability to global warming requires a comprehensive, mechanistic understanding of sublethal and lethal thermal tolerances. To date, however, most studies investigating species physiological responses to increasing temperature have focused on the underlying physiological traits of either acute or chronic tolerance in isolation. Here we propose an integrative, synthetic approach including the investigation of multiple physiological traits (metabolic performance and thermal tolerance), and their plasticity, to provide more accurate and balanced predictions on species and assemblage vulnerability to both acute and chronic effects of global warming. We applied this approach to more accurately elucidate relative species vulnerability to warming within an assemblage of six caridean prawns occurring in the same geographic, hence macroclimatic, region, but living in different thermal habitats. Prawns were exposed to four incubation temperatures (10, 15, 20 and 25 °C) for 7 days, their metabolic rates and upper thermal limits were measured, and plasticity was calculated according to the concept of Reaction Norms, as well as Q10 for metabolism. Compared to species occupying narrower/more stable thermal niches, species inhabiting broader/more variable thermal environments (including the invasive Palaemon macrodactylus) are likely to be less vulnerable to extreme acute thermal events as a result of their higher upper thermal limits. Nevertheless, they may be at greater risk from chronic exposure to warming due to the greater metabolic costs they incur. Indeed, a trade-off between acute and chronic tolerance was apparent in the assemblage investigated. However, the invasive species P. macrodactylus represents an exception to this pattern, showing elevated thermal limits and plasticity of these limits, as well as a high metabolic control. In general, integrating multiple proxies for species physiological acute and chronic responses to increasing

  12. Integrating laboratory creep compaction data with numerical fault models: A Bayesian framework

    USGS Publications Warehouse

    Fitzenz, D.D.; Jalobeanu, A.; Hickman, S.H.

    2007-01-01

    We developed a robust Bayesian inversion scheme to plan and analyze laboratory creep compaction experiments. We chose a simple creep law that features the main parameters of interest when trying to identify rate-controlling mechanisms from experimental data. By integrating the chosen creep law or an approximation thereof, one can use all the data, either simultaneously or in overlapping subsets, thus making more complete use of the experiment data and propagating statistical variations in the data through to the final rate constants. Despite the nonlinearity of the problem, with this technique one can retrieve accurate estimates of both the stress exponent and the activation energy, even when the porosity time series data are noisy. Whereas adding observation points and/or experiments reduces the uncertainty on all parameters, enlarging the range of temperature or effective stress significantly reduces the covariance between stress exponent and activation energy. We apply this methodology to hydrothermal creep compaction data on quartz to obtain a quantitative, semiempirical law for fault zone compaction in the interseismic period. Incorporating this law into a simple direct rupture model, we find marginal distributions of the time to failure that are robust with respect to errors in the initial fault zone porosity. Copyright 2007 by the American Geophysical Union.
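The benefit of wide temperature and stress ranges for decorrelating the stress exponent and the activation energy can be illustrated with an ordinary least-squares linearization of a power-law creep model on synthetic data. All parameters and noise levels below are hypothetical; the paper itself uses a full Bayesian inversion rather than this simple fit.

```python
import numpy as np

rng = np.random.default_rng(2)
R = 8.314  # gas constant, J/(mol K)

# Synthetic compaction-rate data from a power-law creep model:
#   rate = A * sigma^n * exp(-Q / (R*T))
A_true, n_true, Q_true = 1e-3, 2.0, 50e3   # illustrative "true" parameters

sigma = rng.uniform(10.0, 100.0, 40)   # effective stress (broad range)
T = rng.uniform(500.0, 800.0, 40)      # temperature, K (broad range)
rate = A_true * sigma**n_true * np.exp(-Q_true / (R * T))
rate *= np.exp(rng.normal(0.0, 0.05, 40))   # 5% lognormal noise

# Linearize: ln(rate) = ln(A) + n*ln(sigma) - Q/(R*T), then least squares
X = np.column_stack([np.ones_like(sigma), np.log(sigma), -1.0 / (R * T)])
coef, *_ = np.linalg.lstsq(X, np.log(rate), rcond=None)
lnA_fit, n_fit, Q_fit = coef
print(n_fit, Q_fit)   # close to n_true = 2.0 and Q_true = 5e4
```

Because sigma and T are varied independently over wide ranges, the columns of the design matrix are nearly uncorrelated and n and Q are recovered with little covariance; shrinking either range couples the two estimates, which is the experimental-design point made in the abstract.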

  13. Numerical Simulation of Natural Convection of a Nanofluid in an Inclined Heated Enclosure Using Two-Phase Lattice Boltzmann Method: Accurate Effects of Thermophoresis and Brownian Forces.

    PubMed

    Ahmed, Mahmoud; Eslamian, Morteza

    2015-12-01

    Laminar natural convection in differentially heated (β = 0°, where β is the inclination angle), inclined (β = 30° and 60°), and bottom-heated (β = 90°) square enclosures filled with a nanofluid is investigated, using a two-phase lattice Boltzmann simulation approach. The effects of the inclination angle on Nu number and convection heat transfer coefficient are studied. The effects of thermophoresis and Brownian forces which create a relative drift or slip velocity between the particles and the base fluid are included in the simulation. The effect of thermophoresis is considered using an accurate and quantitative formula proposed by the authors. Some of the existing results on natural convection are erroneous due to using wrong thermophoresis models or simply ignoring the effect. Here we show that thermophoresis has a considerable effect on heat transfer augmentation in laminar natural convection. Our non-homogenous modeling approach shows that heat transfer in nanofluids is a function of the inclination angle and Ra number. It also reveals some details of flow behavior which cannot be captured by single-phase models. The minimum heat transfer rate is associated with β = 90° (bottom-heated) and the maximum heat transfer rate occurs in an inclination angle which varies with the Ra number.

  15. Integration of Genetic and Phenotypic Data in 48 Lineages of Philippine Birds Shows Heterogeneous Divergence Processes and Numerous Cryptic Species.

    PubMed

    Campbell, Kyle K; Braile, Thomas; Winker, Kevin

    2016-01-01

    The Philippine Islands are one of the most biologically diverse archipelagoes in the world. Current taxonomy, however, may underestimate levels of avian diversity and endemism in these islands. Although species limits can be difficult to determine among allopatric populations, quantitative methods for comparing phenotypic and genotypic data can provide useful metrics of divergence among populations and identify those that merit consideration for elevation to full species status. Using a conceptual approach that integrates genetic and phenotypic data, we compared populations among 48 species, estimating genetic divergence (p-distance) using the mtDNA marker ND2 and comparing plumage and morphometrics of museum study skins. Using conservative speciation thresholds, pairwise comparisons of genetic and phenotypic divergence suggested possible species-level divergences in more than half of the species studied (25 out of 48). In speciation process space, divergence routes were heterogeneous among taxa. Nearly all populations that surpassed high genotypic divergence thresholds were Passeriformes, and non-Passeriformes populations surpassed high phenotypic divergence thresholds more commonly than expected by chance. Overall, there was an apparent logarithmic increase in phenotypic divergence with respect to genetic divergence, suggesting the possibility that divergence among these lineages may initially be driven by divergent selection in this allopatric system. Also, genetic endemism was high among sampled islands. Higher taxonomy affected divergence in genotype and phenotype. Although broader lineage, genetic, phenotypic, and numeric sampling is needed to further explore heterogeneity among divergence processes and to accurately assess species-level diversity in these taxa, our results support the need for substantial taxonomic revisions among Philippine birds. The conservation implications are profound.

  16. Integration of Genetic and Phenotypic Data in 48 Lineages of Philippine Birds Shows Heterogeneous Divergence Processes and Numerous Cryptic Species

    PubMed Central

    Campbell, Kyle K.; Braile, Thomas

    2016-01-01

    The Philippine Islands are one of the most biologically diverse archipelagoes in the world. Current taxonomy, however, may underestimate levels of avian diversity and endemism in these islands. Although species limits can be difficult to determine among allopatric populations, quantitative methods for comparing phenotypic and genotypic data can provide useful metrics of divergence among populations and identify those that merit consideration for elevation to full species status. Using a conceptual approach that integrates genetic and phenotypic data, we compared populations among 48 species, estimating genetic divergence (p-distance) using the mtDNA marker ND2 and comparing plumage and morphometrics of museum study skins. Using conservative speciation thresholds, pairwise comparisons of genetic and phenotypic divergence suggested possible species-level divergences in more than half of the species studied (25 out of 48). In speciation process space, divergence routes were heterogeneous among taxa. Nearly all populations that surpassed high genotypic divergence thresholds were Passeriformes, and non-Passeriformes populations surpassed high phenotypic divergence thresholds more commonly than expected by chance. Overall, there was an apparent logarithmic increase in phenotypic divergence with respect to genetic divergence, suggesting the possibility that divergence among these lineages may initially be driven by divergent selection in this allopatric system. Also, genetic endemism was high among sampled islands. Higher taxonomy affected divergence in genotype and phenotype. Although broader lineage, genetic, phenotypic, and numeric sampling is needed to further explore heterogeneity among divergence processes and to accurately assess species-level diversity in these taxa, our results support the need for substantial taxonomic revisions among Philippine birds. The conservation implications are profound. PMID:27442510

  18. Science-Based Approach for Advancing Marine and Hydrokinetic Energy: Integrating Numerical Simulations with Experiments

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, F.; Kang, S.; Chamorro, L. P.; Hill, C.

    2011-12-01

    The field of MHK energy is still in its infancy, lagging approximately a decade or more behind the technology and development progress made in wind energy engineering. Marine environments are characterized by complex topography and three-dimensional (3D) turbulent flows, which can greatly affect the performance and structural integrity of MHK devices and impact the Levelized Cost of Energy (LCoE). Since the deployment of multi-turbine arrays is envisioned for field applications, turbine-to-turbine interactions and turbine-bathymetry interactions need to be understood and properly modeled so that MHK arrays can be optimized on a site-specific basis. Furthermore, turbulence induced by MHK turbines alters and interacts with the nearby ecosystem and could potentially impact aquatic habitats. Increased turbulence in the wake of MHK devices can also change the shear stress imposed on the bed, ultimately affecting the sediment transport and suspension processes in the wake of these structures. Such effects, however, remain today largely unexplored. In this work a science-based approach integrating state-of-the-art experimentation with high-resolution computational fluid dynamics is proposed as a powerful strategy for optimizing the performance of MHK devices and assessing environmental impacts. A novel numerical framework is developed for carrying out Large-Eddy Simulation (LES) in arbitrarily complex domains with embedded MHK devices. The model is able to resolve the geometrical complexity of real-life MHK devices using the Curvilinear Immersed Boundary (CURVIB) method along with a wall model for handling the flow near solid surfaces. Calculations are carried out for an axial flow hydrokinetic turbine mounted on the bed of a rectangular open channel on a grid with nearly 200 million grid nodes. The approach flow corresponds to fully developed turbulent open channel flow and is obtained from a separate LES calculation. The specific case corresponds to that studied

  19. Predicting geomorphic evolution through integration of numerical-model scenarios and topographic/bathymetric-survey updates

    NASA Astrophysics Data System (ADS)

    Plant, N. G.; Long, J.; Dalyander, S.; Thompson, D.; Miselis, J. L.

    2013-12-01

    Natural resource and hazard management of barrier islands requires an understanding of geomorphic changes associated with long-term processes and storms. Uncertainty exists in understanding how long-term processes interact with the geomorphic changes caused by storms and the resulting perturbations of the long-term evolution trajectories. We use high-resolution data sets to initialize and correct high-fidelity numerical simulations of oceanographic forcing and resulting barrier island evolution. We simulate two years of observed storms to determine the individual and cumulative impacts of these events. Results are separated into cross-shore and alongshore components of sediment transport and compared with observed topographic and bathymetric changes during these time periods. The discrete island change induced by these storms is integrated with previous knowledge of long-term net alongshore sediment transport to project island evolution. The approach has been developed and tested using data collected at the Chandeleur Island chain off the coast of Louisiana (USA). The simulation time period included impacts from tropical and winter storms, as well as a human-induced perturbation associated with construction of a sand berm along the island shoreline. The predictions and observations indicated that storm and long-term processes both contribute to the migration, lowering, and disintegration of the artificial berm and natural island. Further analysis will determine the relative importance of cross-shore and alongshore sediment transport processes and the dominant time scales that drive each of these processes and subsequent island morphologic response.

  20. An integrated meso-scale numerical model of melting and solidification in laser welding

    NASA Astrophysics Data System (ADS)

    Duggan, G.; Tong, M.; Browne, D. J.

    2012-01-01

    The authors present an integrated numerical model for the simulation of laser spot welding of an aluminium alloy at meso-scale in 2D. This model deals with the melting of the parent materials which form the weld pool and the subsequent solidification of the liquid metal in the pool, during the welding process. The melting of the parent materials due to the applied heating power is an important phenomenon, which determines the conditions at the onset of solidification, such as the geometry of the weld pool and the distribution of the temperature field. An enthalpy method is employed to predict the melting during the heating phase of welding. A Gaussian distribution is used to model the heat input from the laser. Once the laser beam is switched off and the melting halts, solidification commences. The UCD front tracking model [1,2] for alloy solidification is applied to predict the advancement of the columnar dendritic front, and a volume-averaging formulation is used to simulate nucleation and growth of equiaxed dendrites. A mechanical blocking criterion is used to define dendrite coherency, and the columnar-to-equiaxed transition within the weld pool is predicted.
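
    The enthalpy approach mentioned above can be sketched in one dimension: each cell carries a volumetric enthalpy H, temperature is recovered through the latent-heat plateau, and the melt front therefore needs no explicit tracking. A minimal explicit sketch with rough aluminium-like values; the lumped surface flux is a toy stand-in for the Gaussian laser source, and vaporization is ignored:

```python
import numpy as np

# 1-D explicit enthalpy method for melting under a constant surface heat flux.
n, dx, dt = 50, 1.0e-3, 1.0e-3          # cells, cell size [m], time step [s]
rho, cp, kth = 2700.0, 900.0, 200.0     # density, specific heat, conductivity
lat, tm = 4.0e5, 933.0                  # latent heat [J/kg], melting temp [K]
q_surf = 2.0e8                          # surface heat flux [W/m^2] (toy value)

def temperature(H):
    """Recover temperature from volumetric enthalpy through the latent plateau."""
    hs = rho * cp * tm                  # enthalpy at onset of melting
    hl = hs + rho * lat                 # enthalpy when fully molten
    return np.where(H < hs, H / (rho * cp),                    # solid
           np.where(H > hl, (H - rho * lat) / (rho * cp),      # liquid
                    tm))                                       # mushy: T = tm

H = np.full(n, rho * cp * 300.0)        # uniform 300 K initial state
for _ in range(200):
    T = temperature(H)
    # Interior cells: explicit conduction update applied to the enthalpy.
    H[1:-1] += dt * kth * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    # Heated surface cell: conduction into the bar plus the applied flux.
    H[0] += dt * (kth * (T[1] - T[0]) / dx**2 + q_surf / dx)
```

    Cells whose enthalpy lies between the solidus and liquidus values sit at the melting temperature while absorbing latent heat, which is what fixes the weld-pool geometry and temperature field at the onset of solidification.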

  1. Analysis of Numerical Mesoscale Model Data for Wind Integration Studies in the United States

    NASA Astrophysics Data System (ADS)

    Elliott, D.; Schwartz, M. N.; Lew, D.; Corbus, D.; Scott, G.; Haymes, S.; Wan, Y.

    2009-12-01

    The Western Wind and Solar Integration Study (WWSIS) and the Eastern Wind Integration and Transmission Study (EWITS) are enhancing energy security by defining operating impacts due to large penetrations of renewable energy. The backbones of these studies are the large and consistent wind speed and power production data sets valid at 80 m and/or 100 m above ground derived from numerical mesoscale models for the years 2004-2006 and aggregated into wind power plants. The horizontal and temporal resolution of the data is 2 km and 10 minutes, respectively. The WWSIS data set was produced by 3TIER and the EWITS data set was produced by AWS Truewind under contract to the National Renewable Energy Laboratory (NREL). These data sets, which are available at http://www.nrel.gov/wind/integrationdatasets/, were designed for spatial and temporal comparison of sites for geographic diversity and load correlation and to provide estimates of power production from hypothetical wind plants. These data sets do not depict all possible wind plant sites nor should the data be used as the sole basis of project investment. NREL has performed a quality control check on the annual wind speed and power parameters and will conduct a detailed validation of the seasonal, diurnal, and geographic distribution patterns of the model data. The purposes of the analysis are to identify any anomalies in the data, to assess the regional accuracy of the data, and if warranted, to modify the data sets. One conclusion from the quality control exercise is that there are many details such as spatial and temporal discontinuities in the model output produced during post simulation processing that need to be examined in addition to the overall accuracy of the data. In this paper, we will present the results of the analysis of the mesoscale model data used for the Western and Eastern United States integration studies. 
We will discuss the validation of the data sets, including comparisons with validated wind maps
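
    A temporal-discontinuity screen of the kind described can be sketched for a single 10-minute wind-speed series; the jump threshold and stuck-run length below are illustrative, not NREL's actual QC criteria:

```python
import numpy as np

# Flag jump discontinuities and stuck-sensor runs in a 10-minute series.
def qc_flags(speed, max_jump=10.0, stuck_run=6):
    speed = np.asarray(speed, dtype=float)
    jump = np.zeros(speed.size, dtype=bool)
    jump[1:] = np.abs(np.diff(speed)) > max_jump      # step-change flag
    stuck = np.zeros(speed.size, dtype=bool)
    run = 1
    for i in range(1, speed.size):
        run = run + 1 if speed[i] == speed[i - 1] else 1
        if run >= stuck_run:                          # constant-value run flag
            stuck[i - run + 1:i + 1] = True
    return jump, stuck
```

    Flags like these localize the spatial and temporal discontinuities introduced during post-simulation processing without altering the underlying data.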

  2. Exactification of the Poincaré asymptotic expansion of the Hankel integral: spectacularly accurate asymptotic expansions and non-asymptotic scales

    PubMed Central

    Galapon, Eric A.; Martinez, Kay Marie L.

    2014-01-01

    We obtain an exactification of the Poincaré asymptotic expansion (PAE) of the Hankel integral, as b → ∞, using the distributional approach of McClure & Wong. We find that, for half-integer orders of the Bessel function, the exactified asymptotic series terminates, so that it gives an exact finite sum representation of the Hankel integral. For other orders, the asymptotic series does not terminate and is generally divergent, but is amenable to superasymptotic summation, i.e. by optimal truncation. For specific examples, we compare the accuracy of the optimally truncated asymptotic series owing to the McClure–Wong distributional method with that owing to the Mellin–Barnes integral method. We find that the former is spectacularly more accurate than the latter, by, in some cases, more than 70 orders of magnitude for the same moderate value of b. Moreover, the exactification can lead to a resummation of the PAE when it is exact, with the resummed Poincaré series exhibiting again the same spectacular accuracy. More importantly, the distributional method may yield meaningful resummations that involve scales that are not asymptotic sequences. PMID:24511252
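
    Optimal (superasymptotic) truncation can be illustrated on a simpler model integral: the Stieltjes function S(x) = int_0^inf e^(-t)/(1 + x*t) dt has the divergent expansion sum_n (-1)^n n! x^n, whose partial sums are most accurate when cut off near the smallest term, around n ~ 1/x. A sketch (the example function is illustrative, not the paper's Hankel integral):

```python
import math
import numpy as np

# "Exact" value of the Stieltjes function S(x) = int_0^inf e^(-t)/(1 + x*t) dt,
# computed with 80-point Gauss-Laguerre quadrature.
def stieltjes(x, nodes=80):
    t, w = np.polynomial.laguerre.laggauss(nodes)
    return float(np.sum(w / (1.0 + x * t)))

# Partial sum of the divergent asymptotic series S(x) ~ sum_n (-1)^n n! x^n.
def asymptotic_partial(x, n_terms):
    return sum((-1) ** n * math.factorial(n) * x ** n for n in range(n_terms))

x = 0.1
exact = stieltjes(x)
best = asymptotic_partial(x, 10)    # truncate near the smallest term, n ~ 1/x
worse = asymptotic_partial(x, 31)   # keep going and the divergence takes over
```

    For x = 0.1 the optimally truncated sum agrees with the quadrature value to roughly the size of the first omitted term, 10!·x^10 ≈ 4e-4, while the 31-term sum is off by orders of magnitude.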

  3. Sull'Integrazione delle Strutture Numeriche nella Scuola dell'Obbligo (Integrating Numerical Structures in Mandatory School).

    ERIC Educational Resources Information Center

    Bonotto, C.

    1995-01-01

    Attempted to verify knowledge regarding decimal and rational numbers in children ages 10-14. Discusses how pupils can receive and assimilate extensions of the number system from natural numbers to decimals and fractions and later can integrate this extension into a single and coherent numerical structure. (Author/MKR)

  4. Numerical methods for the simulation of complex multi-body flows with applications for the integrated Space Shuttle vehicle

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    1992-01-01

    The following papers are presented: (1) numerical methods for the simulation of complex multi-body flows with applications for the Integrated Space Shuttle vehicle; (2) a generalized scheme for 3-D hyperbolic grid generation; (3) collar grids for intersecting geometric components within the Chimera overlapped grid scheme; and (4) application of the Chimera overlapped grid scheme to simulation of Space Shuttle ascent flows.

  5. RLM-RACE, PPM-RACE, and qRT-PCR: an integrated strategy to accurately validate miRNA target genes.

    PubMed

    Wang, Chen; Fang, Jinggui

    2015-01-01

    MicroRNAs (miRNAs) are important regulators involved in most biological processes in eukaryotes. They play critical roles in growth, development, signal transduction, or stress response by controlling gene expression at the posttranscriptional level. Identification and characterization of miRNA-targeted mRNAs is essential for the analysis of miRNA functions. In plants, the perfect complementarity between most miRNAs and their targets enables accurate prediction of their targets, while slicing of the targeted mRNAs facilitates target validation through the RNA Ligase-Mediated (RLM)-Rapid Amplification of cDNA Ends (RACE) method. However, this method only determines the 5'-end of the cleavage product. To more accurately validate the predicted target genes of miRNAs and exactly determine the cleavage sites within the targets, an integrated strategy comprising RLM-RACE, Poly(A) Polymerase-Mediated (PPM)-RACE, and qRT-PCR was developed. The efficiency of this method is illustrated by the precise sequence validation of predicted target mRNAs of miRNAs in grapevine, citrus, peach, and other fruit crops. Our on-going research indicates that RLM-RACE, PPM-RACE, and qRT-PCR are very effective in the verification of sequences of miRNA targets obtained by degradome sequencing. The protocol for RLM-RACE, PPM-RACE, and qRT-PCR is rapid, effective, cheap, and can be completed within 2-3 days.

  6. Building a Framework Earthquake Cycle Deformational Model for Subduction Megathrust Zones: Integrating Observations with Numerical Models

    NASA Astrophysics Data System (ADS)

    Furlong, Kevin P.; Govers, Rob; Herman, Matthew

    2016-04-01

    last for decades after a major event (e.g., Alaska 1964). We have integrated the observed patterns of upper-plate displacements (and deformation) with models of subduction zone evolution that allow us to incorporate both the transient behavior associated with post-earthquake viscous re-equilibration and the underlying long term, relatively constant elastic strain accumulation. Modeling the earthquake cycle through the use of a visco-elastic numerical model over numerous earthquake cycles, we have developed a framework model for the megathrust cycle that is constrained by observations made at a variety of plate boundary zones at different stages in their earthquake cycle (see paper by Govers et al., this meeting). Our results indicate that the observed patterns of co-, post- and inter-seismic deformation are largely controlled by interplay between elastic and viscous processes. Observed displacements represent the competition between steady elastic-strain accumulation driven by plate boundary coupling, and post-earthquake viscous behavior in response to the coseismic loading of the system by the rapid elastic rebound. The application of this framework model to observations from subduction zone observatories points up the dangers of simply extrapolating current deformation observations to the overall strain accumulation state of the subduction zone, and allows us to develop improved assessments of the slip deficit accumulating within the seismogenic zone, and the near-future earthquake potential of different segments of the subduction plate boundary.
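
    The competition between steady elastic loading and postseismic viscous relaxation described above can be caricatured with a zero-dimensional toy history (all parameters below are hypothetical, not taken from the framework model):

```python
import math

# Toy displacement history at an upper-plate site: steady interseismic loading
# plus a coseismic offset at t = 0 followed by Maxwell-type viscous relaxation.
def displacement(t, v_load=0.02, u_co=1.0, a_post=0.3, tau=20.0):
    u = v_load * t                      # steady elastic-strain accumulation
    if t >= 0.0:                        # earthquake occurs at t = 0
        u += u_co + a_post * (1.0 - math.exp(-t / tau))
    return u
```

    Shortly after the event the decaying viscous transient dominates the displacement rate, so extrapolating a present-day velocity as if it were the steady loading rate can misestimate the accumulating slip deficit, which is the danger the abstract points out.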

  7. Numerical evaluation of multi-loop integrals for arbitrary kinematics with SecDec 2.0

    NASA Astrophysics Data System (ADS)

    Borowka, Sophia; Carter, Jonathon; Heinrich, Gudrun

    2013-02-01

    We present the program SecDec 2.0, which contains various new features. First, it allows the numerical evaluation of multi-loop integrals with no restriction on the kinematics. Dimensionally regulated ultraviolet and infrared singularities are isolated via sector decomposition, while threshold singularities are handled by a deformation of the integration contour in the complex plane. As an application, we present numerical results for various massive two-loop four-point diagrams. SecDec 2.0 also contains new useful features for the calculation of more general parameter integrals, related for example to phase space integrals. Program summary: Program title: SecDec 2.0. Catalogue identifier: AEIR_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIR_v2_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 156829. No. of bytes in distributed program, including test data, etc.: 2137907. Distribution format: tar.gz. Programming language: Wolfram Mathematica, Perl, Fortran/C++. Computer: From a single PC to a cluster, depending on the problem. Operating system: Unix, Linux. RAM: Depending on the complexity of the problem. Classification: 4.4, 5, 11.1. Catalogue identifier of previous version: AEIR_v1_0. Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 1566. Does the new version supersede the previous version?: Yes. Nature of problem: Extraction of ultraviolet and infrared singularities from parametric integrals appearing in higher order perturbative calculations in gauge theories. Numerical integration in the presence of integrable singularities (e.g., kinematic thresholds). Solution method: Algebraic extraction of singularities in dimensional regularization using iterated sector decomposition. This leads to a Laurent series in the dimensional regularization

  8. Integrated field and numerical modeling investigation of crustal flow mechanisms and trajectories in migmatite domes

    NASA Astrophysics Data System (ADS)

    Whitney, Donna; Teyssier, Christian; Rey, Patrice

    2016-04-01

    Integrated field-based and modeling studies provide information about the driving mechanisms and internal dynamics of migmatite domes, which are important structures for understanding the rheology of the lithosphere in orogens. Dome-forming processes range from extension (isostasy) driven flow to density (buoyancy) driven systems. Vertical flow (up or down) is on the scale of tens of km. End-member buoyancy-driven domes are typically Archean (e.g., Pilbara, Australia). Extension-driven systems include the migmatite domes in metamorphic core complexes of the northern North American Cordillera, as well as some domes in Variscan core complexes. The Entia dome of central Australia is a possible hybrid dome in which extension and density inversion were both involved in dome formation. The Entia is a "double dome", comprised of a steep high-strain zone bordered by high melt-fraction migmatite (subdomes). Field and numerical modeling studies show that these are characteristics of extension-driven domes, which form when flowing deep crust ascends beneath normal faults in the upper crust. Entia dome migmatite shows abundant evidence for extension, in addition to sequences of cascading, cuspate folds (well displayed in amphibolite) that are not present in the carapace of the dome, that do not have a consistent axial planar fabric, and that developed primarily at subsolidus conditions. We propose that these folds developed in mafic layers that had a density contrast with granodioritic migmatite, and that they formed during sinking of a denser layer above the rising migmatite subdomes. Extension-driven flow of partially molten (granodioritic) crust was therefore accompanied by sinking of a dense, mafic, mid-crustal layer, resulting in complex P-T-d paths of different lithologic units within the dome. 
This scenario is consistent with field and 2D modeling results, which together show how a combination of structural geology, metamorphic petrology, and modeling can illuminate the

  9. Evaluation of approximate relations for Delta /Q/ using a numerical solution of the Boltzmann equation. [collision integral]

    NASA Technical Reports Server (NTRS)

    Nathenson, M.; Baganoff, D.; Yen, S. M.

    1974-01-01

    Data obtained from a numerical solution of the Boltzmann equation for shock-wave structure are used to test the accuracy of accepted approximate expressions for the two moments of the collision integral Delta (Q) for general intermolecular potentials in systems with a large translational nonequilibrium. The accuracy of the numerical scheme is established by comparison of the numerical results with exact expressions in the case of Maxwell molecules. They are then used in the case of hard-sphere molecules, which are the furthest-removed inverse power potential from the Maxwell molecule; and the accuracy of the approximate expressions in this domain is gauged. A number of approximate solutions are judged in this manner, and the general advantages of the numerical approach in itself are considered.

  10. BWR Full Integral Simulation Test (FIST) Program. TRAC-BWR model development. Volume 1. Numerical methods

    SciTech Connect

    Heck, C.L.; Andersen, J.G.M.

    1985-11-01

    A complete technical basis for implementation of the 3-D fast numerics in TRACB04 is presented. The 3-D fast numerics is a generalization of the predictor/corrector method previously developed for the 1-D components in TRACB. 20 figs.
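
    The predictor/corrector idea can be sketched in its simplest scalar form, a forward-Euler predictor followed by a trapezoidal corrector (Heun's method); TRACB's 3-D fast numerics is a specialized multi-dimensional generalization of this pattern, not reproduced here:

```python
import math

# One predictor/corrector step for y' = f(t, y): forward-Euler predictor,
# trapezoidal (Heun) corrector.
def heun_step(f, t, y, dt):
    y_pred = y + dt * f(t, y)                            # predictor
    return y + 0.5 * dt * (f(t, y) + f(t + dt, y_pred))  # corrector

# Example: y' = -y, y(0) = 1, integrated to t = 1.
f = lambda t, y: -y
y, dt = 1.0, 0.01
for i in range(100):
    y = heun_step(f, i * dt, y, dt)
```

    The corrector reuses the predicted state to average the slopes at both ends of the step, raising the accuracy from first to second order at the cost of one extra evaluation.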

  11. SINDA'85/FLUINT - SYSTEMS IMPROVED NUMERICAL DIFFERENCING ANALYZER AND FLUID INTEGRATOR (CONVEX VERSION)

    NASA Technical Reports Server (NTRS)

    Cullimore, B.

    1994-01-01

    SINDA, the Systems Improved Numerical Differencing Analyzer, is a software system for solving lumped parameter representations of physical problems governed by diffusion-type equations. SINDA was originally designed for analyzing thermal systems represented in electrical analog, lumped parameter form, although its use may be extended to include other classes of physical systems which can be modeled in this form. As a thermal analyzer, SINDA can handle such interrelated phenomena as sublimation, diffuse radiation within enclosures, transport delay effects, and sensitivity analysis. FLUINT, the FLUid INTegrator, is an advanced one-dimensional fluid analysis program that solves arbitrary fluid flow networks. The working fluids can be single phase vapor, single phase liquid, or two phase. The SINDA'85/FLUINT system permits the mutual influences of thermal and fluid problems to be analyzed. The SINDA system consists of a programming language, a preprocessor, and a subroutine library. The SINDA language is designed for working with lumped parameter representations and finite difference solution techniques. The preprocessor accepts programs written in the SINDA language and converts them into standard FORTRAN. The SINDA library consists of a large number of FORTRAN subroutines that perform a variety of commonly needed actions. The use of these subroutines can greatly reduce the programming effort required to solve many problems. A complete run of a SINDA'85/FLUINT model is a four step process. First, the user's desired model is run through the preprocessor which writes out data files for the processor to read and translates the user's program code. Second, the translated code is compiled. The third step requires linking the user's code with the processor library. Finally, the processor is executed. SINDA'85/FLUINT program features include 20,000 nodes, 100,000 conductors, 100 thermal submodels, and 10 fluid submodels. SINDA'85/FLUINT can also model two phase flow

  12. Brain Structural Integrity and Intrinsic Functional Connectivity Forecast 6 Year Longitudinal Growth in Children's Numerical Abilities

    PubMed Central

    Kochalka, John; Ngoon, Tricia J.; Wu, Sarah S.; Qin, Shaozheng; Battista, Christian

    2015-01-01

    Early numerical proficiency lays the foundation for acquiring quantitative skills essential in today's technological society. Identification of cognitive and brain markers associated with long-term growth of children's basic numerical computation abilities is therefore of utmost importance. Previous attempts to relate brain structure and function to numerical competency have focused on behavioral measures from a single time point. Thus, little is known about the brain predictors of individual differences in growth trajectories of numerical abilities. Using a longitudinal design, with multimodal imaging and machine-learning algorithms, we investigated whether brain structure and intrinsic connectivity in early childhood are predictive of 6 year outcomes in numerical abilities spanning childhood and adolescence. Gray matter volume at age 8 in distributed brain regions, including the ventrotemporal occipital cortex (VTOC), the posterior parietal cortex, and the prefrontal cortex, predicted longitudinal gains in numerical, but not reading, abilities. Remarkably, intrinsic connectivity analysis revealed that the strength of functional coupling among these regions also predicted gains in numerical abilities, providing novel evidence for a network of brain regions that works in concert to promote numerical skill acquisition. VTOC connectivity with posterior parietal, anterior temporal, and dorsolateral prefrontal cortices emerged as the most extensive network predicting individual gains in numerical abilities. Crucially, behavioral measures of mathematics, IQ, working memory, and reading did not predict children's gains in numerical abilities. Our study identifies, for the first time, functional circuits in the human brain that scaffold the development of numerical skills, and highlights potential biomarkers for identifying children at risk for learning difficulties. 
SIGNIFICANCE STATEMENT Children show substantial individual differences in math abilities and ease of math

  13. Assessing the bio-mitigation effect of integrated multi-trophic aquaculture on marine environment by a numerical approach.

    PubMed

    Zhang, Junbo; Kitazawa, Daisuke

    2016-09-15

    With increasing concern over the aquatic environment in marine culture, the integrated multi-trophic aquaculture (IMTA) has received extensive attention in recent years. A three-dimensional numerical ocean model is developed to explore the negative impacts of aquaculture wastes and assess the bio-mitigation effect of IMTA systems on marine environments. Numerical results showed that the concentration of surface phytoplankton could be controlled by planting seaweed (a maximum reduction of 30%), and the percentage change in the improvement of bottom dissolved oxygen concentration increased to 35% at maximum due to the ingestion of organic wastes by sea cucumbers. Numerical simulations indicate that seaweeds need to be harvested in a timely manner for maximal absorption of nutrients, and the initial stocking density of sea cucumbers >3.9 individuals m(-2) is preferred to further eliminate the organic wastes sinking down to the sea bottom. PMID:27368928

  14. Numerical integral methods to study plasmonic modes in a photonic crystal waveguide with circular inclusions that involve a metamaterial

    NASA Astrophysics Data System (ADS)

    Mendoza-Suárez, A.; Pérez-Aguilar, H.

    2016-09-01

    We present several numerical integral methods for the study of a photonic crystal waveguide, formed by two parallel conducting plates and an array of circular inclusions involving a conducting material and a metamaterial. Band structures and reflectance were calculated, for infinite and finite photonic crystal waveguides, respectively. The numerical results obtained show that the numerical methods applied provide good accuracy and efficiency. An interesting detail that resulted from this study was the appearance of a propagating mode in a band gap due to defects in the middle of the photonic crystal waveguide. This is equivalent to doping a semiconductor to introduce allowed energy states within a band gap. Our main interest in this work is to model photonic crystal waveguides that involve left-handed materials (LHMs). For the specific LHM considered, a surface plasmon mode on the vacuum-LHM interface was found.

  16. On time discretizations for spectral methods. [numerical integration of Fourier and Chebyshev methods for dynamic partial differential equations]

    NASA Technical Reports Server (NTRS)

    Gottlieb, D.; Turkel, E.

    1980-01-01

    New methods are introduced for the time integration of the Fourier and Chebyshev methods of solution for dynamic differential equations. These methods are unconditionally stable, even though no matrix inversions are required. Time steps are chosen by accuracy requirements alone. For the Fourier method both leapfrog and Runge-Kutta methods are considered. For the Chebyshev method only Runge-Kutta schemes are tested. Numerical calculations are presented to verify the analytic results. Applications to the shallow water equations are presented.
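
    As a concrete pairing of a Fourier spectral discretization with a Runge-Kutta time integrator, consider periodic advection u_t + c u_x = 0; classical RK4 is used here purely for illustration (the paper analyzes leapfrog and Runge-Kutta variants, including unconditionally stable ones):

```python
import numpy as np

# Fourier spectral discretization of u_t + c u_x = 0 on [0, 2*pi), advanced
# in time with classical RK4.
N, c, dt = 64, 1.0, 1.0e-3
x = 2.0 * np.pi * np.arange(N) / N
ik = 1j * np.fft.fftfreq(N, d=1.0 / N)      # spectral derivative factors i*k

def rhs(u):
    return -c * np.real(np.fft.ifft(ik * np.fft.fft(u)))

def rk4_step(u, dt):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

u = np.sin(x)
for _ in range(1000):                       # advance to t = 1
    u = rk4_step(u, dt)
# exact solution is the initial profile advected: sin(x - c*t)
```

    The spatial derivative is exact for resolved modes, so the remaining error is entirely that of the time discretization, which is the trade-off the paper's analysis addresses.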

  17. An integrated approach for non-periodic dynamic response prediction of complex structures: Numerical and experimental analysis

    NASA Astrophysics Data System (ADS)

    Rahneshin, Vahid; Chierichetti, Maria

    2016-09-01

    In this paper, a combined numerical and experimental method, called Extended Load Confluence Algorithm, is presented to accurately predict the dynamic response of non-periodic structures when little or no information about the applied loads is available. This approach, which falls into the category of Shape Sensing methods, inputs limited experimental information acquired from sensors to a mapping algorithm that predicts the response at unmeasured locations. The proposed algorithm consists of three major cores: an experimental core for data acquisition, a numerical core based on Finite Element Method for modeling the structure, and a mapping algorithm that improves the numerical model based on a modal approach in the frequency domain. The robustness and precision of the proposed algorithm are verified through numerical and experimental examples. The results of this paper demonstrate that without a precise knowledge of the loads acting on the structure, the dynamic behavior of the system can be predicted in an effective and precise manner after just a few iterations.
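
    The mapping step of such a shape-sensing approach can be illustrated with a minimal static sketch: responses measured at a few sensor locations are least-squares fit to a truncated mode-shape basis, and the fitted modal coordinates predict the response at unmeasured locations. The mode shapes, sensor indices, and modal amplitudes below are invented for illustration:

```python
import numpy as np

# Truncated mode-shape basis on a 1-D structure (assumed sine modes).
n_dof, n_modes = 50, 3
xs = np.linspace(0.0, 1.0, n_dof)
Phi = np.column_stack([np.sin((m + 1) * np.pi * xs) for m in range(n_modes)])

q_true = np.array([1.0, -0.4, 0.2])      # "unknown" modal coordinates
u_full = Phi @ q_true                    # true response at every DOF

sensors = [5, 17, 33, 44]                # the few measured locations
# Least-squares fit of modal coordinates to the sparse measurements.
q_fit, *_ = np.linalg.lstsq(Phi[sensors], u_full[sensors], rcond=None)
u_est = Phi @ q_fit                      # predicted response at all DOFs
```

    With noise-free data and more sensors than retained modes the fit is exact; in practice the modal basis comes from the finite element model and the fit is repeated per frequency line, which is where the iterative improvement described above enters.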

  18. Multiple piezo-patch energy harvesters integrated to a thin plate with AC-DC conversion: analytical modeling and numerical validation

    NASA Astrophysics Data System (ADS)

    Aghakhani, Amirreza; Basdogan, Ipek; Erturk, Alper

    2016-04-01

    Plate-like components are widely used in numerous automotive, marine, and aerospace applications where they can be employed as host structures for vibration based energy harvesting. Piezoelectric patch harvesters can be easily attached to these structures to convert the vibrational energy to the electrical energy. Power output investigations of these harvesters require accurate models for energy harvesting performance evaluation and optimization. Equivalent circuit modeling of the cantilever-based vibration energy harvesters for estimation of electrical response has been proposed in recent years. However, equivalent circuit formulation and analytical modeling of multiple piezo-patch energy harvesters integrated to thin plates including nonlinear circuits has not been studied. In this study, an equivalent circuit model of multiple parallel piezoelectric patch harvesters together with a resistive load is built in the electronic circuit simulation software SPICE, and voltage frequency response functions (FRFs) are validated using the analytical distributed-parameter model. Analytical formulation of the piezoelectric patches in parallel configuration for the DC voltage output is derived while the patches are connected to a standard AC-DC circuit. The analytic model is based on the equivalent load impedance approach for piezoelectric capacitance and AC-DC circuit elements. The analytic results are validated numerically via SPICE simulations. Finally, DC power outputs of the harvesters are computed and compared with the peak power amplitudes in the AC output case.
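
    For a single patch feeding a purely resistive load (no rectifier), the equivalent-load-impedance idea reduces to a current source in parallel with the piezo capacitance Cp and the load resistor R; a sketch with assumed component values, not those of the study:

```python
import math

# Load seen by the patch: resistor R in parallel with piezo capacitance Cp.
def load_impedance(omega, r_load=1.0e5, cp=50.0e-9):
    return r_load / (1.0 + 1j * omega * r_load * cp)

# Voltage per unit source current is just this impedance: flat (= R) well
# below the RC corner at omega = 1/(R*Cp) = 200 rad/s, rolling off above it.
v_low = abs(load_impedance(1.0))
v_corner = abs(load_impedance(200.0))
```

    Replacing R with the rectifier's equivalent impedance at the excitation frequency is the essence of the equivalent-load-impedance approach for the AC-DC case.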

  19. Integration Preferences of Wildtype AAV-2 for Consensus Rep-Binding Sites at Numerous Loci in the Human Genome

    PubMed Central

    Hüser, Daniela; Gogol-Döring, Andreas; Lutter, Timo; Weger, Stefan; Winter, Kerstin; Hammer, Eva-Maria; Cathomen, Toni; Reinert, Knut; Heilbronn, Regine

    2010-01-01

    Adeno-associated virus type 2 (AAV) is known to establish latency by preferential integration in human chromosome 19q13.42. The AAV non-structural protein Rep appears to target a site called AAVS1 by simultaneously binding to Rep-binding sites (RBS) present on the AAV genome and within AAVS1. In the absence of Rep, as is the case with AAV vectors, chromosomal integration is rare and random. For a genome-wide survey of wildtype AAV integration a linker-selection-mediated (LSM)-PCR strategy was designed to retrieve AAV-chromosomal junctions. DNA sequence determination revealed wildtype AAV integration sites scattered over the entire human genome. The bioinformatic analysis of these integration sites compared to those of rep-deficient AAV vectors revealed a highly significant overrepresentation of integration events near to consensus RBS. Integration hotspots included AAVS1 with 10% of total events. Novel hotspots near consensus RBS were identified on chromosome 5p13.3 denoted AAVS2 and on chromosome 3p24.3 denoted AAVS3. AAVS2 displayed seven independent junctions clustered within only 14 bp of a consensus RBS which proved to bind Rep in vitro similar to the RBS in AAVS3. Expression of Rep in the presence of rep-deficient AAV vectors shifted targeting preferences from random integration back to the neighbourhood of consensus RBS at hotspots and numerous additional sites in the human genome. In summary, targeted AAV integration is not as specific for AAVS1 as previously assumed. Rather, Rep targets AAV to integrate into open chromatin regions in the reach of various, consensus RBS homologues in the human genome. PMID:20628575
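
    A genome-wide screen for consensus RBS homologues can be sketched as a strand-aware tandem-repeat scan; the motif GAGC and the repeat count of three below are illustrative stand-ins for the actual consensus used in the study:

```python
import re

# Scan both strands of a sequence for >= min_repeats tandem copies of an
# RBS-like tetramer; returns (position, strand) pairs.
def find_rbs(seq, motif="GAGC", min_repeats=3):
    pattern = re.compile(f"(?:{motif}){{{min_repeats},}}")
    rev_comp = seq[::-1].translate(str.maketrans("ACGT", "TGCA"))
    hits = [(m.start(), "+") for m in pattern.finditer(seq)]
    hits += [(len(seq) - m.end(), "-") for m in pattern.finditer(rev_comp)]
    return sorted(hits)
```

    Run over chromosome-length sequences, a scan of this kind yields the candidate sites against which mapped integration junctions can be tested for overrepresentation.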

  20. Integration of numerical modeling and observations for the Gulf of Naples monitoring network

    NASA Astrophysics Data System (ADS)

    Iermano, I.; Uttieri, M.; Zambianchi, E.; Buonocore, B.; Cianelli, D.; Falco, P.; Zambardino, G.

    2012-04-01

    Lethal effects of mineral oils on fragile marine and coastal ecosystems are now well known. Risks and damages caused by a maritime accident can be reduced with the help of better forecasts and efficient monitoring systems. The MED project TOSCA (Tracking Oil Spills and Coastal Awareness Network), which gathers 13 partners from 4 Mediterranean countries, has been designed to help create a better response system to maritime accidents. Through the construction of an observational network based on state-of-the-art technology (HF radars and drifters), TOSCA provides real-time observations and forecasts of Mediterranean coastal marine environmental conditions. The system is installed and assessed in five test sites, covering coastal areas near oil spill outlets (Eastern Mediterranean) and high-traffic areas (Western Mediterranean). The Gulf of Naples, a small semi-enclosed basin opening onto the Tyrrhenian Sea, is one of the five test sites. It is of particular interest both from the environmental point of view, owing to the peculiar ecosystem properties of the area, and because it sustains important touristic and commercial activities. Currently, the Gulf of Naples monitoring network comprises five automatic weather stations distributed along the coasts of the Gulf, one weather radar, two tide gauges, one waverider buoy, and moored physical, chemical and bio-optical instrumentation. In addition, a CODAR-SeaSonde HF coastal radar system composed of three antennas is located in Portici, Massa Lubrense and Castellammare. The system provides hourly data of surface currents over the entire Gulf with a 1-km spatial resolution. A numerical modeling implementation based on the Regional Ocean Modeling System (ROMS) is currently being integrated into the Gulf of Naples monitoring network. ROMS is a 3-D, free-surface, hydrostatic, primitive-equation, finite-difference ocean model. In our configuration, the model has high horizontal resolution (250 m) and 30 sigma levels in the vertical.

  1. Numerical simulation of installation process and uplift resistance for an integrated suction foundation in deep ocean

    NASA Astrophysics Data System (ADS)

    Li, Ying; Yang, Shu-geng; Yu, Shu-ming

    2016-03-01

    A concept design, named the integrated suction foundation, is proposed for a tension leg platform (TLP) in the deep ocean. The most important improvement compared with the traditional design is a pressure-resistant storage module, which utilizes the high hydrostatic pressure in the deep ocean to drive water into the module and generate negative pressure for bucket suction. This work aims to further confirm the feasibility of the concept design with respect to penetration installation and the in-place uplift force. Seepage is generated during suction penetration and can have both positive and negative effects on the penetration process. To study the effect of seepage on the penetration process of the integrated suction foundation, finite element analysis (FEA) is carried out in this work. In particular, an improved methodology to calculate the penetration resistance is proposed for the integrated suction foundation with respect to the reduction factor of penetration resistance. The maximum allowable negative pressure during suction penetration is calculated with the critical hydraulic gradient method through FEA. The simulation results of the penetration process show that the integrated suction foundation can be installed safely. Moreover, the uplift resistance of the integrated suction foundation is calculated and the feasibility of the foundation working on-site is verified. In all, the analysis in this work further confirms the feasibility of the integrated suction foundation for TLPs in deep ocean applications.

  2. cuSwift --- a suite of numerical integration methods for modelling planetary systems implemented in C/CUDA

    NASA Astrophysics Data System (ADS)

    Hellmich, S.; Mottola, S.; Hahn, G.; Kührt, E.; Hlawitschka, M.

    2014-07-01

    Simulations of dynamical processes in planetary systems represent an important tool for studying the orbital evolution of the systems [1--3]. Using modern numerical integration methods, it is possible to model systems containing many thousands of objects over timescales of several hundred million years. However, in general, supercomputers are needed to get reasonable simulation results in acceptable execution times [3]. To exploit the ever-growing computation power of Graphics Processing Units (GPUs) in modern desktop computers, we implemented cuSwift, a library of numerical integration methods for studying long-term dynamical processes in planetary systems. cuSwift can be seen as a re-implementation of the famous SWIFT integrator package written by Hal Levison and Martin Duncan. cuSwift is written in C/CUDA and contains different integration methods for various purposes. So far, we have implemented three algorithms: a 15th-order Radau integrator [4], the Wisdom-Holman Mapping (WHM) integrator [5], and the Regularized Mixed Variable Symplectic (RMVS) Method [6]. These algorithms treat only the planets as mutually gravitationally interacting bodies whereas asteroids and comets (or other minor bodies of interest) are treated as massless test particles which are gravitationally influenced by the massive bodies but do not affect each other or the massive bodies. The main focus of this work is on the symplectic methods (WHM and RMVS) which use a larger time step and thus are capable of integrating many particles over a large time span. As an additional feature, we implemented the non-gravitational Yarkovsky effect as described by M. Brož [7]. With cuSwift, we show that the use of modern GPUs makes it possible to speed up these methods by more than one order of magnitude compared to the single-core CPU implementation, thereby enabling modest workstation computers to perform long-term dynamical simulations. We use these methods to study the influence of the Yarkovsky
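    The symplectic family of integrators mentioned above can be illustrated with a minimal kick-drift-kick leapfrog for a test particle orbiting a single central mass. This is only a generic sketch of the idea, not code from cuSwift or SWIFT, and the units (GM = 1, circular orbit of radius 1) are chosen purely for convenience.

```python
import numpy as np

def leapfrog(r, v, dt, n_steps, mu=1.0):
    """Advance a test particle around a central mass (GM = mu) with the
    kick-drift-kick leapfrog scheme, a simple symplectic integrator."""
    def accel(r):
        return -mu * r / np.linalg.norm(r) ** 3
    for _ in range(n_steps):
        v = v + 0.5 * dt * accel(r)   # half kick
        r = r + dt * v                # drift
        v = v + 0.5 * dt * accel(r)   # half kick
    return r, v

# Circular orbit of radius 1: after one full period the particle should
# return to its starting point, with energy nearly conserved.
r0 = np.array([1.0, 0.0])
v0 = np.array([0.0, 1.0])
period = 2 * np.pi
r1, v1 = leapfrog(r0, v0, period / 2000, 2000)
print(np.linalg.norm(r1 - r0))   # small: the orbit closes after one period
```

Symplectic schemes like this bound the energy error instead of letting it drift secularly, which is why they tolerate the large time steps that make long-term integrations of many test particles feasible.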

  3. Mixing-to-eruption timescales: an integrated model combining numerical simulations and high-temperature experiments with natural melts

    NASA Astrophysics Data System (ADS)

    Montagna, Chiara; Perugini, Diego; De Campos, Christina; Longo, Antonella; Dingwell, Donald Bruce; Papale, Paolo

    2015-04-01

    Arrival of magma from depth into shallow reservoirs and the associated mixing processes have been documented as possible triggers of explosive eruptions. Quantifying the time from the beginning of mixing to eruption is of fundamental importance in volcanology in order to place constraints on the possible onset of a new eruption. Here we integrate numerical simulations and high-temperature experiments performed with natural melts with the aim of identifying mixing-to-eruption timescales. We performed two-dimensional numerical simulations of the arrival of gas-rich magmas into shallow reservoirs, solving the fluid dynamics of the two interacting magmas and evaluating the space-time evolution of the physical properties of the mixture. Convection and mingling develop quickly in the chamber and feeding conduit/dyke. Over time scales of hours, the magmas in the reservoir appear to have mingled throughout, and convective patterns become harder to identify. High-temperature magma mixing experiments have been performed in a centrifuge using basaltic and phonolitic melts from Campi Flegrei (Italy) as initial end-members. Concentration Variance Decay (CVD), an inevitable consequence of magma mixing, is exponential with time. The rate of CVD is a powerful new geochronometer for the time from mixing to eruption/quenching. The mingling-to-eruption times of three explosive volcanic eruptions from Campi Flegrei (Italy) yield durations on the order of tens of minutes. These results are in excellent agreement with the numerical simulations, which suggest a maximum mixing time of a few hours to obtain a hybrid mixture. We show that the integration of numerical simulations and high-temperature experiments can provide unprecedented insight into mixing processes in volcanic systems. The combined application of numerical simulations and the CVD geochronometer to the eruptive products of active volcanoes could be decisive for hazard mitigation planning during volcanic unrest.

  4. ICM: an Integrated Compartment Method for numerically solving partial differential equations

    SciTech Connect

    Yeh, G.T.

    1981-05-01

    An integrated compartment method (ICM) is proposed to construct a set of algebraic equations from a system of partial differential equations. The ICM combines the utility of the integral formulation of the finite element approach, the simplicity of interpolation of the finite difference approximation, and the flexibility of compartment analyses. The integral formulation eases the treatment of boundary conditions, in particular Neumann-type boundary conditions. The simplicity of interpolation provides great economy in computation. The flexibility of discretization with irregular compartments of various shapes and sizes offers advantages in resolving complex boundaries enclosing compound regions of interest. The basic procedure of the ICM is first to discretize the region of interest into compartments, then to apply the three integral theorems of vectors to transform volume integrals into surface integrals, and finally to use interpolation to relate the interfacial values to compartment values to close the system. The Navier-Stokes equations are used as an example of how to derive the corresponding ICM algorithm for a given set of partial differential equations. Because of the structure of the algorithm, the basic computer program remains the same for one-, two-, or three-dimensional problems.

  5. Numerical simulation of small perturbation transonic flows

    NASA Technical Reports Server (NTRS)

    Seebass, A. R.; Yu, N. J.

    1976-01-01

    The results of a systematic study of small perturbation transonic flows are presented. Both the flow over thin airfoils and the flow over wedges were investigated. Various numerical schemes were employed in the study. The prime goal of the research was to determine the efficiency of various numerical procedures by accurately evaluating the wave drag, both by computing the pressure integral around the body and by integrating the momentum loss across the shock. Numerical errors involved in the computations that affect the accuracy of drag evaluations were analyzed. The factors that affect numerical stability and the rate of convergence of the iterative schemes were also systematically studied.
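    The accuracy questions raised in this record, and in the quadrature rules this collection opens with, come down to the order of the integration scheme. A small self-contained comparison of the trapezoidal and Simpson rules on a known integral (unrelated to the transonic-flow code itself) makes the point:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals; error is O(h^2)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def simpson(f, a, b, n):
    """Composite Simpson's rule (n must be even); error is O(h^4)."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return h * s / 3

exact = 2.0  # integral of sin(x) over [0, pi]
print(abs(trapezoid(math.sin, 0, math.pi, 64) - exact))
print(abs(simpson(math.sin, 0, math.pi, 64) - exact))
```

With the same 64 function evaluations, Simpson's rule is several orders of magnitude more accurate, which is exactly the trade-off that matters when a drag value is extracted from a pressure or momentum integral.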

  6. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; Beerli, Peter; Zeng, Xiankui; Lu, Dan; Tao, Yuezan

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood, or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, thermodynamic integration, which has not previously been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through path sampling, which conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated on three analytical functions by comparing it with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. In summary, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
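    The path-sampling idea described above can be sketched on a toy problem where the marginal likelihood is known in closed form: a standard normal prior and a single Gaussian observation y = 0, for which log Z = -0.5*log(4*pi). The sketch below uses a simple Metropolis sampler at each power coefficient beta and a trapezoidal rule over beta; it illustrates the structure of the method, not the study's implementation.

```python
import math
import random

random.seed(0)

def log_like(theta):
    """Log-likelihood of a single Gaussian datum y = 0 with unit variance."""
    return -0.5 * math.log(2 * math.pi) - 0.5 * theta ** 2

def log_prior(theta):
    """Standard normal prior."""
    return -0.5 * math.log(2 * math.pi) - 0.5 * theta ** 2

def sample_power_posterior(beta, n=20000, step=1.0):
    """Metropolis sampling from p_beta(theta) proportional to
    prior(theta) * likelihood(theta)**beta (the power posterior)."""
    theta = 0.0
    lp = log_prior(theta) + beta * log_like(theta)
    samples = []
    for _ in range(n):
        prop = theta + random.gauss(0.0, step)
        lp_prop = log_prior(prop) + beta * log_like(prop)
        if random.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples

# Thermodynamic integration: log Z = integral over beta in [0, 1] of
# E_beta[log L], here evaluated with the trapezoidal rule on a beta grid.
betas = [i / 10 for i in range(11)]
means = [sum(map(log_like, sample_power_posterior(b))) / 20000 for b in betas]
log_z = sum(0.5 * (means[i] + means[i + 1]) * 0.1 for i in range(10))
exact = -0.5 * math.log(4 * math.pi)   # analytic marginal likelihood
print(log_z, exact)
```

The estimate agrees with the analytic value to within Monte Carlo error, showing why intermediate power posteriors avoid the instability of the harmonic-mean estimator.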

  7. Physical and mathematical justification of the numerical Brillouin zone integration of the Boltzmann rate equation by Gaussian smearing

    NASA Astrophysics Data System (ADS)

    Illg, Christian; Haag, Michael; Teeny, Nicolas; Wirth, Jens; Fähnle, Manfred

    2016-03-01

    Scatterings of electrons at quasiparticles or photons are very important for many topics in solid-state physics, e.g., spintronics, magnonics or photonics, and therefore a correct numerical treatment of these scatterings is essential. For a quantum-mechanical description of these scatterings, Fermi's golden rule is used to calculate the transition rate from an initial state to a final state in first-order time-dependent perturbation theory. One can calculate the total transition rate from all initial states to all final states with Boltzmann rate equations involving Brillouin zone integrations. The numerical treatment of these integrations on a finite grid is often done by replacing the Dirac delta distribution with a Gaussian. The Dirac delta distribution appears in Fermi's golden rule, where it describes the energy conservation among the interacting particles. Since the Dirac delta distribution is not a function, it is not clear from a mathematical point of view that this procedure is justified. We show with physical and mathematical arguments that this numerical procedure is in general correct, and we comment on critical points.
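    The Gaussian-smearing procedure discussed above can be illustrated with a one-dimensional toy: approximating a density of states g(E) = integral of delta(E - eps(k)) dk on a finite k-grid by replacing the delta with a normalized Gaussian of width sigma. For the dispersion eps(k) = k^2 on [-1, 1] the exact answer is 1/sqrt(E), so the quality of the smeared grid sum can be checked directly. This is an illustrative sketch, not the paper's solid-state calculation.

```python
import math

def gaussian_delta(x, sigma):
    """Normalized Gaussian used as a finite-width stand-in for delta(x)."""
    return math.exp(-x * x / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Density of states g(E) = integral of delta(E - eps(k)) dk for eps(k) = k^2
# on k in [-1, 1]; the exact result is g(E) = 1/sqrt(E) for 0 < E < 1.
n, sigma, E = 4000, 0.01, 0.25
dk = 2.0 / n
g = sum(gaussian_delta(E - (-1.0 + (i + 0.5) * dk) ** 2, sigma) * dk
        for i in range(n))
print(g, 1 / math.sqrt(E))   # smeared grid sum vs exact value 2.0
```

As long as the grid spacing resolves the smearing width (several k-points per sigma), the smeared sum converges to the exact energy-conserving integral, which is the regime the paper's justification addresses.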

  8. Integration of artificial intelligence and numerical optimization techniques for the design of complex aerospace systems

    SciTech Connect

    Tong, S.S.; Powell, D.; Goel, S. (GE Consulting Services, Albany, NY)

    1992-02-01

    A new software system called Engineous combines artificial intelligence and numerical methods for the design and optimization of complex aerospace systems. Engineous combines the advanced computational techniques of genetic algorithms, expert systems, and object-oriented programming with the conventional methods of numerical optimization and simulated annealing to create a design optimization environment that can be applied to computational models in various disciplines. Engineous has produced designs with higher predicted performance gains than current manual design processes, with on average a 10-to-1 reduction in turnaround time, and has yielded new insights into product design. It has been applied to the aerodynamic preliminary design of an aircraft engine turbine, the concurrent aerodynamic and mechanical preliminary design of an aircraft engine turbine blade and disk, a space superconductor generator, a satellite power converter, and a nuclear-powered satellite reactor and shield. 23 refs.

  9. Investigation of the numerics of point spread function integration in single molecule localization.

    PubMed

    Chao, Jerry; Ram, Sripad; Lee, Taiyoon; Ward, E Sally; Ober, Raimund J

    2015-06-29

    The computation of point spread functions, which are typically used to model the image profile of a single molecule, represents a central task in the analysis of single molecule microscopy data. To determine how the accuracy of the computation affects how well a single molecule can be localized, we investigate how the fineness with which the point spread function is integrated over an image pixel impacts the performance of the maximum likelihood location estimator. We consider both the Airy and the two-dimensional Gaussian point spread functions. Our results show that the point spread function needs to be adequately integrated over a pixel to ensure that the estimator closely recovers the true location of the single molecule with an accuracy that is comparable to the best possible accuracy as determined using the Fisher information formalism. Importantly, if integration with an insufficiently fine step size is carried out, the resulting estimates can be significantly different from the true location, particularly when the image data is acquired at relatively low magnifications. We also present a methodology for determining an adequate step size for integrating the point spread function. PMID:26191698
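    The step-size effect described above can be reproduced with a small sketch for a 2D Gaussian PSF (values and pixel geometry invented for illustration): the exact pixel integral is available through the error function, and a midpoint rule on an n-by-n subgrid converges to it as n grows, while n = 1 (sampling the PSF only at the pixel center) is visibly off when the pixel is comparable in size to the PSF width.

```python
import math

def gauss_pixel_exact(x0, y0, cx, cy, w, sigma):
    """Exact integral of a unit 2D Gaussian PSF centered at (cx, cy) over
    the square pixel [x0, x0+w] x [y0, y0+w], via the error function."""
    def F(a, b, c):
        s = sigma * math.sqrt(2)
        return 0.5 * (math.erf((b - c) / s) - math.erf((a - c) / s))
    return F(x0, x0 + w, cx) * F(y0, y0 + w, cy)

def gauss_pixel_midpoint(x0, y0, cx, cy, w, sigma, n):
    """Midpoint-rule integration of the same PSF on an n x n subgrid."""
    h, total = w / n, 0.0
    for i in range(n):
        for j in range(n):
            x = x0 + (i + 0.5) * h
            y = y0 + (j + 0.5) * h
            total += math.exp(-((x - cx) ** 2 + (y - cy) ** 2)
                              / (2 * sigma ** 2))
    return total * h * h / (2 * math.pi * sigma ** 2)

# A pixel comparable in size to sigma: coarse integration is visibly off.
exact = gauss_pixel_exact(0, 0, 0.3, 0.4, 1.0, 0.5)
for n in (1, 2, 8):
    print(n, abs(gauss_pixel_midpoint(0, 0, 0.3, 0.4, 1.0, 0.5, n) - exact))
```

Feeding the coarse (n = 1) pixel values into a likelihood fit would bias the location estimate; a sufficiently fine subgrid removes that bias, which is the paper's central point.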

  10. The new numerically-analytic method for integrating the multiscale thermo elastoviscoplasticity equations with internal variables

    NASA Astrophysics Data System (ADS)

    Kukudzhanov, V.

    2009-08-01

    Integration of the constitutive equations of ductile fracture models is analyzed in this paper. The splitting method is applied to Gurson's and Kukudzhanov's models, and the validity of this method is analyzed. It is shown that Kukudzhanov's model describes a large variety of materials, since it accounts for residual stress and viscosity.

  11. A stochastic regulator for integrated communication and control systems. I - Formulation of control law. II - Numerical analysis and simulation

    NASA Technical Reports Server (NTRS)

    Liou, Luen-Woei; Ray, Asok

    1991-01-01

    A state feedback control law for integrated communication and control systems (ICCS) is formulated by using the dynamic programming and optimality principle on a finite-time horizon. The control law is derived on the basis of a stochastic model of the plant which is augmented in state space to allow for the effects of randomly varying delays in the feedback loop. A numerical procedure for synthesizing the control parameters is then presented, and the performance of the control law is evaluated by simulating the flight dynamics model of an advanced aircraft. Finally, recommendations for future work are made.

  12. Integrating Laboratory and Numerical Decompression Experiments to Investigate Fluid Dynamics into the Conduit

    NASA Astrophysics Data System (ADS)

    Spina, Laura; Colucci, Simone; De'Michieli Vitturi, Mattia; Scheu, Bettina; Dingwell, Donald Bruce

    2015-04-01

    The study of the fluid dynamics of magmatic melts in the conduit, where direct observations are unattainable, has proven to be strongly enhanced by multiparametric approaches. Among them, the coupling of numerical modeling with laboratory experiments represents a fundamental tool of investigation; indeed, the experimental approach provides invaluable data to validate complex multiphase codes. We performed decompression experiments in a shock tube system, using pure silicone oil as a proxy for basaltic melt. Viscosities ranging from 1 to 1000 Pa s were investigated. The samples were saturated with argon for 72 h at 10 MPa before being slowly decompressed to atmospheric pressure. The evolution of the analogue magmatic system was monitored through a high-speed camera and pressure sensors located in the analogue conduit. The experimental decompressions were then reproduced numerically using a multiphase solver based on the OpenFOAM framework. The original compressible multiphase OpenFOAM solver twoPhaseEulerFoam was extended to take into account the multicomponent nature of the fluid mixtures (liquid and gas) and the phase transition. According to the experimental conditions, the simulations were run with values of fluid viscosity ranging from 1 to 1000 Pa s. The sensitivity of the model was tested for different values of the parameters t and D, required by the Gidaspow drag model and representing, respectively, the relaxation time for gas exsolution and the average bubble diameter. Plausible ranges of values for both parameters are provided by experimental observations, i.e., bubble nucleation time and bubble size distribution at a given pressure. The comparison of video images with the outcomes of the numerical models was performed by tracking the evolution of the gas volume fraction through time. We were therefore able to calibrate the model parameters against laboratory results and to track the fluid dynamics of the experimental decompression.

  13. Numerical integration of nearly-Hamiltonian systems. [Van der Pol oscillator and perturbed Keplerian motion

    NASA Technical Reports Server (NTRS)

    Bond, V. R.

    1978-01-01

    The reported investigation is concerned with the solution of systems of differential equations which are derived from a Hamiltonian function in the extended phase space. The problem selected involves a one-dimensional perturbed harmonic oscillator. The van der Pol equation considered has an exact asymptotic value for its amplitude. Comparisons are made between a numerical solution and a known analytical solution. In addition to the van der Pol problem, known solutions regarding the restricted problem of three bodies are used as examples for perturbed Keplerian motion. The extended phase space Hamiltonian discussed by Stiefel and Scheifele (1971) is considered. A description is presented of two canonical formulations of the perturbed harmonic oscillator.
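    The van der Pol test case mentioned above has a convenient check: for weak nonlinearity its limit cycle approaches amplitude 2, so a generic integrator can be validated against that asymptotic value. The sketch below uses a classical fourth-order Runge-Kutta scheme rather than the extended-phase-space formulation of the record; parameters are chosen for illustration.

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

mu = 0.1  # weak nonlinearity: limit-cycle amplitude tends to 2

def vdp(t, y):
    """Van der Pol oscillator x'' - mu*(1 - x^2)*x' + x = 0 as a system."""
    x, v = y
    return [v, mu * (1 - x * x) * v - x]

y, t, h = [0.5, 0.0], 0.0, 0.01
peak = 0.0
for _ in range(60000):          # integrate long enough to reach the cycle
    y = rk4_step(vdp, t, y, h)
    t += h
    if t > 300:                 # measure amplitude after transients decay
        peak = max(peak, abs(y[0]))
print(peak)   # close to the asymptotic amplitude of 2
```

The known asymptotic amplitude plays the same role here as the analytical comparisons in the record: a numerical solution can be judged against an exact property of the equation.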

  14. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems

    PubMed Central

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-01-01

    Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted “useful” data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency. PMID:26569247

  15. Gocad2OGS: Workflow to Integrate Geo-structural Information into Numerical Simulation Models

    NASA Astrophysics Data System (ADS)

    Fischer, Thomas; Walther, Marc; Naumov, Dmitri; Sattler, Sabine; Kolditz, Olaf

    2015-04-01

    The investigation of fluid circulation in the Thuringian syncline is one of the INFLUINS project's targets. In the first step, a 3D geo-structural model including 12 stratigraphic layers and 54 fault zones is created by geologists using the Gocad software. Within the INFLUINS project, a ground-water flow simulation is used to check existing hypotheses and to gain new insight into the underground fluid flow behaviour. We used the scientific, platform-independent, open-source software OpenGeoSys, which implements the finite element method to solve the governing equations describing fluid flow in porous media. The geo-structural Gocad model is not suitable for FEM numerical analysis. It is therefore converted into an unstructured grid satisfying all mesh quality criteria required for the ground-water flow simulation. The resulting grid is stored in an open data format given by the Visualization Toolkit (vtk). In this work we present a workflow to convert geological structural models, created using the Gocad software, into simulation models that are easy to use with numerical simulation software. We tested our workflow with the 3D geo-structural model of the Thuringian syncline and were able to successfully set up and evaluate a hydrogeological simulation model.

  16. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    PubMed

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-11-11

    Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.
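    For orientation, a minimal dense Kalman filter for a toy constant-velocity model is sketched below; this is not the paper's SINS/GPS filter. The point of the optimization described above is that matrices such as the state transition F here are sparse and structured, so an offline derivation can skip the many multiply-by-zero operations a naive dense implementation performs.

```python
import numpy as np

# Minimal linear Kalman filter for a constant-velocity model. The state
# transition F is sparse (upper bidiagonal); an offline derivation of the
# kind described above would precompute which products are trivially zero.
F = np.array([[1.0, 1.0], [0.0, 1.0]])       # state transition (dt = 1)
H = np.array([[1.0, 0.0]])                   # observe position only
Q = 0.01 * np.eye(2)                         # process-noise covariance
R = np.array([[0.25]])                       # measurement-noise covariance

x = np.zeros(2)                              # state: [position, velocity]
P = np.eye(2)                                # state covariance
rng = np.random.default_rng(0)
truth = 0.0
for k in range(200):
    truth += 1.0                             # true velocity = 1
    z = truth + rng.normal(0, 0.5)           # noisy position measurement
    x = F @ x                                # predict state
    P = F @ P @ F.T + Q                      # predict covariance
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)      # update state
    P = (np.eye(2) - K @ H) @ P              # update covariance
print(x)  # estimated [position, velocity], near [200, 1]
```

Counting the zero entries in F and H makes the savings concrete: most terms of the dense products F P F^T and H P H^T never contribute, which is what the block-matrix offline derivation exploits at much larger state dimensions.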

  17. On the formulation, parameter identification and numerical integration of the EMMI model: plasticity and isotropic damage.

    SciTech Connect

    Bammann, Douglas J.; Johnson, G. C. (University of California, Berkeley, CA); Marin, Esteban B.; Regueiro, Richard A.

    2006-01-01

    In this report we present the formulation of the physically-based Evolving Microstructural Model of Inelasticity (EMMI). The specific version of the model treated here describes the plasticity and isotropic damage of metals, and is currently being applied to model the ductile failure process in structural components of the W80 program. The formulation of the EMMI constitutive equations is framed in the context of the large deformation kinematics of solids and the thermodynamics of internal state variables. The formulation focuses first on developing the plasticity equations in both the relaxed (unloaded) and current configurations. The equations in the current configuration, expressed in non-dimensional form, are used to devise the identification procedure for the plasticity parameters. The model is then extended to include a porosity-based isotropic damage state variable to describe the progressive deterioration of the strength and mechanical properties of metals induced by deformation. The numerical treatment of these coupled plasticity-damage constitutive equations is explained in detail. A number of examples are solved to validate the numerical implementation of the model.

  18. Accurate Prediction of Hyperfine Coupling Constants in Muoniated and Hydrogenated Ethyl Radicals: Ab Initio Path Integral Simulation Study with Density Functional Theory Method.

    PubMed

    Yamada, Kenta; Kawashima, Yukio; Tachikawa, Masanori

    2014-05-13

    We performed ab initio path integral molecular dynamics (PIMD) simulations with a density functional theory (DFT) method to accurately predict hyperfine coupling constants (HFCCs) in the ethyl radical (CβH3-CαH2) and its Mu-substituted (muoniated) compound (CβH2Mu-CαH2). The substitution of a Mu atom, an ultralight isotope of the H atom with a larger nuclear quantum effect, is expected to strongly affect the nature of the ethyl radical. The static conventional DFT calculations of CβH3-CαH2 find that the elongation of one Cβ-H bond causes a change in the shape of the potential energy curve along the rotational angle via the imbalance of attractive and repulsive interactions between the methyl and methylene groups. Investigation of the methyl-group behavior including the nuclear quantum and thermal effects shows that an unbalanced CβH2Mu group with the elongated Cβ-Mu bond rotates around the Cβ-Cα bond in the muoniated ethyl radical, quite differently from the CβH3 group with three equivalent Cβ-H bonds in the ethyl radical. These rotations couple with other molecular motions such as the methylene-group rocking motion (inversion), leading to difficulties in reproducing the corresponding barrier heights. Our PIMD simulations successfully predict the barrier heights to be close to the experimental values and provide a significant improvement in the muon and proton HFCCs given by the static conventional DFT method. Further investigation reveals that the Cβ-Mu/H stretching motion, methyl-group rotation, methylene-group rocking motion, and HFCC values deeply intertwine with each other. Because these motions differ between the radicals, a proper description of the structural fluctuations reflecting the nuclear quantum and thermal effects is vital to theoretically evaluate HFCC values comparable to the experimental ones. Accordingly, a fundamental difference in HFCC between the radicals arises from their intrinsic molecular motions at a finite temperature, in

  19. Model coupling methodology for thermo-hydro-mechanical-chemical numerical simulations in integrated assessment of long-term site behaviour

    NASA Astrophysics Data System (ADS)

    Kempka, Thomas; De Lucia, Marco; Kühn, Michael

    2015-04-01

    The integrated assessment of long-term site behaviour taking into account a high spatial resolution at reservoir scale requires a sophisticated methodology to represent the coupled thermal, hydraulic, mechanical and chemical processes of relevance. Our coupling methodology considers the time-dependent occurrence and significance of multi-phase flow processes, mechanical effects and geochemical reactions (Kempka et al., 2014). To this end, a simplified hydro-chemical coupling procedure was developed (Klein et al., 2013) and validated against fully coupled hydro-chemical simulations (De Lucia et al., 2015). The numerical simulation results elaborated for the pilot site Ketzin demonstrate that mechanical reservoir, caprock and fault integrity are maintained during the time of operation, and that after 10,000 years CO2 dissolution is the dominating trapping mechanism and mineralization occurs on the order of 10% to 25%, with negligible changes to porosity and permeability. De Lucia, M., Kempka, T., Kühn, M. A coupling alternative to reactive transport simulations for long-term prediction of chemical reactions in heterogeneous CO2 storage systems (2014) Geosci Model Dev Discuss 7:6217-6261. doi:10.5194/gmdd-7-6217-2014. Kempka, T., De Lucia, M., Kühn, M. Geomechanical integrity verification and mineral trapping quantification for the Ketzin CO2 storage pilot site by coupled numerical simulations (2014) Energy Procedia 63:3330-3338. doi:10.1016/j.egypro.2014.11.361. Klein, E., De Lucia, M., Kempka, T., Kühn, M. Evaluation of long-term mineral trapping at the Ketzin pilot site for CO2 storage: an integrative approach using geochemical modelling and reservoir simulation (2013) Int J Greenh Gas Con 19:720-730. doi:10.1016/j.ijggc.2013.05.014.

  20. 3D models of slow motions in the Earth's crust and upper mantle in the source zones of seismically active regions and their comparison with highly accurate observational data: II. Results of numerical calculations

    NASA Astrophysics Data System (ADS)

    Molodenskii, S. M.; Molodenskii, M. S.; Begitova, T. A.

    2016-09-01

In the first part of the paper, a new method was developed for solving the inverse problem of coseismic and postseismic deformations in the real (imperfectly elastic, radially and horizontally heterogeneous, self-gravitating) Earth with hydrostatic initial stresses from highly accurate modern satellite data. The method is based on the decomposition of the sought parameters in an orthogonalized basis. The method was suggested for estimating the ambiguity of the solution of the inverse problem for coseismic and postseismic deformations. For obtaining this estimate, the orthogonal complement is constructed to the n-dimensional space spanned by the system of functional derivatives of the residuals in the system of n observed and model data on the coseismic and postseismic displacements at a variety of sites on the ground surface with small variations in the models. Below, we present the results of the numerical modeling of the elastic displacements of the ground surface, which were based on calculating Green's functions of the real Earth for the plane dislocation surface and different orientations of the displacement vector as described in part I of the paper. The calculations were conducted for the model of a horizontally homogeneous but radially heterogeneous self-gravitating Earth with hydrostatic initial stresses and the mantle rheology described by the Lomnitz logarithmic creep function according to (M. Molodenskii, 2014). We compare our results with the previous numerical calculations (Okada, 1985; 1992) for the simplest model of a perfectly elastic nongravitating homogeneous Earth. It is shown that for source depths of a few hundred kilometers or more and magnitudes of about 8.0 and higher, the discrepancies significantly exceed the errors of the observations and should therefore be taken into account. We present the examples of the numerical calculations of the creep function of the crust and upper mantle for the coseismic deformations.

  1. golem95: A numerical program to calculate one-loop tensor integrals with up to six external legs

    NASA Astrophysics Data System (ADS)

    Binoth, T.; Guillet, J.-Ph.; Heinrich, G.; Pilon, E.; Reiter, T.

    2009-11-01

    We present a program for the numerical evaluation of form factors entering the calculation of one-loop amplitudes with up to six external legs. The program is written in Fortran95 and performs the reduction to a certain set of basis integrals numerically, using a formalism where inverse Gram determinants can be avoided. It can be used to calculate one-loop amplitudes with massless internal particles in a fast and numerically stable way. Catalogue identifier: AEEO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEO_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 50 105 No. of bytes in distributed program, including test data, etc.: 241 657 Distribution format: tar.gz Programming language: Fortran95 Computer: Any computer with a Fortran95 compiler Operating system: Linux, Unix RAM: RAM used per form factor is insignificant, even for a rank six six-point form factor Classification: 4.4, 11.1 External routines: Perl programming language (http://www.perl.com/) Nature of problem: Evaluation of one-loop multi-leg tensor integrals occurring in the calculation of next-to-leading order corrections to scattering amplitudes in elementary particle physics. Solution method: Tensor integrals are represented in terms of form factors and a set of basic building blocks ("basis integrals"). The reduction to the basis integrals is

  2. Efficient O(N) integration for all-electron electronic structure calculation using numeric basis functions

    SciTech Connect

Havu, V.; Blum, V.; Havu, P.; Scheffler, M.

    2009-12-01

    We consider the problem of developing O(N) scaling grid-based operations needed in many central operations when performing electronic structure calculations with numeric atom-centered orbitals as basis functions. We outline the overall formulation of localized algorithms, and specifically the creation of localized grid batches. The choice of the grid partitioning scheme plays an important role in the performance and memory consumption of the grid-based operations. Three different top-down partitioning methods are investigated, and compared with formally more rigorous yet much more expensive bottom-up algorithms. We show that a conceptually simple top-down grid partitioning scheme achieves essentially the same efficiency as the more rigorous bottom-up approaches.
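As a hedged illustration of the top-down idea (not the authors' actual scheme), a recursive bisection of grid points at the midpoint of the longest bounding-box axis produces spatially compact batches of bounded size, which is the property the localized grid operations rely on:

```python
def partition(points, max_batch):
    """Top-down grid partitioning: recursively bisect the point set at
    the midpoint of its longest bounding-box axis until every batch
    holds at most max_batch points, keeping batches spatially compact."""
    if len(points) <= max_batch:
        return [points]
    dims = len(points[0])
    lo = [min(p[d] for p in points) for d in range(dims)]
    hi = [max(p[d] for p in points) for d in range(dims)]
    axis = max(range(dims), key=lambda d: hi[d] - lo[d])
    mid = 0.5 * (lo[axis] + hi[axis])
    left = [p for p in points if p[axis] <= mid]
    right = [p for p in points if p[axis] > mid]
    if not left or not right:            # degenerate split: fall back
        half = len(points) // 2
        ordered = sorted(points, key=lambda p: p[axis])
        left, right = ordered[:half], ordered[half:]
    return partition(left, max_batch) + partition(right, max_batch)

grid = [(i, j) for i in range(8) for j in range(8)]
batches = partition(grid, 10)
print(len(batches), max(len(b) for b in batches))
```

The geometric split keeps each batch localized, so per-batch work stays bounded and the total cost scales linearly with the number of points.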

  3. Long-Time Numerical Integration of the Three-Dimensional Wave Equation in the Vicinity of a Moving Source

    NASA Technical Reports Server (NTRS)

    Ryabenkii, V. S.; Turchaninov, V. I.; Tsynkov, S. V.

    1999-01-01

We propose a family of algorithms for solving numerically a Cauchy problem for the three-dimensional wave equation. The sources that drive the equation (i.e., the right-hand side) are compactly supported in space for any given time; they, however, may actually move in space with a subsonic speed. The solution is calculated inside a finite domain (e.g., sphere) that also moves with a subsonic speed and always contains the support of the right-hand side. The algorithms employ a standard consistent and stable explicit finite-difference scheme for the wave equation. They allow one to calculate the solution for arbitrarily long time intervals without error accumulation and with a fixed, non-growing amount of CPU time and memory required for advancing one time step. The algorithms are inherently three-dimensional; they rely on the presence of lacunae in the solutions of the wave equation in odd-dimensional spaces. The methodology presented in the paper is, in fact, a building block for constructing the nonlocal highly accurate unsteady artificial boundary conditions to be used for the numerical simulation of waves propagating with finite speed over unbounded domains.

  4. Numerical comparison of spectral properties of volume-integral-equation formulations

    NASA Astrophysics Data System (ADS)

    Markkanen, Johannes; Ylä-Oijala, Pasi

    2016-07-01

We study and compare spectral properties of various volume-integral-equation formulations. The equations are written for the electric flux, current, field, and potentials, and discretized with basis functions spanning the appropriate function spaces. Each formulation leads to an eigenvalue distribution of a different kind, owing to the discretization procedure, namely, the choice of basis and testing functions. The discrete spectrum of the potential formulation reproduces the theoretically predicted spectrum almost exactly, while the spectra of the other formulations deviate from the ideal one. It is shown that the potential formulation has the spectral properties desired from the preconditioning perspective.

  5. Elementary Techniques of Numerical Integration and Their Computer Implementation. Applications of Elementary Calculus to Computer Science. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Unit 379.

    ERIC Educational Resources Information Center

    Motter, Wendell L.

    It is noted that there are some integrals which cannot be evaluated by determining an antiderivative, and these integrals must be subjected to other techniques. Numerical integration is one such method; it provides a sum that is an approximate value for some integral types. This module's purpose is to introduce methods of numerical integration and…
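As an illustrative sketch (not taken from the module itself), the trapezoidal and Simpson rules can be written in a few lines; the integrand exp(-x^2) is a classic example of a function with no elementary antiderivative:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

f = lambda x: math.exp(-x * x)       # no elementary antiderivative
print(trapezoid(f, 0.0, 1.0, 100))   # ~0.7468
print(simpson(f, 0.0, 1.0, 100))     # ~0.746824
```

Simpson's rule is fourth-order accurate, so for the same number of evaluations it is far closer to the true value (about 0.746824) than the second-order trapezoidal rule.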

  6. Numerical methods for the simulation of complex multi-body flows with applications for the integrated Space Shuttle vehicle

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    1992-01-01

This project forms part of the long-term computational effort to simulate the time-dependent flow over the integrated Space Shuttle vehicle (orbiter, solid rocket boosters (SRB's), external tank (ET), and attach hardware) during its ascent mode for various nominal and abort flight conditions. Due to the limitations of experimental data such as wind tunnel wall effects and the difficulty of safely obtaining valid flight data, numerical simulations are undertaken to supplement the existing data base. These data can then be used to predict the aerodynamic behavior over a wide range of flight conditions. Existing computational results show relatively good overall comparison with experiments, but further refinement is required to reduce numerical errors and to obtain closer agreement over a larger parameter space. One of the important goals of this project is to obtain better comparisons between numerical simulations and experiments. In the simulations performed so far, the geometry has been simplified in various ways to reduce the complexity so that, given the limitations in computer resources, useful results can be obtained in a reasonable time frame. In this project, the finer details of the major components of the Space Shuttle are modeled better by including more complexity in the geometry definition. Smaller components not included in early Space Shuttle simulations will now be modeled and gridded.

  7. Near-field dispersion of produced formation water (PFW) in the Adriatic Sea: an integrated numerical-chemical approach.

    PubMed

    Cianelli, D; Manfra, L; Zambianchi, E; Maggi, C; Cappiello, A; Famiglini, G; Mannozzi, M; Cicero, A M

    2008-05-01

    Produced formation waters (PFWs), a by-product of both oil and gas extraction, are separated from hydrocarbons onboard oil platforms and then discharged into the sea through submarine outfalls. The dispersion of PFWs into the environment may have a potential impact on marine ecosystems. We reproduce the initial PFW-seawater mixing process by means of the UM3 model applied to offshore natural gas platforms currently active in the Northern Adriatic Sea (Mediterranean Sea). Chemical analyses lead to the identification of a chemical tracer (diethylene glycol) which enables us to follow the fate of PFWs into receiving waters. The numerical simulations are realized in different seasonal conditions using both measured oceanographic data and tracer concentrations. The numerical results show the spatial and temporal plume development in different stratification and ambient current conditions. The analytical approach measures concentrations of the diethylene glycol at a maximum sampling distance of 25 m. The results show a good agreement between field observations and model predictions in the near-field area. The integration of numerical results with chemical analyses also provides new insight to plan and optimize PFW monitoring and discharge.

  8. Computational and numerical aspects of using the integral equation method for adhesive layer fracture mechanics analysis

    SciTech Connect

    Giurgiutiu, V.; Ionita, A.; Dillard, D.A.; Graffeo, J.K.

    1996-12-31

Fracture mechanics analysis of adhesively bonded joints has attracted considerable attention in recent years. A possible approach to the analysis of adhesive layer cracks is to study a brittle adhesive between 2 elastic half-planes representing the substrates. A 2-material, 3-region elasticity problem is set up and has to be solved. A modeling technique based on the work of Fleck, Hutchinson, and Suo is used. Two complex potential problems using Muskhelishvili's formulation are set up for the 3-region, 2-material model: (a) a distribution of edge dislocations is employed to simulate the crack and its near field; and (b) a crack-free problem is used to simulate the effect of the external loading applied in the far field. Superposition of the two problems is followed by matching tractions and displacements at the bimaterial boundaries. The Cauchy principal value integral is used to treat the singularities. Imposing the traction-free boundary conditions over the entire crack length yields a linear system of two integral equations. The parameters of the problem are Dundurs' elastic mismatch coefficients, α and β, and the ratio c/H representing the geometric position of the crack in the adhesive layer.
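The Cauchy principal value treatment mentioned above can be illustrated with a simple 1-D sketch (illustrative only, not the paper's actual formulation): subtract the value of the density at the singular point so the remainder is regular, then add back the singular part in closed form:

```python
import math

def pv_integral(f, c, n=2000):
    """Principal value of the integral of f(x)/(x - c) over [-1, 1],
    for -1 < c < 1.  The regular part (f(x) - f(c))/(x - c) is handled
    by the midpoint rule (whose nodes avoid x = c for generic c); the
    singular part f(c)/(x - c) has the closed form
    f(c) * ln((1 - c)/(1 + c))."""
    h = 2.0 / n
    smooth = sum((f(-1.0 + (i + 0.5) * h) - f(c)) / (-1.0 + (i + 0.5) * h - c)
                 for i in range(n)) * h
    return smooth + f(c) * math.log((1.0 - c) / (1.0 + c))

# f = 1: the regular part vanishes and the closed form is exact
print(pv_integral(lambda x: 1.0, 0.3))
```

The same subtraction trick is what makes dislocation-density integral equations tractable: the quadrature only ever sees a bounded integrand.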

  9. Integrating experimental and numerical methods for a scenario-based quantitative assessment of subsurface energy storage options

    NASA Astrophysics Data System (ADS)

Kabuth, Alina; Dahmke, Andreas; Hagrey, Said Attia al; Berta, Márton; Dörr, Cordula; Koproch, Nicolas; Köber, Ralf; Köhn, Daniel; Nolde, Michael; Pfeiffer, Wolf Tilmann; Popp, Steffi; Schwanebeck, Malte; Bauer, Sebastian

    2016-04-01

Within the framework of the transition to renewable energy sources ("Energiewende"), the German government defined the target of producing 60 % of the final energy consumption from renewable energy sources by the year 2050. However, renewable energies are subject to natural fluctuations. Energy storage can help to buffer the resulting time shifts between production and demand. Subsurface geological structures provide large potential capacities for energy stored in the form of heat or gas on daily to seasonal time scales. In order to explore this potential sustainably, the possible induced effects of energy storage operations have to be quantified for both specified normal operation and events of failure. The ANGUS+ project therefore integrates experimental laboratory studies with numerical approaches to assess subsurface energy storage scenarios and monitoring methods. Subsurface storage options for gas, i.e. hydrogen, synthetic methane and compressed air in salt caverns or porous structures, as well as subsurface heat storage are investigated with respect to site prerequisites, storage dimensions, induced effects, monitoring methods and integration into spatial planning schemes. The conceptual interdisciplinary approach of the ANGUS+ project towards the integration of subsurface energy storage into a sustainable subsurface planning scheme is presented here, and this approach is then demonstrated using the examples of two selected energy storage options: Firstly, the option of seasonal heat storage in a shallow aquifer is presented. Coupled thermal and hydraulic processes induced by periodic heat injection and extraction were simulated in the open-source numerical modelling package OpenGeoSys. Situations of specified normal operation as well as cases of failure in operational storage with leaking heat transfer fluid are considered. Bench-scale experiments provided parameterisations of temperature-dependent changes in shallow groundwater hydrogeochemistry.

  10. Mosaic-skeleton method as applied to the numerical solution of three-dimensional Dirichlet problems for the Helmholtz equation in integral form

    NASA Astrophysics Data System (ADS)

    Kashirin, A. A.; Smagin, S. I.; Taltykina, M. Yu.

    2016-04-01

    Interior and exterior three-dimensional Dirichlet problems for the Helmholtz equation are solved numerically. They are formulated as equivalent boundary Fredholm integral equations of the first kind and are approximated by systems of linear algebraic equations, which are then solved numerically by applying an iteration method. The mosaic-skeleton method is used to speed up the solution procedure.
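The mosaic-skeleton speed-up rests on approximating admissible matrix blocks by low-rank "skeletons" built from a few of their rows and columns. A toy version of that idea (an assumption for illustration, not the paper's code) is adaptive cross approximation, which never forms the full matrix:

```python
import math

def aca(entry, m, n, tol=1e-8, max_rank=30):
    """Adaptive cross (skeleton) approximation of an m x n matrix given
    entrywise by entry(i, j): returns factors us, vs with
    A ~= sum_k outer(us[k], vs[k]).  Only a few rows and columns are
    ever evaluated, the key saving behind mosaic-skeleton methods."""
    us, vs = [], []
    def residual(i, j):  # entry of A minus the current approximation
        r = entry(i, j)
        for u, v in zip(us, vs):
            r -= u[i] * v[j]
        return r
    i_piv = 0
    for _ in range(max_rank):
        row = [residual(i_piv, j) for j in range(n)]
        j_piv = max(range(n), key=lambda j: abs(row[j]))
        if abs(row[j_piv]) < tol:
            break
        col = [residual(i, j_piv) / row[j_piv] for i in range(m)]
        us.append(col)
        vs.append(row)
        i_piv = max((i for i in range(m) if i != i_piv),
                    key=lambda i: abs(col[i]))
    return us, vs

# a smooth kernel is numerically low rank
x = [i / 29 for i in range(30)]
kernel = lambda i, j: math.exp(-(x[i] - x[j]) ** 2)
us, vs = aca(kernel, 30, 30)
err = max(abs(kernel(i, j) - sum(u[i] * v[j] for u, v in zip(us, vs)))
          for i in range(30) for j in range(30))
print(len(us), err)
```

For smooth (far-field) kernels the rank needed for a fixed tolerance is tiny, which is why skeleton compression turns dense integral-equation matrices into fast operators.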

  11. Numerical Modeling for Integrated Design of a DNAPL Partitioning Tracer Test

    NASA Astrophysics Data System (ADS)

    McCray, J. E.; Divine, C. E.; Dugan, P. J.; Wolf, L.; Boving, T.; Louth, M.; Brusseau, M. L.; Hayes, D.

    2002-12-01

Partitioning tracer tests (PTTs) are commonly used to estimate the location and volume of nonaqueous-phase liquids (NAPLs) at contaminated groundwater sites. PTTs are completed before and after remediation efforts as one means to assess remediation effectiveness. PTT design is complex. Numerical models are invaluable tools for designing a PTT, particularly for designing flow rates and selecting tracers to ensure proper tracer breakthrough times, spatial design of injection-extraction wells and rates to maximize tracer capture, well-specific sampling density and frequency, and appropriate tracer-chemical masses. Generally, the design requires consideration of the following factors: type of contaminant; distribution of contaminant at the site, including location of hot spots; site hydraulic characteristics; measurement of the partitioning coefficients for the various tracers; the time allotted to conduct the PTT; evaluation of the magnitude and arrival time of the tracer breakthrough curves; duration of the tracer input pulse; maximum tracer concentrations; analytical detection limits for the tracers; estimation of the capture zone of the well field to ensure tracer mass balance and to limit residual tracer concentrations left in the subsurface; effect of chemical remediation agents on the PTT results; and disposal of the extracted tracer solution. These design principles are applied to a chemical-enhanced remediation effort for a chlorinated-solvent dense NAPL (DNAPL) site at Little Creek Naval Amphibious Base in Virginia Beach, Virginia. For this project, the hydrology and pre-PTT contaminant distribution were characterized using traditional methods (slug tests, groundwater and soil concentrations from monitoring wells, and geoprobe analysis), as well as membrane interface probe analysis. Additional wells were installed after these studies. Partitioning tracers were selected based on the primary DNAPL contaminants at the site, expected NAPL saturations

  12. Study of vortex ring dynamics in the nonlinear Schrodinger equation utilizing GPU-accelerated high-order compact numerical integrators

    NASA Astrophysics Data System (ADS)

    Caplan, Ronald Meyer

We numerically study the dynamics and interactions of vortex rings in the nonlinear Schrodinger equation (NLSE). Single ring dynamics for both bright and dark vortex rings are explored, including their traverse velocity, stability, and perturbations resulting in quadrupole oscillations. Multi-ring dynamics of dark vortex rings are investigated, including scattering and merging of two colliding rings, leapfrogging interactions of co-traveling rings, as well as co-moving steady-state multi-ring ensembles. Simulations of choreographed multi-ring setups are also performed, leading to intriguing interaction dynamics. Due to the inherent lack of a closed-form solution for vortex rings and the dimensionality in which they live, efficient numerical methods to integrate the NLSE have to be developed in order to perform the extensive number of required simulations. To facilitate this, compact high-order numerical schemes for the spatial derivatives are developed, which include a new semi-compact modulus-squared Dirichlet boundary condition. The schemes are combined with a fourth-order Runge-Kutta time-stepping scheme in order to keep the overall method fully explicit. To ensure efficient use of the schemes, a stability analysis is performed to find bounds on the largest usable time step-size as a function of the spatial step-size. The numerical methods are implemented into codes which are run on NVIDIA graphics processing unit (GPU) parallel architectures. The codes running on the GPU are shown to be many times faster than their serial counterparts. The codes are developed with future usability in mind, and therefore are written to interface with MATLAB utilizing custom GPU-enabled C codes with a MEX-compiler interface. Reproducibility of results is achieved by combining the codes into a code package called NLSEmagic which is freely distributed on a dedicated website.
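As a minimal sketch of the time-stepping ingredient (not NLSEmagic itself), the classical fourth-order Runge-Kutta step below keeps a method fully explicit; in the thesis the largest usable dt is additionally bounded in terms of the spatial step by the stability analysis:

```python
import math

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# integrate y' = -y from t = 0 to t = 1; the exact answer is e^-1
y, t, dt = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: -y, t, y, dt)
    t += dt
print(abs(y - math.exp(-1.0)))  # fourth-order accurate: tiny error
```

In the NLSE setting, f would be the discretized right-hand side and y the complex field on the grid; the same four-stage update applies unchanged.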

  13. Accurate stress resultants equations for laminated composite deep thick shells

    SciTech Connect

    Qatu, M.S.

    1995-11-01

This paper derives accurate equations for the normal and shear force as well as bending and twisting moment resultants for laminated composite deep, thick shells. The stress resultant equations for laminated composite thick shells are shown to be different from those of plates. This is due to the fact that the stresses over the thickness of the shell have to be integrated on a trapezoidal-like shell element to obtain the stress resultants. Numerical results are obtained and show that accurate stress resultants are needed for laminated composite deep thick shells, especially if the curvature is not spherical.
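The plate/shell distinction can be sketched numerically (an illustration with assumed values, not the paper's equations): integrating a through-thickness stress with the trapezoid-like curvature factor (1 + z/R) gives a nonzero force resultant for a deep shell in a case where the flat-plate formula gives exactly zero:

```python
def force_resultant(sigma, h, R, n=200):
    """Normal force resultant of an in-plane stress sigma(z) over a
    shell thickness h: the shell element is trapezoid-like, so the
    integrand carries the curvature factor (1 + z/R); R -> infinity
    recovers the flat-plate formula.  Midpoint rule in z."""
    dz = h / n
    total = 0.0
    for i in range(n):
        z = -h / 2.0 + (i + 0.5) * dz
        total += sigma(z) * (1.0 + z / R) * dz
    return total

sigma = lambda z: 100.0 * z                  # pure bending stress
plate = force_resultant(sigma, 0.1, 1e12)    # effectively flat: ~0
shell = force_resultant(sigma, 0.1, 0.2)     # deep shell, h/R = 0.5
print(plate, shell)
```

For the deep shell the resultant equals (sigma'/R) h^3/12, so the thicker and more curved the shell, the worse the plate approximation becomes.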

  14. Integrated analysis of millisecond laser irradiation of steel by comprehensive optical diagnostics and numerical simulation

    NASA Astrophysics Data System (ADS)

    Doubenskaia, M.; Smurov, I.; Nagulin, K. Yu.

    2016-04-01

Complementary optical diagnostic tools are applied to provide comprehensive analysis of thermal phenomena in millisecond Nd:YAG laser irradiation of steel substrates. The following optical devices are employed: (a) infrared camera FLIR Phoenix RDASTM equipped with an InSb sensor with a 3-5 µm band pass and a 320 × 256 pixel array, (b) ultra-rapid camera Phantom V7.1 with an SR-CMOS monochrome sensor in the visible spectral range, up to 10⁵ frames per second for a 64 × 88 pixel array, (c) an original multi-wavelength pyrometer in the near-infrared range (1.370-1.531 µm). The following laser radiation parameters are applied: variation of energy per pulse in the range 15-30 J at a constant pulse duration of 10 ms, with and without application of protective gas (Ar). The evolution of true temperature is restored based on the method of multi-colour pyrometry; in this way, melting/solidification dynamics is analysed. Emissivity variation with temperature is studied, and a hysteresis-type functional dependence is found. Variation of the intensity of surface evaporation, visualised by the camera Phantom V7.1, is registered and linked with the surface temperature evolution, different surface roughness and the influence of the protective gas atmosphere. The vapour plume temperature is determined from the relative intensities of spectral lines. The numerical simulation is carried out applying a thermal model with phase transitions taken into account.

  15. Radial 32P ion implantation using a coaxial plasma reactor: Activity imaging and numerical integration

    NASA Astrophysics Data System (ADS)

    Fortin, M. A.; Dufresne, V.; Paynter, R.; Sarkissian, A.; Stansfield, B.

    2004-12-01

    Beta-emitting biomedical implants are currently employed in angioplasty, in the treatment of certain types of cancers, and in the embolization of aneurysms with platinum coils. Radioisotopes such as 32P can be implanted using plasma-based ion implantation (PBII). In this article, we describe a reactor that was developed to implant radioisotopes into cylindrical metallic objects. The plasma first ionizes radioisotopes sputtered from a target, and then acts as the source of particles to be implanted into the biased biomedical device. The plasma therefore plays a major role in the ionization/implantation process. Following a sequence of implantation tests, the liners protecting the interior walls of the reactor were changed and the radioactivity on them measured. This study demonstrates that the radioactive deposits on these protective liners, adequately imaged by radiography, can indicate the distribution of the radioisotopes that are not implanted. The resulting maps give unique information about the activity distribution, which is influenced by the sputtering of the 32P-containing fragments, their ionization in the plasma, and also by the subsequent ion transport mechanisms. Such information can be interpreted and used to significantly improve the efficiency of the implantation procedure. Using a surface barrier detector, a comparative study established a relationship between the gray scale of radiographs of the liners, and activity measurements. An integration process allows the quantification of the activities on the walls and components of the reactor. Finally, the resulting integral of the 32P activity is correlated to the sum of the radioactivity amounts that were sputtered from radioactive targets inside the implanter before the dismantling procedure. This balance addresses the issue of security regarding PBII technology and confirms the confinement of the radioactivity inside the chamber.

  17. Evaluation of Injection Efficiency of Carbon Dioxide Using an Integrated Injection Well and Geologic Formation Numerical Simulation Scheme

    NASA Astrophysics Data System (ADS)

    Kihm, J.; Park, S.; Kim, J.; SNU CO2 GEO-SEQ TEAM

    2011-12-01

A series of integrated injection well and geologic formation numerical simulations was performed to evaluate the injection efficiency of carbon dioxide using a multiphase thermo-hydrological numerical model. The numerical simulation results show that groundwater flow, carbon dioxide flow, and heat transport in both injection well and sandstone formation can be simultaneously analyzed, and thus the injection efficiency (i.e., injection rate and injectivity) of carbon dioxide can be quantitatively evaluated using the integrated injection well and geologic formation numerical simulation scheme. The injection rate and injectivity of carbon dioxide increase rapidly during the early period of time (about 10 days) and then increase slightly up to about 2.07 kg/s (equivalent to 0.065 Mton/year) and about 2.84 × 10⁻⁷ kg/s/Pa, respectively, until 10 years for the base case. The sensitivity test results show that the injection pressure and temperature of carbon dioxide at the wellhead have significant impacts on its injection rate and injectivity. The vertical profile of the fluid pressure in the injection well becomes almost a hydrostatical equilibrium state within 1 month for all the cases. The vertical profile of the fluid temperature in the injection well becomes a monotonically increasing profile with the depth due to isenthalpic or adiabatic compression within 6 months for all the cases. The injection rate of carbon dioxide increases linearly with the fluid pressure difference between the well bottom and the sandstone formation far from the injection well. In contrast, the injectivity of carbon dioxide varies unsystematically with the fluid pressure difference. On the other hand, the reciprocal of the kinematic viscosity of carbon dioxide at the well bottom has an excellent linear relationship with the injectivity of carbon dioxide. It indicates that the above-mentioned variation of the injectivity of carbon dioxide can be corrected using this linear relationship.
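As a quick consistency check using only the quoted base-case figures, injectivity is injection rate per unit pressure difference, so the implied well-bottom overpressure follows directly:

```python
# Injectivity = mass rate / pressure difference, so the implied
# overpressure at the well bottom is rate / injectivity.
rate = 2.07        # kg/s, quoted base-case injection rate after 10 years
inj = 2.84e-7      # kg/s/Pa, quoted base-case injectivity
dp = rate / inj    # implied pressure difference, Pa
print(dp / 1e6)    # ~7.3 MPa
```

An overpressure of a few MPa is the quantity the sensitivity tests vary through the wellhead injection pressure and temperature.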

  18. A robust and accurate formulation of molecular and colloidal electrostatics

    NASA Astrophysics Data System (ADS)

    Sun, Qiang; Klaseboer, Evert; Chan, Derek Y. C.

    2016-08-01

    This paper presents a re-formulation of the boundary integral method for the Debye-Hückel model of molecular and colloidal electrostatics that removes the mathematical singularities that have to date been accepted as an intrinsic part of the conventional boundary integral equation method. The essence of the present boundary regularized integral equation formulation consists of subtracting a known solution from the conventional boundary integral method in such a way as to cancel out the singularities associated with the Green's function. This approach better reflects the non-singular physical behavior of the systems on boundaries with the benefits of the following: (i) the surface integrals can be evaluated accurately using quadrature without any need to devise special numerical integration procedures, (ii) being able to use quadratic or spline function surface elements to represent the surface more accurately and the variation of the functions within each element is represented to a consistent level of precision by appropriate interpolation functions, (iii) being able to calculate electric fields, even at boundaries, accurately and directly from the potential without having to solve hypersingular integral equations and this imparts high precision in calculating the Maxwell stress tensor and consequently, intermolecular or colloidal forces, (iv) a reliable way to handle geometric configurations in which different parts of the boundary can be very close together without being affected by numerical instabilities, therefore potentials, fields, and forces between surfaces can be found accurately at surface separations down to near contact, and (v) having the simplicity of a formulation that does not require complex algorithms to handle singularities will result in significant savings in coding effort and in the reduction of opportunities for coding errors. These advantages are illustrated using examples drawn from molecular and colloidal electrostatics.
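A one-dimensional analogue of the singularity-subtraction idea (illustrative only; the paper works with surface integrals of the Green's function): subtract a known solution that carries the singularity, integrate the regular remainder with plain quadrature, and add the singular part back in closed form:

```python
def sqrt_singular_integral(f, n=4000):
    """Integral of f(x)/sqrt(x) over [0, 1].  Subtracting the known
    singular solution f(0)/sqrt(x), whose integral is 2*f(0), leaves
    a bounded integrand that plain midpoint quadrature handles."""
    h = 1.0 / n
    smooth = sum((f((i + 0.5) * h) - f(0.0)) / ((i + 0.5) * h) ** 0.5
                 for i in range(n)) * h
    return smooth + 2.0 * f(0.0)

# f = 1: the regular part vanishes and the closed form is exact
print(sqrt_singular_integral(lambda x: 1.0))  # 2.0
```

This mirrors benefit (i) above: ordinary quadrature suffices once the singular behavior is cancelled analytically, with no special singular integration rules.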

  20. A numerical model of continental topographic evolution integrating thin sheet tectonics, river transport, and climate

    NASA Astrophysics Data System (ADS)

    Garcia-Castellanos, D.; Jimenez-Munt, I.

    2013-12-01

    How much do erosion and sedimentation at the crust's surface influence the patterns and distribution of tectonic deformation? This question has mostly been addressed from a numerical modelling perspective, at scales ranging from local to orogenic. Here we present a model that aims at constraining this phenomenon at the continental scale. With this purpose, we couple a thin-sheet viscous model of continental deformation with a stream-power surface transport model. The model also incorporates flexural isostatic compensation, which permits the formation of large sedimentary foreland basins, and a precipitation model that reproduces basic climatic effects such as continentality, orographic rainfall, and rain shadow. We quantify the feedbacks between these four processes in a synthetic scenario inspired by the India-Asia collision. The model reproduces first-order characteristics of the growth of the Tibetan Plateau as a result of the Indian indentation. A large intramountain basin (comparable to the Tarim Basin) develops when a hard inherited area is predefined in the undeformed foreland (Asia). The amount of sediment trapped in it is very sensitive to climatic parameters, particularly to evaporation, because evaporation crucially determines whether the drainage is endorheic or exorheic. We identify some degree of feedback between the deep and surface processes, leading locally to a <20% increase in deformation rates when orographic precipitation is accounted for (relative to a reference model with evenly distributed precipitation). This enhanced thickening of the crust takes place particularly in areas of concentrated precipitation and steep slope, i.e., at the upwind flank of the growing plateau. The effect is particularly enhanced at the corners of the indenter (syntaxes). We hypothesize that this may provide clues for better understanding the mechanisms underlying the intriguing tectonic aneurysms documented in the syntaxes of the Himalayas.
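
The surface transport component of such models is commonly written as the stream-power law, dz/dt = U - K A^m S^n (uplift minus erosion). The following minimal 1D river-profile sketch uses illustrative values for U, K, m, n, the grid, and a crude drainage-area proxy; none of these are taken from the paper.

```python
import numpy as np

# 1D stream-power sketch: dz/dt = U - K * A**m * S**n, with drainage area A
# growing downstream and slope S = |dz/dx|. All parameter values illustrative.
nx, dx = 200, 500.0                 # 100 km profile, 500 m spacing
x = np.arange(1, nx + 1) * dx
z = np.linspace(2000.0, 0.0, nx)    # initial ramp; outlet held at base level
U, K, m, n = 1e-3, 1e-5, 0.5, 1.0   # uplift (m/yr), erodibility, exponents
A = x                               # crude drainage-area proxy (Hack-like)

dt = 50.0                           # time step in years
for _ in range(2000):
    S = np.abs(np.gradient(z, dx))
    erosion = K * A**m * S**n
    z[:-1] += dt * (U - erosion[:-1])   # outlet (last node) stays fixed

# Analytic steady-state slope for comparison: S_ss = (U / (K * A**m))**(1/n)
z_steady_slope = (U / (K * A**m))**(1.0 / n)
```

At steady state the slope-area relation S = (U/K A^m)^(1/n) emerges, which is the feedback through which tectonic uplift and precipitation-modulated erodibility interact in the coupled model.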

  1. Integrating Geochemical and Geodynamic Numerical Models of Mantle Evolution and Plate Tectonics

    NASA Astrophysics Data System (ADS)

    Tackley, P. J.; Xie, S.

    2001-12-01

    The thermal and chemical evolution of Earth's mantle and plates are inextricably coupled by the plate tectonic - mantle convective system. Convection causes chemical differentiation, recycling and mixing, while chemical variations affect the convection through physical properties such as density and viscosity, which depend on composition. It is now possible to construct numerical mantle convection models that track the thermo-chemical evolution of major and minor elements, and which can be used to test prospective models and hypotheses regarding Earth's chemical and thermal evolution. Model thermal and chemical structures can be compared to results from seismic tomography, while geochemical signatures (e.g., trace element ratios) can be compared to geochemical observations. The presented two-dimensional model combines a simplified 2-component major element model with tracking of the most important trace elements, using a tracer method. Melting is self-consistently treated using a solidus, with melt placed on the surface as crust. Partitioning of trace elements occurs between melt and residue. Decaying heat-producing elements and secular cooling of the mantle and core provide the driving heat sources. Pseudo-plastic yielding of the lithosphere gives a first-order approximation of plate tectonics, and also allows planets with a rigid lid or intermittent plate tectonics to be modeled simply by increasing the yield strength. Preliminary models with an initially homogeneous mantle show that regions with a HIMU-like signature can be generated by crustal recycling, and regions with high 3He/4He ratios can be generated by residuum recycling. Outgassing of Argon is within the observed range. Models with initially layered mantles will also be investigated. In the future it will be important to include a more realistic bulk compositional model that allows continental crust as well as oceanic crust to form, and to extend the model to three dimensions since toroidal flow may alter

  2. A fully coupled regional atmospheric numerical model for integrated air quality and weather forecasting.

    NASA Astrophysics Data System (ADS)

    Freitas, S. R.; Longo, K. M.; Marecal, V.; Pirre, M.; Gmai, T.

    2012-04-01

    A new numerical modelling tool devoted to local and regional studies of atmospheric chemistry from the surface to the lower stratosphere, designed for both operational and research purposes, will be presented. This model is based on the limited-area model CATT-BRAMS (Coupled Aerosol-Tracer Transport model to the Brazilian developments on the Regional Atmospheric Modeling System; Freitas et al. 2009, Longo et al. 2010), a meteorological model (BRAMS) including transport processes of gases and aerosols (the CATT model). BRAMS is a version of the RAMS model (Walko et al. 2000) adapted to better represent tropical and subtropical processes, with several new features. CATT-BRAMS has been used operationally at CPTEC (Brazilian Center for Weather Prediction and Climate Studies) since 2003, providing coupled weather and air quality forecasts. In Chemistry-CATT-BRAMS (hereafter CCATT-BRAMS) a chemical module is fully coupled to the meteorological/tracer transport model CATT-BRAMS. This module includes gaseous chemistry, photochemistry, scavenging and dry deposition. The CCATT-BRAMS model takes advantage of the BRAMS developments specific to the tropics/subtropics, and of the recent availability of preprocessing tools for chemical mechanisms and of fast codes for photolysis rates. Like BRAMS, this model is conceived to run at horizontal resolutions ranging from a few metres to more than a hundred kilometres, depending on the chosen scientific objective. In the last decade CCATT-BRAMS has been used extensively for applications mainly over South America, with strong emphasis on the Amazonia area and the main South American megacities. An overview of the model development and main applications will be presented.

  3. Runge-Kutta type methods with special properties for the numerical integration of ordinary differential equations

    NASA Astrophysics Data System (ADS)

    Kalogiratou, Z.; Monovasilis, Th.; Psihoyios, G.; Simos, T. E.

    2014-03-01

    In this work we review single-step methods of Runge-Kutta type with special properties. Among them are methods specially tuned to integrate problems that exhibit a pronounced oscillatory character; such problems arise often in celestial mechanics and quantum mechanics. Symplectic methods, exponentially and trigonometrically fitted methods, and minimum phase-lag and phase-fitted methods are presented. These are Runge-Kutta, Runge-Kutta-Nyström and partitioned Runge-Kutta methods. The theory of constructing such methods is given, as well as several specific methods. In order to assess the performance of the methods we have tested 58 methods from all categories. We consider the two-dimensional harmonic oscillator, the two-body problem, the pendulum problem and the orbital problem studied by Stiefel and Bettis. We have also tested the methods on the computation of the eigenvalues of the one-dimensional time-independent Schrödinger equation with the harmonic-oscillator, doubly anharmonic oscillator and exponential potentials.
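
The practical payoff of structure-preserving methods of this family can be seen in a minimal comparison on the harmonic oscillator: forward Euler lets the energy drift without bound, while symplectic Euler (the simplest one-stage partitioned Runge-Kutta scheme) keeps the energy error bounded over long integrations. This is a generic illustration, not one of the 58 methods tested in the paper.

```python
# Harmonic oscillator q'' = -q, with energy H = (p**2 + q**2) / 2 = 0.5 initially.

def euler_step(q, p, h):
    """Forward Euler: not symplectic; energy grows by a factor (1 + h**2) per step."""
    return q + h * p, p - h * q

def symplectic_euler_step(q, p, h):
    """Symplectic Euler: kick p with the old q, then drift q with the new p."""
    p = p - h * q
    return q + h * p, p

h, nsteps = 0.01, 10000
energies = {}
for step in (euler_step, symplectic_euler_step):
    q, p = 1.0, 0.0
    for _ in range(nsteps):
        q, p = step(q, p, h)
    energies[step.__name__] = 0.5 * (p * p + q * q)

# Euler's energy drifts far above 0.5; symplectic Euler stays near 0.5.
print(energies)
```

The bounded energy error of the symplectic scheme is exactly the property that makes such methods attractive for the long-time celestial- and quantum-mechanical integrations discussed in the review.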

  4. Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.; Braithwaite, David W.

    2016-01-01

    In this review, we attempt to integrate two crucial aspects of numerical development: learning the magnitudes of individual numbers and learning arithmetic. Numerical magnitude development involves gaining increasingly precise knowledge of increasing ranges and types of numbers: from non-symbolic to small symbolic numbers, from smaller to larger…

  5. Numerical and Experimental Investigation of Natural Convection in Open-Ended Channels with Application to Building Integrated Photovoltaic (BIPV) Systems

    NASA Astrophysics Data System (ADS)

    Timchenko, V.; Tkachenko, O. A.; Giroux-Julien, S.; Ménézo, C.

    2015-05-01

    Numerical and experimental investigations of the flow and heat transfer in an open-ended channel formed by a double-skin façade have been undertaken in order to improve understanding of the phenomena and to apply it to passive cooling of building-integrated photovoltaic systems. Both uniform heating and non-uniform heating configurations, in which heat sources alternated with unheated zones on both skins, were studied. Different periodic and asymmetric heating modes have been considered for the same aspect ratio of 1/15 (wall distance to wall height), periodicities of 1/15 and 4/15 for the heated/unheated zones, and a heat input of 220 W/m2. In the computational study a three-dimensional transient LES simulation was carried out. It is shown that, in comparison to the uniform heating configuration, the non-uniform heating configuration enhances both convective heat transfer and the chimney effect.

  6. Numerical simulation of non-viscous liquid pinch off using a coupled level set-boundary integral method

    SciTech Connect

    Garzon, Maria; Sethian, James A.; Gray, Leonard J

    2009-01-01

    Simulations of the pinch-off of an inviscid fluid column are carried out based upon a potential flow model with capillary forces. The interface location and the time evolution of the free surface boundary condition are both approximated by means of level set techniques on a fixed domain. The interface velocity is obtained via a Galerkin boundary integral solution of the 3D axisymmetric Laplace equation. A short-time analytical solution of the Rayleigh-Taylor instability in a liquid column is available, and this result is compared with our numerical experiments to validate the algorithm. The method is capable of handling pinch-off and after-pinch-off events, and simulations showing the time evolution of the fluid tube are presented.

  7. Magnitude Knowledge: The Common Core of Numerical Development

    ERIC Educational Resources Information Center

    Siegler, Robert S.

    2016-01-01

    The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: 1) representing increasingly precisely the magnitudes of non-symbolic…

  9. An integrated numerical framework for water quality modelling in cold-region rivers: A case of the lower Athabasca River.

    PubMed

    Shakibaeinia, Ahmad; Kashyap, Shalini; Dibike, Yonas B; Prowse, Terry D

    2016-11-01

    There is a great deal of interest in determining the state and variations of water quality parameters in the lower Athabasca River (LAR) ecosystem, northern Alberta, Canada, due to industrial developments in the region. As the LAR is a cold-region river, the annual cycle of ice cover formation and breakup plays a key role in water quality transformation and transportation processes. An integrated deterministic numerical modelling framework is developed and applied for long-term and detailed simulation of the state and variation (spatial and temporal) of major water quality constituents in both open-water and ice-covered conditions in the LAR. The framework is based on a 1D and a 2D hydrodynamic and water quality model externally coupled with 1D river ice process models to account for cold-season effects. The models are calibrated/validated using available measured data and applied to the simulation of dissolved oxygen (DO) and nutrients (i.e., nitrogen and phosphorus). The results show the effect of winter ice cover in reducing the DO concentration, and a fluctuating temporal trend for DO and nutrients during summer periods, with substantial differences in concentration between the main channel and flood plains. This numerical framework can be the basis for future water quality scenario-based studies in the LAR.

  11. Accurate calculation of chemical shifts in highly dynamic H2@C60 through an integrated quantum mechanics/molecular dynamics scheme.

    PubMed

    Jiménez-Osés, Gonzalo; García, José I; Corzana, Francisco; Elguero, José

    2011-05-20

    A new protocol combining classical MD simulations and DFT calculations is presented to accurately estimate the (1)H NMR chemical shifts of highly mobile guest-host systems and their thermal dependence. This strategy has been successfully applied to the hydrogen molecule trapped inside C(60) fullerene, an unresolved and challenging prototypical case for which experimental values had never been reproduced. The dependence of the final values on the theoretical method, and its implications for avoiding overinterpretation of the obtained results, are carefully described.

  12. Beyond transition state theory: accurate description of nuclear quantum effects on the rate and equilibrium constants of chemical reactions using Feynman path integrals.

    PubMed

    Vanícek, Jirí

    2011-01-01

    Nuclear tunneling and other nuclear quantum effects have been shown to play a significant role in molecules as large as enzymes even at physiological temperatures. I discuss how these quantum phenomena can be accounted for rigorously using Feynman path integrals in calculations of the equilibrium and kinetic isotope effects as well as of the temperature dependence of the rate constant. Because these calculations are extremely computationally demanding, special attention is devoted to increasing the computational efficiency by orders of magnitude by employing efficient path integral estimators.
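
The discretized imaginary-time path integral underlying such calculations can be sketched deterministically for a 1D harmonic oscillator: factor the Boltzmann operator into P high-temperature slices (the primitive Trotter approximation), compose them by matrix multiplication on a grid, and take the trace to approximate the partition function. This is a generic textbook sketch (hbar = m = omega = 1), not the efficient estimators developed in the paper.

```python
import numpy as np

# Discretized imaginary-time path integral for a 1D harmonic oscillator.
beta, P = 2.0, 64                 # inverse temperature and number of time slices
tau = beta / P
x = np.linspace(-6.0, 6.0, 240)
dx = x[1] - x[0]
V = 0.5 * x**2

# High-temperature density matrix rho(x, x'; tau), primitive approximation:
# free-particle kinetic Gaussian times symmetrized potential factor.
X, Xp = np.meshgrid(x, x, indexing="ij")
rho_tau = (np.sqrt(1.0 / (2.0 * np.pi * tau))
           * np.exp(-(X - Xp)**2 / (2.0 * tau)
                    - 0.5 * tau * (V[:, None] + V[None, :])))

# Compose P slices by matrix multiplication; each product carries one dx.
rho_beta = np.linalg.matrix_power(rho_tau * dx, P)
Z = np.trace(rho_beta)                      # partition function estimate
Z_exact = 1.0 / (2.0 * np.sinh(beta / 2.0))  # exact result for the oscillator
print(Z, Z_exact)
```

The Trotter error vanishes as O(1/P^2), so even this brute-force version converges quickly; the computational challenge addressed in the paper arises in many dimensions, where matrix products must be replaced by Monte Carlo sampling with good estimators.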

  13. Numerical study on surface plasmon polariton behaviors in periodic metal-dielectric structures using a plane-wave-assisted boundary integral-equation method.

    PubMed

    Kiang, Yean-Woei; Wang, Jyh-Yang; Yang, C C

    2007-07-01

    A novel hybrid technique based on the boundary integral-equation method is proposed for studying surface plasmon polariton behaviors in two-dimensional periodic structures. Considering the periodicity of the problem, we use the plane-wave expansion concept and the periodic boundary condition instead of the periodic Green's function. The diffraction efficiency can then be readily calculated once the equivalent electric and magnetic currents are solved, which avoids invoking the numerical calculation of the radiation integral. The numerical validity is verified with the cases of highly conducting materials and practical metals. Numerical convergence can be easily achieved even for an incident angle as large as 80°. Based on the numerical scheme, a metal-dielectric wavy structure is designed for enhancing the transmittance of an optical signal through the structure. The excitation of the coupled surface plasmon polaritons responsible for the high transmission is demonstrated.

  14. FeynDyn: A MATLAB program for fast numerical Feynman integral calculations for open quantum system dynamics on GPUs

    NASA Astrophysics Data System (ADS)

    Dattani, Nikesh S.

    2013-12-01

    Programming language: MATLAB R2012a. Computer: See "Operating system". Operating system: Any operating system that can run MATLAB R2007a or above. Classification: 4.4. Nature of problem: Calculating the dynamics of the reduced density operator of an open quantum system. Solution method: Numerical Feynman integral. Running time: Depends on the input parameters. See the main text for examples.

  15. Integrating a Numerical Taxonomic Method and Molecular Phylogeny for Species Delimitation of Melampsora Species (Melampsoraceae, Pucciniales) on Willows in China.

    PubMed

    Zhao, Peng; Wang, Qing-Hong; Tian, Cheng-Ming; Kakishima, Makoto

    2015-01-01

    The species in genus Melampsora are the causal agents of leaf rust diseases on willows in natural habitats and plantations. However, the classification and recognition of species diversity are challenging because morphological characteristics are scant and morphological variation in Melampsora on willows has not been thoroughly evaluated. Thus, the taxonomy of Melampsora species on willows remains confused, especially in China where 31 species were reported based on either European or Japanese taxonomic systems. To clarify the species boundaries of Melampsora species on willows in China, we tested two approaches for species delimitation inferred from morphological and molecular variations. Morphological species boundaries were determined based on numerical taxonomic analyses of morphological characteristics in the uredinial and telial stages by cluster analysis and one-way analysis of variance. Phylogenetic species boundaries were delineated based on the generalized mixed Yule-coalescent (GMYC) model analysis of the sequences of the internal transcribed spacer (ITS1 and ITS2) regions including the 5.8S and D1/D2 regions of the large nuclear subunit of the ribosomal RNA gene. Numerical taxonomic analyses of 14 morphological characteristics recognized in the uredinial-telial stages revealed 22 morphological species, whereas the GMYC results recovered 29 phylogenetic species. In total, 17 morphological species were in concordance with the phylogenetic species and 5 morphological species were in concordance with 12 phylogenetic species. Both the morphological and molecular data supported 14 morphological characteristics, including 5 newly recognized characteristics and 9 traditionally emphasized characteristics, as effective for the differentiation of Melampsora species on willows in China. 
Based on the concordance and discordance of the two species delimitation approaches, we concluded that integrative taxonomy by using both morphological and molecular variations was

  18. Hydro-geophysical observations integration in numerical model: case study in Mediterranean karstic unsaturated zone (Larzac, france)

    NASA Astrophysics Data System (ADS)

    Champollion, Cédric; Fores, Benjamin; Le Moigne, Nicolas; Chéry, Jean

    2016-04-01

    Karstic hydro-systems are highly non-linear and heterogeneous, yet they are one of the main water resources in the Mediterranean area. Neither local measurements in boreholes nor analysis at the spring can take into account the variability of the water storage. For some years now, ground-based geophysical measurements (such as gravity, electrical resistivity or seismological data) have allowed water storage to be followed in heterogeneous hydrosystems at an intermediate scale between boreholes and basin. Beyond classical rigorous monitoring, the integration of geophysical data into hydrological numerical models is needed for both process interpretation and quantification. A karstic geophysical observatory (GEK: Géodésie de l'Environnement Karstique, OSU OREME, SNO H+) has been set up in the Mediterranean area in the south of France. The observatory is set on more than 250 m of karstified dolomite, with an unsaturated zone of ~150 m thickness. At the observatory, water level in boreholes, evapotranspiration and rainfall are the classical hydro-meteorological observations, complemented by continuous gravity, resistivity and seismological measurements. The main objective of the study is the modelling of the whole observation dataset with an explicit one-dimensional unsaturated numerical model. The Hydrus software is used for the explicit modelling of water storage and transfer, and links the different observations (geophysics, water level, evapotranspiration) with the water saturation. Unknown hydrological parameters (permeability, porosity) are retrieved from stochastic inversions. The scales of investigation of the different observations are discussed in light of the modelling results. A sensitivity study of the measurements against the model is carried out, and key hydro-geological processes of the site are presented.

  19. Fully Coriolis-coupled quantum studies of the H + O2 (upsilon i = 0-2, j i = 0,1) --> OH + O reaction on an accurate potential energy surface: integral cross sections and rate constants.

    PubMed

    Lin, Shi Ying; Sun, Zhigang; Guo, Hua; Zhang, Dong Hui; Honvault, Pascal; Xie, Daiqian; Lee, Soo-Y

    2008-01-31

    We present accurate quantum calculations of the integral cross section and rate constant for the H + O2 --> OH + O combustion reaction on a recently developed ab initio potential energy surface using parallelized time-dependent and Chebyshev wavepacket methods. Partial wave contributions up to J = 70 were computed with full Coriolis coupling, which enabled us to obtain the initial state-specified integral cross sections up to 2.0 eV of the collision energy and thermal rate constants up to 3000 K. The integral cross sections show a large reaction threshold due to the quantum endothermicity of the reaction, and they monotonically increase with the collision energy. As a result, the temperature dependence of the rate constant is of the Arrhenius type. In addition, it was found that reactivity is enhanced by reactant vibrational excitation. The calculated thermal rate constant shows a significant improvement over that obtained on the DMBE IV potential, but it still underestimates the experimental consensus.
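
The step from integral cross sections to thermal rate constants is Boltzmann averaging over collision energy, k(T) = sqrt(8/(pi*mu*(kB*T)^3)) * Int sigma(E)*E*exp(-E/(kB*T)) dE. A minimal sketch in reduced units with a model "line-of-centers" threshold cross section (an assumption chosen for illustration because it admits an analytic check, not the H + O2 surface):

```python
import numpy as np

# Boltzmann averaging of an integral cross section into a thermal rate constant.
# Reduced units: mu = kB = 1. Model parameters are illustrative only.
mu, E0, sigma0, T = 1.0, 0.5, 1.0, 1.0

def sigma(E):
    """Line-of-centers cross section with reaction threshold E0."""
    return np.where(E > E0, sigma0 * (1.0 - E0 / E), 0.0)

E = np.linspace(E0, E0 + 40.0 * T, 8001)
integrand = sigma(E) * E * np.exp(-E / T)

# Composite trapezoid rule (kept explicit for portability)
dE = E[1] - E[0]
integral = dE * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
k_num = np.sqrt(8.0 / (np.pi * mu * T**3)) * integral

# Analytic result for this model: k = sigma0 * sqrt(8*T/(pi*mu)) * exp(-E0/T)
k_exact = sigma0 * np.sqrt(8.0 * T / (np.pi * mu)) * np.exp(-E0 / T)
print(k_num, k_exact)
```

The exponential threshold factor exp(-E0/kB*T) is what produces the Arrhenius-type temperature dependence noted in the abstract for the endothermic H + O2 reaction.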

  20. Inter-annual variability of air pollutants over East Asia: an integrated analysis using satellite, lidar and numerical model.

    NASA Astrophysics Data System (ADS)

    Yumimoto, K.; Uno, I.; Kuribayashi, M.; Miyazaki, K.; Nishizawa, T.

    2014-12-01

    Air quality in East Asia exhibits drastic temporal and spatial variability. Rapid economic growth in the last three decades has increased anthropogenic emissions of air pollutants and caused deterioration of air quality in both source and downwind regions. The unprecedented heavy PM2.5 pollution over central China in January 2013 reached a maximum PM2.5 concentration of 996 μg/m3 and raised critical environmental issues (e.g., mortality, human health, social activity and trans-boundary transport). Recently, efforts to reduce anthropogenic emissions (e.g., emission regulations and improvements in emission factors and removal efficiencies) have decelerated their growth rates. In fact, Asian SO2 emissions are estimated to have been decreasing since 2007 [Kurokawa et al., 2013]. However, the growth rates of other pollutant emissions (e.g., NOx and PM10) remain high. To understand the life cycle of pollutants (emission, transport, reaction and deposition) and their temporal and spatial variation, an integrated analysis using observations and a numerical model (chemical transport model; CTM) is useful. In this study, we installed a comprehensive observation operator system, which converts model results into observed variables, into the GEOS-Chem CTM. A long-term (2005-2013) full-chemistry simulation over East Asia was performed, and the simulation results were translated into tropospheric NO2 and SO2 columns and vertical profiles of the aerosol extinction coefficient, equivalent to satellite measurements and in-situ lidar network observations. Combining the CTM and observations, and integrating analyses of aerosols over the downwind region and of their precursors over the source region, will provide important insights into the temporal and spatial variation of air pollutants over East Asia.

  1. Numerical modeling of elastic waves in inhomogeneous anisotropic media using 3D-elastodynamic finite integration technique

    NASA Astrophysics Data System (ADS)

    Chinta, Prashanth K.; Mayer, K.; Langenberg, K. J.

    2012-05-01

    Nondestructive Evaluation (NDE) of elastic anisotropic media is very complex because of the directional dependence of the elastic stiffness tensor. Modeling of elastic waves in such materials gives intuitive knowledge about the propagation and scattering phenomena. Wave propagation in three-dimensional anisotropic media offers deep insight into the transitions between the different elastic wave modes (i.e., mode conversion) and the scattering of these waves by inhomogeneities present in the material. The numerical tool three-dimensional elastodynamic finite integration technique (3D-EFIT) has proven to be very efficient for the modeling of elastic waves in very complex geometries. The 3D-EFIT is validated using an analytical approach based on the Radon transform. Simulation results of 3D-EFIT applied to inhomogeneous austenitic steel welds and wood structures are presented. In the first application the geometry consists of an austenitic steel weld that joins two isotropic steel blocks; vertically transversely isotropic (VTI) austenitic steel is used. Convolutional perfectly matched layers are applied at the boundaries supported by isotropic steel. In the second application, wave propagation in an orthotropic wooden structure with an air cavity inside is investigated. The wave propagation results are illustrated using time-domain elastic wave snapshots.

  2. Integrated analysis of numerous heterogeneous gene expression profiles for detecting robust disease-specific biomarkers and proposing drug targets.

    PubMed

    Amar, David; Hait, Tom; Izraeli, Shai; Shamir, Ron

    2015-09-18

    Genome-wide expression profiling has revolutionized biomedical research; vast amounts of expression data from numerous studies of many diseases are now available. Making the best use of this resource in order to better understand disease processes and treatment remains an open challenge. In particular, disease biomarkers detected in case-control studies suffer from low reliability and are only weakly reproducible. Here, we present a systematic integrative analysis methodology to overcome these shortcomings. We assembled and manually curated more than 14,000 expression profiles spanning 48 diseases and 18 expression platforms. We show that when studying a particular disease, judicious utilization of profiles from other diseases and information on disease hierarchy improves classification quality, avoids overoptimistic evaluation of that quality, and enhances disease-specific biomarker discovery. This approach yielded specific biomarkers for 24 of the analyzed diseases. We demonstrate how to combine these biomarkers with large-scale interaction, mutation and drug target data, forming a highly valuable disease summary that suggests novel directions in disease understanding and drug repurposing. Our analysis also estimates the number of samples required to reach a desired level of biomarker stability. This methodology can greatly improve the exploitation of the mountain of expression profiles for better disease analysis.

  3. Numerical Prediction of the Performance of Integrated Planar Solid-Oxide Fuel Cells, with Comparisons of Results from Several Codes

    SciTech Connect

    G. L. Hawkes; J. E. O'Brien; B. A. Haberman; A. J. Marquis; C. M. Baca; D. Tripepi; P. Costamagna

    2008-06-01

    A numerical study of the thermal and electrochemical performance of a single-tube Integrated Planar Solid Oxide Fuel Cell (IP-SOFC) has been performed. Results obtained from two finite-volume computational fluid dynamics (CFD) codes, FLUENT and SOHAB, and from a two-dimensional in-house finite-volume GENOA model are presented and compared. Each tool uses physical and geometric models of differing complexity, and comparisons are made to assess their relative merits. Several single-tube simulations were run using each code over a range of operating conditions. The results include polarization curves and distributions of local current density, composition and temperature. Comparisons of these results are discussed, along with their relationship to the respective embedded phenomenological models for activation losses, fluid flow and mass transport in porous media. In general, agreement between the codes was within 15% for overall parameters such as operating voltage and maximum temperature. The CFD results clearly show the effects of internal structure on the distributions of gas flows and related quantities within the electrochemical cells.

  4. Scattering of electromagnetic radiation based on numerical calculation of the T-matrix through its integral representation

    NASA Astrophysics Data System (ADS)

    Tricoli, Ugo; Pfeilsticker, Klaus

    2014-08-01

    A novel numerical technique is presented to calculate the T-matrix for a single particle through the use of the volume integral equation for electromagnetic scattering. It is based on the Coupled Dipole Approximation (CDA) method of O. J. F. Martin et al. The basic procedure uses the Lippmann-Schwinger and the Dyson equations in parallel to solve iteratively for the T-matrix and the Green's function dyadic, respectively. The boundary conditions of the particle are thus automatically satisfied. The method can be used to evaluate the optical properties (e.g. the Müller matrix) of anisotropic, inhomogeneous and asymmetric particles, in both the far and the near field, giving as output the T-matrix, which depends only on the scatterer itself and is independent of the polarization and direction of the incoming field. The accuracy of the method is estimated through comparison with the analytical spherical case (Mie theory) as well as with non-spherical cubic ice particles.

  5. 3D Numerical Optimization Modelling of Ivancich landslides (Assisi, Italy) via integration of remote sensing and in situ observations.

    NASA Astrophysics Data System (ADS)

    Castaldo, Raffaele; De Novellis, Vincenzo; Lollino, Piernicola; Manunta, Michele; Tizzani, Pietro

    2015-04-01

    The new challenge that research in slope instability phenomena is going to tackle is the effective integration and joint exploitation of remote sensing measurements with in situ data and observations to study and understand the sub-surface interactions, the triggering causes and, in general, the long-term behaviour of the investigated landslide phenomenon. In this context, a very promising approach is represented by Finite Element (FE) techniques, which allow us to consider the intrinsic complexity of mass movement phenomena and to benefit effectively from multi-source observations and data. Accordingly, we develop a three-dimensional (3D) numerical model of the Ivancich (Assisi, Central Italy) instability phenomenon. In particular, we apply an inverse FE method based on a Genetic Algorithm optimization procedure, benefitting from advanced DInSAR measurements, retrieved through the full-resolution Small Baseline Subset (SBAS) technique, and an inclinometric array distribution. To this purpose we consider the SAR images acquired from descending orbit by the COSMO-SkyMed (CSK) X-band radar constellation from December 2009 to February 2012. Moreover, the optimization input dataset is completed by an array of eleven inclinometer measurements, from 1999 to 2006, distributed along the unstable mass. The landslide body is formed of debris material sliding on an arenaceous marl substratum, with a thin shear band, detected using borehole and inclinometric data, at depths ranging from 20 to 60 m. Specifically, we consider the active role of this shear band in controlling the landslide evolution process. A large field monitoring dataset of the landslide process, including at-depth piezometric and geological borehole observations, was available. The integration of these datasets allows us to develop a 3D structural geological model of the considered slope. To investigate the dynamic evolution of a landslide, various physical approaches can be considered

  6. Numerical modeling of the 3D dynamics of ultrasound contrast agent microbubbles using the boundary integral method

    NASA Astrophysics Data System (ADS)

    Wang, Qianxi; Manmi, Kawa; Calvisi, Michael L.

    2015-02-01

    Ultrasound contrast agents (UCAs) are microbubbles stabilized with a shell typically of lipid, polymer, or protein and are emerging as a unique tool for noninvasive therapies ranging from gene delivery to tumor ablation. While various models have been developed to describe the spherical oscillations of contrast agents, the treatment of nonspherical behavior has received less attention. However, the nonspherical dynamics of contrast agents are thought to play an important role in therapeutic applications, for example, enhancing the uptake of therapeutic agents across cell membranes and tissue interfaces, and causing tissue ablation. In this paper, a model for nonspherical contrast agent dynamics based on the boundary integral method is described. The effects of the encapsulating shell are approximated by adapting Hoff's model for thin-shell, spherical contrast agents. A high-quality mesh of the bubble surface is maintained by implementing a hybrid approach of the Lagrangian method and elastic mesh technique. The numerical model agrees well with a modified Rayleigh-Plesset equation for encapsulated spherical bubbles. Numerical analyses of the dynamics of UCAs in an infinite liquid and near a rigid wall are performed in parameter regimes of clinical relevance. The oscillation amplitude and period decrease significantly due to the coating. A bubble jet forms when the amplitude of ultrasound is sufficiently large, as occurs for bubbles without a coating; however, the threshold amplitude required to incite jetting increases due to the coating. When a UCA is near a rigid boundary subject to acoustic forcing, the jet is directed towards the wall if the acoustic wave propagates perpendicular to the boundary. When the acoustic wave propagates parallel to the rigid boundary, the jet direction has components both along the wave direction and towards the boundary that depend mainly on the dimensionless standoff distance of the bubble from the boundary. In all cases, the jet
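
    The spherical baseline that the boundary-integral model is validated against can be illustrated with a standard (uncoated) Rayleigh-Plesset integration. The sketch below uses illustrative water-like parameter values, not values from the paper, and omits the shell terms of the authors' modified equation:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (roughly water at 20 C; not from the paper)
rho   = 998.0      # liquid density [kg/m^3]
mu    = 1.0e-3     # liquid viscosity [Pa s]
sigma = 0.072      # surface tension [N/m]
p0    = 101325.0   # ambient pressure [Pa]
kappa = 1.4        # polytropic exponent of the gas
R0    = 2.0e-6     # equilibrium bubble radius [m]
pa    = 30e3       # acoustic pressure amplitude [Pa]
f     = 1.0e6      # driving frequency [Hz]

def rp_rhs(t, y):
    """Classical Rayleigh-Plesset equation as a first-order system y = (R, Rdot)."""
    R, Rdot = y
    p_gas = (p0 + 2*sigma/R0) * (R0/R)**(3*kappa)   # polytropic gas pressure
    p_ac  = -pa * np.sin(2*np.pi*f*t)               # acoustic forcing
    Rddot = ((p_gas - 2*sigma/R - 4*mu*Rdot/R - p0 - p_ac) / rho
             - 1.5*Rdot**2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rp_rhs, (0.0, 5.0/f), [R0, 0.0], method='LSODA',
                rtol=1e-8, atol=1e-12, max_step=1.0/(200*f))
print("max R/R0 =", sol.y[0].max() / R0)
```

Adding Hoff-type shell elasticity and viscosity terms to `rp_rhs` would reproduce the coated-bubble behaviour (reduced amplitude and period) described above.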

  7. Numerical analysis of wellbore integrity: results from a field study of a natural CO2 reservoir production well

    NASA Astrophysics Data System (ADS)

    Crow, W.; Gasda, S. E.; Williams, D. B.; Celia, M. A.; Carey, J. W.

    2008-12-01

    An important aspect of the risk associated with geological CO2 sequestration is the integrity of existing wellbores that penetrate geological layers targeted for CO2 injection. CO2 leakage may occur through multiple pathways along a wellbore, including through micro-fractures and micro-annuli within the "disturbed zone" surrounding the well casing. The effective permeability of this zone is a key parameter of wellbore integrity required for validation of numerical models. This parameter depends on a number of complex factors, including long-term attack by aggressive fluids, poor well completion and actions related to production of fluids through the wellbore. Recent studies have sought to replicate downhole conditions in the laboratory to identify the mechanisms and rates at which cement deterioration occurs. However, field tests are essential to understanding the in situ leakage properties of the millions of wells that exist in the mature sedimentary basins in North America. In this study, we present results from a field study of a 30-year-old production well from a natural CO2 reservoir. The wellbore was potentially exposed to a 96% CO2 fluid from the time of cement placement, and therefore cement degradation may be a significant factor leading to leakage pathways along this wellbore. A series of downhole tests was performed, including bond logs and extraction of sidewall cores. The cores were analyzed in the laboratory for mineralogical and hydrologic properties. A pressure test was conducted over an 11-ft section of well to determine the extent of hydraulic communication along the exterior of the well casing. Through analysis of this pressure test data, we are able to estimate the effective permeability of the disturbed zone along the exterior of the wellbore over this 11-ft section. We find the estimated range of effective permeability from the field test is consistent with laboratory analysis and bond log data.
The cement interfaces with casing and/or formation are

  8. An Automated High-Throughput Metabolic Stability Assay Using an Integrated High-Resolution Accurate Mass Method and Automated Data Analysis Software.

    PubMed

    Shah, Pranav; Kerns, Edward; Nguyen, Dac-Trung; Obach, R Scott; Wang, Amy Q; Zakharov, Alexey; McKew, John; Simeonov, Anton; Hop, Cornelis E C A; Xu, Xin

    2016-10-01

    Advancement of in silico tools would be enabled by the availability of data for metabolic reaction rates and intrinsic clearance (CLint) of a diverse compound structure data set by specific metabolic enzymes. Our goal is to measure CLint for a large set of compounds with each major human cytochrome P450 (P450) isozyme. To achieve our goal, it is of utmost importance to develop an automated, robust, sensitive, high-throughput metabolic stability assay that can efficiently handle a large volume of compound sets. The substrate depletion method [in vitro half-life (t1/2) method] was chosen to determine CLint. The assay (384-well format) consisted of three parts: 1) a robotic system for incubation and sample cleanup; 2) two different integrated, ultraperformance liquid chromatography/mass spectrometry (UPLC/MS) platforms to determine the percent remaining of parent compound; and 3) an automated data analysis system. The CYP3A4 assay was evaluated using two long t1/2 compounds, carbamazepine and antipyrine (t1/2 > 30 minutes); one moderate t1/2 compound, ketoconazole (10 < t1/2 < 30 minutes); and two short t1/2 compounds, loperamide and buspirone (t½ < 10 minutes). Interday and intraday precision and accuracy of the assay were within acceptable range (∼12%) for the linear range observed. Using this assay, CYP3A4 CLint and t1/2 values for more than 3000 compounds were measured. This high-throughput, automated, and robust assay allows for rapid metabolic stability screening of large compound sets and enables advanced computational modeling for individual human P450 isozymes. PMID:27417180
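
    The substrate depletion (in vitro t1/2) method reduces to fitting a first-order decay to the percent-remaining data. A minimal sketch with hypothetical depletion data and an assumed 0.5 mg/mL microsomal protein concentration (both illustrative, not from the paper):

```python
import numpy as np

# Hypothetical depletion data: % parent compound remaining vs time [min]
t = np.array([0.0, 5.0, 10.0, 20.0, 30.0])
remaining = np.array([100.0, 71.0, 50.0, 25.0, 12.5])

# First-order depletion: ln(remaining) = ln(100) - k*t
k = -np.polyfit(t, np.log(remaining), 1)[0]   # depletion rate [1/min]
t_half = np.log(2.0) / k                      # in vitro half-life [min]

# Scale to intrinsic clearance; 0.5 mg/mL protein is an assumed value
protein = 0.5                                 # [mg protein / mL incubation]
cl_int = k / protein * 1000.0                 # [uL / min / mg protein]

print(f"t1/2 = {t_half:.1f} min, CLint = {cl_int:.0f} uL/min/mg")
```

With data that halve every 10 minutes, the fit recovers t1/2 near 10 min; real assays additionally correct for nonspecific binding and incubation conditions.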

  9. An Automated High-Throughput Metabolic Stability Assay Using an Integrated High-Resolution Accurate Mass Method and Automated Data Analysis Software

    PubMed Central

    Shah, Pranav; Kerns, Edward; Nguyen, Dac-Trung; Obach, R. Scott; Wang, Amy Q.; Zakharov, Alexey; McKew, John; Simeonov, Anton; Hop, Cornelis E. C. A.

    2016-01-01

    Advancement of in silico tools would be enabled by the availability of data for metabolic reaction rates and intrinsic clearance (CLint) of a diverse compound structure data set by specific metabolic enzymes. Our goal is to measure CLint for a large set of compounds with each major human cytochrome P450 (P450) isozyme. To achieve our goal, it is of utmost importance to develop an automated, robust, sensitive, high-throughput metabolic stability assay that can efficiently handle a large volume of compound sets. The substrate depletion method [in vitro half-life (t1/2) method] was chosen to determine CLint. The assay (384-well format) consisted of three parts: 1) a robotic system for incubation and sample cleanup; 2) two different integrated, ultraperformance liquid chromatography/mass spectrometry (UPLC/MS) platforms to determine the percent remaining of parent compound, and 3) an automated data analysis system. The CYP3A4 assay was evaluated using two long t1/2 compounds, carbamazepine and antipyrine (t1/2 > 30 minutes); one moderate t1/2 compound, ketoconazole (10 < t1/2 < 30 minutes); and two short t1/2 compounds, loperamide and buspirone (t½ < 10 minutes). Interday and intraday precision and accuracy of the assay were within acceptable range (∼12%) for the linear range observed. Using this assay, CYP3A4 CLint and t1/2 values for more than 3000 compounds were measured. This high-throughput, automated, and robust assay allows for rapid metabolic stability screening of large compound sets and enables advanced computational modeling for individual human P450 isozymes. PMID:27417180

  10. A two-loop sparse matrix numerical integration procedure for the solution of differential/algebraic equations: Application to multibody systems

    NASA Astrophysics Data System (ADS)

    Shabana, Ahmed A.; Hussein, Bassam A.

    2009-11-01

    In this paper, a two-loop implicit sparse matrix numerical integration (TLISMNI) procedure for the solution of constrained rigid and flexible multibody system differential and algebraic equations is proposed. The proposed method ensures that the kinematic constraint equations are satisfied at the position, velocity and acceleration levels. In this method, a sparse Lagrangian augmented form of the equations of motion that ensures that the constraints are satisfied at the acceleration level is first used to solve for all the accelerations and Lagrange multipliers. The independent coordinates and velocities are then identified and integrated using HHT or Newmark formulas, expressed in this paper in terms of the independent accelerations only. The constraint equations at the position level are then used within an iterative Newton-Raphson procedure to determine the dependent coordinates. The dependent velocities are determined by solving a linear system of algebraic equations. In order to effectively exploit efficient sparse matrix techniques and have minimum storage requirements, a two-loop iterative method is proposed. Equally important, the proposed method avoids the use of numerical differentiation, which is commonly associated with the use of implicit integration methods in multibody system algorithms. Numerical examples are presented in order to demonstrate the use of the new integration procedure.
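
    The Newmark family of formulas mentioned above can be sketched on the simplest possible case: an undamped single-degree-of-freedom oscillator with the average-acceleration variant (beta = 1/4, gamma = 1/2), not the paper's constrained multibody formulation:

```python
import numpy as np

# Newmark-beta (average acceleration) for m*a + k*x = 0
m, k = 1.0, 4.0 * np.pi**2        # natural period T = 1 s
beta, gamma = 0.25, 0.5
dt, nsteps = 0.01, 100            # integrate exactly one period

x, v = 1.0, 0.0                   # initial displacement / velocity
a = -k * x / m                    # consistent initial acceleration
for _ in range(nsteps):
    # Newmark predictors from the current state
    x_pred = x + dt*v + dt**2 * (0.5 - beta) * a
    v_pred = v + dt * (1.0 - gamma) * a
    # solve the implicit equation m*a_new + k*(x_pred + beta*dt^2*a_new) = 0
    a_new = -k * x_pred / (m + k * beta * dt**2)
    x = x_pred + beta * dt**2 * a_new
    v = v_pred + gamma * dt * a_new
    a = a_new

print(f"x(T) = {x:.4f}  (exact value is 1.0)")
```

The average-acceleration scheme is unconditionally stable and second-order accurate, which is why it (and the numerically damped HHT variant) is a common choice for stiff multibody equations.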

  11. Study on the properties of the Integrated Precipitable Water (IPW) maps derived by GPS, SAR interferometry and numerical forecasting models

    NASA Astrophysics Data System (ADS)

    Mateus, Pedro; Nico, Giovanni; Tomé, Ricardo; Catalão, João.; Miranda, Pedro

    2010-05-01

    The knowledge of the spatial distribution of relative changes in atmospheric Integrated Precipitable Water (IPW) density is important for climate studies and numerical weather forecasting. An increase (or decrease) of the IPW density affects the phase of electromagnetic waves. For this reason, this quantity can be measured by techniques such as GPS and space-borne SAR interferometry (InSAR). The aim of this work is to study the isotropic properties of the IPW maps obtained by GPS and InSAR measurements and derived by a numerical weather forecasting model. The existence of a power law in their phase spectrum is verified. The relationship between the interferometric phase delay and the topographic height of the observed area is also investigated. The Lisbon region, Portugal, was chosen as a study area. This region is monitored by a network of GPS permanent stations covering an area of about square kilometers. The network consists of 12 GPS stations, of which 4 belong to the Instituto Geográfico Português (IGP) and 8 to the Instituto Geográfico do Exercito (IGEOE). All stations were installed between 1997 and the beginning of 2009. The GAMIT package was used to process GPS data and to estimate the total zenith delay with a temporal sampling of 15 minutes. A set of 25 SAR interferograms with a 35-day temporal baseline was processed using ASAR-ENVISAT data acquired over the Lisbon region during the period from 2003 to 2005 and from 2008 to 2009. These interferograms give an estimate of the variation of the global atmospheric delay. Terrain deformations related to known geological phenomena in the Lisbon area are negligible at this time scale of 35 days. Furthermore, two interferometric SAR images acquired by ERS-1/2 over the Lisbon region on 20/07/1995 and 21/07/1995, respectively, and thus with a temporal baseline of just 1 day, were also processed. The Weather Research & Forecasting Model (WRF) was used to generate the three-dimensional fields of temperature

  12. Integrated numerical modeling of a landslide early warning system in a context of adaptation to future climatic pressures

    NASA Astrophysics Data System (ADS)

    Khabarov, Nikolay; Huggel, Christian; Obersteiner, Michael; Ramírez, Juan Manuel

    2010-05-01

    Mountain regions are typically characterized by rugged terrain which is susceptible to different types of landslides during high-intensity precipitation. Landslides account for billions of dollars of damage and many casualties, and are expected to increase in frequency in the future due to a projected increase of precipitation intensity. Early warning systems (EWS) are thought to be a primary tool for related disaster risk reduction and climate change adaptation to extreme climatic events and hydro-meteorological hazards, including landslides. An EWS for hazards such as landslides consists of different components, including environmental monitoring instruments (e.g. rainfall or flow sensors), physical or empirical process models to support decision-making (warnings, evacuation), data and voice communication, organization and logistics-related procedures, and population response. Considering this broad range, EWS are highly complex systems, and it is therefore difficult to understand the effect of the different components and changing conditions on the overall performance, ultimately being expressed as human lives saved or structural damage reduced. In this contribution we present a further development of our approach to assess a landslide EWS in an integral way, both at the system and component level. We utilize a numerical model using 6-hour rainfall data as basic input. A threshold function based on a rainfall-intensity/duration relation was applied as a decision criterion for evacuation. Damage to infrastructure and human lives was defined as a linear function of landslide magnitude, with the magnitude modelled using a power function of landslide frequency. Correct evacuation was assessed with a 'true' reference rainfall dataset versus a dataset of artificially reduced quality imitating the observation system component. Performance of the EWS using these rainfall datasets was expressed in monetary terms (i.e. damage related to false and correct evacuation). We

  13. Numerical Evaluation of the "Dual-Kernel Counter-flow" Matric Convolution Integral that Arises in Discrete/Continuous (D/C) Control Theory

    NASA Technical Reports Server (NTRS)

    Nixon, Douglas D.

    2009-01-01

    Discrete/Continuous (D/C) control theory is a new generalized theory of discrete-time control that expands the concept of conventional (exact) discrete-time control to create a framework for design and implementation of discrete-time control systems that include a continuous-time command function generator, so that actuator commands need not be constant between control decisions but can be more generally defined and implemented as functions that vary with time across the sample period. Because the plant/control system construct contains two linear subsystems arranged in tandem, a novel dual-kernel counter-flow convolution integral appears in the formulation. As part of the D/C system design and implementation process, numerical evaluation of that integral over the sample period is required. Three fundamentally different evaluation methods and associated algorithms are derived for the constant-coefficient case. Numerical results are matched against three available examples that have closed-form solutions.
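
    Numerical evaluation of a convolution integral over a sample period can be sketched with a scalar stand-in for the paper's matrix kernels: a trapezoidal evaluation of I = ∫₀ᵀ e^{a(T-τ)} f(τ) dτ, checked against the closed form for f(τ) = e^{bτ} (all values illustrative, not from the paper):

```python
import numpy as np

# Trapezoidal evaluation of the convolution integral over one sample
# period T, for scalar kernels exp(a*(T - tau)) and f(tau) = exp(b*tau).
a, b, T = -2.0, 1.0, 0.5
n = 1000                                   # number of subintervals
tau = np.linspace(0.0, T, n + 1)
integrand = np.exp(a * (T - tau)) * np.exp(b * tau)

# composite trapezoidal rule on the uniform grid
I_num = float(np.sum((integrand[:-1] + integrand[1:]) / 2.0) * (T / n))

# closed form: (exp(b*T) - exp(a*T)) / (b - a)
I_exact = (np.exp(b * T) - np.exp(a * T)) / (b - a)
print(I_num, I_exact)
```

In the matrix-valued case the same quadrature applies entrywise, with the scalar exponentials replaced by state-transition matrices.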

  14. A numerical algorithm for stress integration of a fiber-fiber kinetics model with Coulomb friction for connective tissue

    NASA Astrophysics Data System (ADS)

    Kojic, M.; Mijailovic, S.; Zdravkovic, N.

    Complex behaviour of connective tissue can be modeled by a fiber-fiber kinetics material model introduced in Mijailovic (1991) and Mijailovic et al. (1993). The model is based on the hypothesis of sliding of elastic fibers with Coulomb and viscous friction. The main characteristics of the model were verified experimentally in Mijailovic (1991), and a numerical procedure for one-dimensional tension was developed considering sliding as a contact problem between bodies. In this paper we propose a new and general numerical procedure for calculating the stress-strain law of the fiber-fiber kinetics model in the case of Coulomb friction. Instead of using a contact algorithm (Mijailovic 1991), which is numerically inefficient and never sufficiently reliable, here the history of sliding along the sliding length is traced numerically through a number of segments along the fiber. The algorithm is simple, efficient and reliable and provides solutions for arbitrary cyclic loading, including tension, shear, and tension and shear simultaneously, giving hysteresis loops typical of soft tissue response. The model is built into the finite element technique, providing the possibility of its application to general and real problems. Solved examples illustrate the main characteristics of the model and of the developed numerical method, as well as its applicability to practical problems. The accuracy of some results, for the simple case of uniaxial loading, is verified by comparison with analytical solutions.

  15. Numerical Simulations of Ion Cloud Dynamics

    NASA Astrophysics Data System (ADS)

    Sillitoe, Nicolas; Hilico, Laurent

    We explain how to perform accurate numerical simulations of ion cloud dynamics by discussing the relevant orders of magnitude of the characteristic times and frequencies involved in the problem and the computer requirement with respect to the ion cloud size. We then discuss integration algorithms and Coulomb force parallelization. We finally explain how to take into account collisions, cooling laser interaction and chemical reactions in a Monte Carlo approach and discuss how to use random number generators to that end.
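
    The integration-algorithm and Coulomb-force ingredients discussed above can be sketched with a velocity-Verlet step for a few ions in a harmonic pseudo-potential with pairwise Coulomb repulsion. All quantities are in arbitrary scaled units and the parameters are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# A few trapped ions: harmonic pseudo-potential plus mutual Coulomb
# repulsion, advanced with the velocity-Verlet (leapfrog) integrator.
n = 8
q_over_m = 1.0          # charge-to-mass ratio (scaled units)
k_coul   = 1.0          # Coulomb constant (scaled units)
omega2   = 1.0          # trap stiffness
dt, nsteps = 1e-3, 2000

pos = rng.normal(scale=1.0, size=(n, 3))
vel = np.zeros((n, 3))

def accel(p):
    """Trap force plus direct-summation pairwise Coulomb repulsion, O(n^2)."""
    d = p[:, None, :] - p[None, :, :]        # displacement vectors d[i,j]
    r = np.linalg.norm(d, axis=-1)
    np.fill_diagonal(r, np.inf)              # exclude self-interaction
    coul = k_coul * (d / r[..., None]**3).sum(axis=1)
    return -omega2 * p + q_over_m * coul

a = accel(pos)
for _ in range(nsteps):                      # velocity-Verlet time stepping
    pos += vel * dt + 0.5 * a * dt**2
    a_new = accel(pos)
    vel += 0.5 * (a + a_new) * dt
    a = a_new

print("rms radius:", np.sqrt((pos**2).sum(axis=1).mean()))
```

The O(n²) direct sum is the part that is typically parallelized for larger clouds; collisions, laser cooling and chemical reactions would be layered on top as stochastic (Monte Carlo) events between deterministic steps.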

  16. Integral quantification of contaminant mass flow rates in a contaminated aquifer: conditioning of the numerical inversion of concentration-time series.

    PubMed

    Herold, Maria; Ptak, Thomas; Bayer-Raich, Marti; Wendel, Thomas; Grathwohl, Peter

    2009-04-15

    A series of integral pumping tests (IPTs) has been conducted at a former gasworks site to quantify the contaminant mass flow rates and average concentration in groundwater along three control planes across the groundwater flow direction. The measured concentration-time series were analysed numerically with the help of the inversion code CSTREAM and a flow and transport model representing the highly heterogeneous aquifer. Since the control planes cover the entire downstream width of the potentially contaminated area, they allow conclusions to be drawn about the current location and spread of the contaminant plume. Previous evaluations of integral pumping tests could calculate three scenarios concerning the spread of the plume around the IPT well: (i) the plume is located to the right of the pumping well, (ii) to the left, or (iii) is distributed symmetrically around it. To create a more realistic picture of the plume position, a series of direct-push monitoring wells were installed along one control plane. The concentrations found in these wells were included in the numerical analysis to condition the numerical inversion results, and allowed the identification of a more pronounced plume centre and fringe, which supports the development of optimised remediation strategies. PMID:19167131

  17. Integral quantification of contaminant mass flow rates in a contaminated aquifer: Conditioning of the numerical inversion of concentration-time series

    NASA Astrophysics Data System (ADS)

    Herold, Maria; Ptak, Thomas; Bayer-Raich, Marti; Wendel, Thomas; Grathwohl, Peter

    2009-04-01

    A series of integral pumping tests (IPTs) has been conducted at a former gasworks site to quantify the contaminant mass flow rates and average concentration in groundwater along three control planes across the groundwater flow direction. The measured concentration-time series were analysed numerically with the help of the inversion code CSTREAM and a flow and transport model representing the highly heterogeneous aquifer. Since the control planes cover the entire downstream width of the potentially contaminated area, they allow conclusions to be drawn about the current location and spread of the contaminant plume. Previous evaluations of integral pumping tests could calculate three scenarios concerning the spread of the plume around the IPT well: (i) the plume is located to the right of the pumping well, (ii) to the left, or (iii) is distributed symmetrically around it. To create a more realistic picture of the plume position, a series of direct-push monitoring wells were installed along one control plane. The concentrations found in these wells were included in the numerical analysis to condition the numerical inversion results, and allowed the identification of a more pronounced plume centre and fringe, which supports the development of optimised remediation strategies.

  19. An integrated approach to flood hazard assessment on alluvial fans using numerical modeling, field mapping, and remote sensing

    USGS Publications Warehouse

    Pelletier, J.D.; Mayer, L.; Pearthree, P.A.; House, P.K.; Demsey, K.A.; Klawon, J.K.; Vincent, K.R.

    2005-01-01

    Millions of people in the western United States live near the dynamic, distributary channel networks of alluvial fans where flood behavior is complex and poorly constrained. Here we test a new comprehensive approach to alluvial-fan flood hazard assessment that uses four complementary methods: two-dimensional raster-based hydraulic modeling, satellite-image change detection, field-based mapping of recent flood inundation, and surficial geologic mapping. Each of these methods provides spatial detail lacking in the standard method and each provides critical information for a comprehensive assessment. Our numerical model simultaneously solves the continuity equation and Manning's equation (Chow, 1959) using an implicit numerical method. It provides a robust numerical tool for predicting flood flows using the large, high-resolution Digital Elevation Models (DEMs) necessary to resolve the numerous small channels on the typical alluvial fan. Inundation extents and flow depths of historic floods can be reconstructed with the numerical model and validated against field- and satellite-based flood maps. A probabilistic flood hazard map can also be constructed by modeling multiple flood events with a range of specified discharges. This map can be used in conjunction with a surficial geologic map to further refine floodplain delineation on fans. To test the accuracy of the numerical model, we compared model predictions of flood inundation and flow depths against field- and satellite-based flood maps for two recent extreme events on the southern Tortolita and Harquahala piedmonts in Arizona. Model predictions match the field- and satellite-based maps closely. Probabilistic flood hazard maps based on the 10 yr, 100 yr, and maximum floods were also constructed for the study areas using stream gage records and paleoflood deposits. The resulting maps predict spatially complex flood hazards that strongly reflect small-scale topography and are consistent with surficial geology. In
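
    Manning's equation, one of the two relations the hydraulic model solves, is simple enough to evaluate directly. A minimal sketch for a wide, shallow channel where the hydraulic radius is approximated by the flow depth (illustrative values, not from the study):

```python
import numpy as np

def manning_velocity(depth, slope, n_rough):
    """Manning's equation v = R^(2/3) * S^(1/2) / n in SI units,
    with hydraulic radius R approximated by depth for a wide channel."""
    return depth**(2.0 / 3.0) * np.sqrt(slope) / n_rough

# Illustrative alluvial-fan sheet flow: 0.3 m deep, 1% slope, n = 0.035
v = manning_velocity(0.3, 0.01, 0.035)   # mean velocity [m/s]
q = v * 0.3                              # unit discharge per metre width [m^2/s]
print(f"v = {v:.2f} m/s, q = {q:.3f} m^2/s")
```

A raster model applies this cell by cell, coupled to the continuity equation to route flow across the DEM.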

  20. Numerical Asymptotic Solutions Of Differential Equations

    NASA Technical Reports Server (NTRS)

    Thurston, Gaylen A.

    1992-01-01

    Numerical algorithms are derived and compared with classical analytical methods. In this method, asymptotic series expansions are replaced with integrals that are evaluated numerically. The resulting numerical solutions retain linear independence, the main advantage of asymptotic solutions.

  1. A control volume method on an icosahedral grid for numerical integration of the shallow-water equations on the sphere

    SciTech Connect

    Chern, I-Liang

    1994-08-01

    Two versions of a control volume method on a symmetrized icosahedral grid are proposed for solving the shallow-water equations on a sphere. One version expresses the equations in the 3-D Cartesian coordinate system, while the other expresses the equations in the northern/southern polar stereographic coordinate systems. The pole problem is avoided in both versions because of these expressions and the quasi-homogeneity of the icosahedral grid. Truncation errors and convergence of the numerical gradient and divergence operators associated with this method are studied. A convergence test for a steady zonal flow is demonstrated. Several simulations of Rossby-Haurwitz waves with various wavenumbers are also performed.

  2. Numerical integration of gravitational field for general three-dimensional objects and its application to gravitational study of grand design spiral arm structure

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2016-08-01

    We present a method to integrate the gravitational field for general three-dimensional objects. By adopting the spherical polar coordinates centered at the evaluation point as the integration variables, we numerically compute the volume integral representation of the gravitational potential and of the acceleration vector. The variable transformation completely removes the algebraic singularities of the original integrals. The comparison with exact solutions reveals around 15-digit accuracy of the new method. Meanwhile, 6-digit accuracy of the integrated gravitational field is realized by around 10^6 evaluations of the integrand per evaluation point, which costs at most a few seconds on a PC with an Intel Core i7-4600U CPU running at a 2.10 GHz clock. By using the new method, we show the gravitational field of a grand design spiral arm structure as an example. The computed gravitational field shows not only spiral-shaped details but also a global feature composed of a thick oblate spheroid and a thin disc. The developed method is directly applicable to electromagnetic field computation by means of Coulomb's law, the Biot-Savart law, and their retarded extensions. Sample FORTRAN 90 programs and test results are electronically available.
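    The key idea above is that centering spherical coordinates on the evaluation point makes the 1/r kernel cancel the r^2 Jacobian, leaving a smooth integrand. A hedged sketch of that idea (not the paper's FORTRAN 90 code; the function name and midpoint-rule quadrature are our choices) for a uniform sphere, where the exact external potential -GM/d is known:

```python
import math

def potential_uniform_sphere(d, a, rho, G=1.0, n_theta=2000, n_r=200):
    """Potential at a point a distance d (> a) from the center of a uniform
    sphere of radius a, as a volume integral in spherical polar coordinates
    centered at the evaluation point.  The 1/r kernel cancels the r^2
    Jacobian, so the integrand rho * r * sin(theta) is nonsingular."""
    theta_max = math.asin(a / d)          # rays beyond this angle miss the sphere
    total = 0.0
    dtheta = theta_max / n_theta
    for i in range(n_theta):
        theta = (i + 0.5) * dtheta        # midpoint rule in theta
        s, c = math.sin(theta), math.cos(theta)
        disc = math.sqrt(a * a - d * d * s * s)
        r_in, r_out = d * c - disc, d * c + disc   # ray entry/exit radii
        dr = (r_out - r_in) / n_r
        for j in range(n_r):              # midpoint rule in r
            r = r_in + (j + 0.5) * dr
            total += rho * r * s * dr * dtheta
    return -G * 2.0 * math.pi * total     # azimuthal integral gives 2*pi

# comparison against the exact external potential -G*M/d
a, d, rho = 1.0, 2.5, 3.0
M = 4.0 / 3.0 * math.pi * a**3 * rho
approx = potential_uniform_sphere(d, a, rho)
exact = -M / d
```

    Even this simple midpoint quadrature recovers several digits because the transformed integrand is smooth; the paper's Double Exponential and Gauss-Legendre machinery pushes this much further.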

  3. Numerical computation of complex multi-body Navier-Stokes flows with applications for the integrated Space Shuttle launch vehicle

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    1993-01-01

    An enhanced grid system for the Space Shuttle Orbiter was built by integrating CAD definitions from several sources and then generating the surface and volume grids. The new grid system contains geometric components not modeled previously plus significant enhancements on geometry that had been modeled in the old grid system. The new orbiter grids were then integrated with new grids for the rest of the launch vehicle. Enhancements were made to the hyperbolic grid generator HYPGEN, and new tools were developed for grid projection, manipulation, and modification; Cartesian box grid and far-field grid generation; and post-processing of flow solver data.

  4. A numerical method for integrating the kinetic equations of droplet spectra evolution by condensation/evaporation and by coalescence/breakup processes

    NASA Technical Reports Server (NTRS)

    Emukashvily, I. M.

    1982-01-01

    An extension of the method of moments is developed for the numerical integration of the kinetic equations of droplet spectra evolution by condensation/evaporation and by coalescence/breakup processes. The number density function n_k(x,t) in each separate droplet packet between droplet mass grid points (x_k, x_{k+1}) is represented by an expansion in orthogonal polynomials with a given weighting function. In this way droplet number concentrations, liquid water contents and other moments in each droplet packet are conserved, and the problem of solving the kinetic equations is replaced by one of solving a set of coupled differential equations for the number density function moments. The method is tested against analytic solutions of the corresponding kinetic equations. Numerical results are obtained for different coalescence/breakup and condensation/evaporation kernels and for different initial droplet spectra. Also droplet mass grid intervals, weighting functions, and time steps are varied.

  5. Solar Radiation and the UV Index: An Application of Numerical Integration, Trigonometric Functions, Online Education and the Modelling Process

    ERIC Educational Resources Information Center

    Downs, Nathan; Parisi, Alfio V.; Galligan, Linda; Turner, Joanna; Amar, Abdurazaq; King, Rachel; Ultra, Filipina; Butler, Harry

    2016-01-01

    A short series of practical classroom mathematics activities employing the use of a large and publicly accessible scientific data set are presented for use by students in years 9 and 10. The activities introduce and build understanding of integral calculus and trigonometric functions through the presentation of practical problem solving that…

  6. Numerical study identifying the factors causing the significant underestimation of the specific discharge estimated using the modified integral pumping test method in a laboratory experiment

    NASA Astrophysics Data System (ADS)

    Sun, Kerang

    2015-09-01

    A three-dimensional finite element model is constructed to simulate the experimental conditions presented in a paper published in this journal [Goltz et al., 2009. Validation of two innovative methods to measure contaminant mass flux in groundwater. Journal of Contaminant Hydrology 106 (2009) 51-61], where the modified integral pumping test (MIPT) method was found to significantly underestimate the specific discharge in an artificial aquifer. The numerical model closely replicates the experimental configuration with explicit representation of the pumping well column and skin, allowing the model to simulate the wellbore flow in the pumping well as an integral part of the porous media flow in the aquifer using the equivalent hydraulic conductivity approach. The equivalent hydraulic conductivity is used to account for head losses due to friction within the wellbore of the pumping well. Applying the MIPT method to the model-simulated piezometric heads resulted in a specific discharge that underestimates the true specific discharge in the experimental aquifer by 18.8%, compared with the 57% underestimation of mass flux in the experiment reported by Goltz et al. (2009). An alternative simulation shows that the numerical model is capable of approximately replicating the experimental results when the equivalent hydraulic conductivity is reduced by an order of magnitude, suggesting that the accuracy of the MIPT estimation could be improved by expanding the physical meaning of the equivalent hydraulic conductivity to account for other factors, such as orifice losses, in addition to frictional losses within the wellbore. Numerical experiments also show that when applying the MIPT method to estimate hydraulic parameters, use of the depth-integrated piezometric head instead of the head near the pump intake can reduce the estimation error resulting from well losses, but not the error associated with the well not being fully screened.

  7. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain in accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
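    For context, the classical second-order-limited scheme that this work improves upon can be sketched as follows. This is a standard Fritsch-Carlson-style slope limiter (harmonic-mean slopes), not Huynh's higher-order algorithm; function names are ours:

```python
import bisect

def monotone_cubic_slopes(x, y):
    """Tangent slopes for a monotone piecewise-cubic Hermite interpolant,
    using the harmonic mean of adjacent secant slopes (Fritsch-Carlson
    style).  This is the classical limited scheme, accuracy drops to
    second order near extrema, which is what the paper improves."""
    n = len(x)
    d = [(y[i + 1] - y[i]) / (x[i + 1] - x[i]) for i in range(n - 1)]
    m = [0.0] * n
    m[0], m[-1] = d[0], d[-1]
    for i in range(1, n - 1):
        if d[i - 1] * d[i] <= 0:
            m[i] = 0.0      # local extremum: flat tangent preserves monotonicity
        else:
            m[i] = 2.0 * d[i - 1] * d[i] / (d[i - 1] + d[i])
    return m

def eval_hermite(x, y, m, xq):
    """Evaluate the cubic Hermite interpolant with tangents m at xq."""
    i = min(max(bisect.bisect_right(x, xq) - 1, 0), len(x) - 2)
    h = x[i + 1] - x[i]
    t = (xq - x[i]) / h
    h00 = (1 + 2 * t) * (1 - t) ** 2
    h10 = t * (1 - t) ** 2
    h01 = t * t * (3 - 2 * t)
    h11 = t * t * (t - 1)
    return h00 * y[i] + h10 * h * m[i] + h01 * y[i + 1] + h11 * h * m[i + 1]
```

    With these slopes the interpolant never overshoots monotone data, the trade-off against accuracy that Huynh's geometric framework relaxes.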

  8. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
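    The "grid points per wavelength" trade-off above is easy to demonstrate with generic central differences (a hedged illustration only; these are standard stencils, not the paper's aeroacoustics algorithms):

```python
import math

def deriv_error(order, ppw):
    """Maximum error of a central finite difference applied to sin(x),
    sampled at ppw grid points per wavelength."""
    n = 200
    h = 2 * math.pi / ppw
    xs = [i * h for i in range(-3, n + 4)]    # 3 ghost points each side
    f = [math.sin(x) for x in xs]
    err = 0.0
    for i in range(3, n + 3):
        if order == 2:
            d = (f[i + 1] - f[i - 1]) / (2 * h)
        else:  # standard 6th-order central difference
            d = (45 * (f[i + 1] - f[i - 1])
                 - 9 * (f[i + 2] - f[i - 2])
                 + (f[i + 3] - f[i - 3])) / (60 * h)
        err = max(err, abs(d - math.cos(xs[i])))
    return err
```

    At eight points per wavelength the sixth-order stencil is roughly two orders of magnitude more accurate than the second-order one, which is why high-order schemes can sustain long propagation distances on coarse grids.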

  9. Definition of a geometric model for landslide numerical modeling from the integration of multi-source geophysical data.

    NASA Astrophysics Data System (ADS)

    Gance, Julien; Bernardie, Séverine; Grandjean, Gilles; Malet, Jean-Philippe

    2014-05-01

    Landslide hazard can be assessed through numerical hydro-mechanical models. These methods require different input data, such as a geometric model, rheological constitutive laws and associated hydro-mechanical parameters, and boundary conditions. The objective of this study is to fill the gap existing between the geophysical and engineering communities, a gap that prevents the engineering community from using the full information available in geophysical imagery. A landslide geometric model contains information on the geometry and extent of the different geotechnical units of the landslide, and describes the layering and the discontinuities. It is generally drawn from pointwise geotechnical tests, using interpolation, or better, from the combined use of a geotechnical test and the iso-values of geophysical tomographies. In this context, we propose to use a multi-source geophysical data fusion strategy as an aid for the construction of landslide geometric models. Based on a fuzzy logic data fusion method, we propose to use different geophysical tomographies and their associated uncertainty and sensitivity tomograms to design a "probable" geometric model. This strategy is tested on a profile of the Super-Sauze landslide using P-wave velocity, P-wave attenuation and electrical resistivity tomography. We construct a probable model and a true model for numerical modeling. Using basic elastic constitutive laws, we show that the model geometry is sufficiently detailed to simulate the complex surface displacement pattern.

  10. PTMSearchPlus: Software Tool for Automated Protein Identification and Post-Translational Modification Characterization by Integrating Accurate Intact Protein Mass and Bottom-Up Mass Spectrometric Data Searches

    SciTech Connect

    Kertesz, Vilmos; Connelly, Heather M; Erickson, Brian K; Hettich, Robert {Bob} L

    2009-01-01

    PTMSearchPlus is a software tool for the automated integration of accurate intact protein mass (AIPM) and bottom-up (BU) mass spectra searches/data in order to both confidently identify the intact proteins and to characterize their post-translational modifications (PTMs). The development of PTMSearchPlus was motivated by the desire to effectively integrate high-resolution intact protein molecular masses with bottom-up peptide MS/MS data. PTMSearchPlus requires as input both intact protein and proteolytic peptide mass spectra collected from the same protein mixture, a FASTA protein database, and a selection of possible PTMs, the types and ranges of which can be specified. The output of PTMSearchPlus is a list of intact and modified proteins matching the AIPM data concomitant with their respective peptides found by the BU search. This list also contains protein and peptide sequence coverage information, scores, etc. that can be used for further evaluation or refiltering of the results. Corresponding and annotated AIPM and BU mass spectra are also displayed for visual inspection when a listed protein or a peptide is selected. These and other controls ensure that the user can manually evaluate, modify (e.g., remove obvious false positives, low quality spectra etc.), and save the results of the automated search if necessary. Driven by the exponential growth in the number of possible peptide candidates in a BU search when multiple PTMs are probed, the advantages on search speed by limiting the total number of possible PTMs on a peptide in the BU search or by performing an AIPM predicted BU search are also discussed in addition to the integration approach. The features of PTMSearchPlus are demonstrated using both a protein standard mixture and a complex protein mixture from Escherichia coli. Experimental data revealed a unique advantage of coupling AIPM and the BU data sets that is mutually beneficial for both approaches. 
Namely, AIPM data can confirm that no PTM peptides

  12. Integration of bed characteristics, geochemical tracers, current measurements, and numerical modeling for assessing the provenance of beach sand in the San Francisco Bay Coastal System

    USGS Publications Warehouse

    Barnard, Patrick L.; Foxgrover, Amy; Elias, Edwin P.L.; Erikson, Li H.; Hein, James; McGann, Mary; Mizell, Kira; Rosenbauer, Robert J.; Swarzenski, Peter W.; Takesue, Renee K.; Wong, Florence L.; Woodrow, Don

    2013-01-01

    Over 150 million m3 of sand-sized sediment has disappeared from the central region of the San Francisco Bay Coastal System during the last half century. This enormous loss may reflect numerous anthropogenic influences, such as watershed damming, bay-fill development, aggregate mining, and dredging. The reduction in Bay sediment also appears to be linked to a reduction in sediment supply and recent widespread erosion of adjacent beaches, wetlands, and submarine environments. A unique, multi-faceted provenance study was performed to definitively establish the primary sources, sinks, and transport pathways of beach-sized sand in the region, thereby identifying the activities and processes that directly limit supply to the outer coast. This integrative program is based on comprehensive surficial sediment sampling of the San Francisco Bay Coastal System, including the seabed, Bay floor, area beaches, adjacent rock units, and major drainages. Analyses of sample morphometrics and biological composition (e.g., Foraminifera) were then integrated with a suite of tracers including 87Sr/86Sr and 143Nd/144Nd isotopes, rare earth elements, semi-quantitative X-ray diffraction mineralogy, and heavy minerals, and with process-based numerical modeling, in situ current measurements, and bedform asymmetry to robustly determine the provenance of beach-sized sand in the region.

  14. Volume calculation of subsurface structures and traps in hydrocarbon exploration — a comparison between numerical integration and cell based models

    NASA Astrophysics Data System (ADS)

    Slavinić, Petra; Cvetković, Marko

    2016-01-01

    The volume calculation of geological structures is one of the primary goals of interest when dealing with exploration or production of oil and gas in general. Most of these calculations are done using advanced software packages, but the mathematical workflow (equations) still has to be used and understood for the initial volume calculation process. In this paper a comparison is given between bulk volume calculations of geological structures using the trapezoidal and Simpson's rules and the ones obtained from cell-based models. The comparison is illustrated with four models: a dome (half of a ball/sphere), an elongated anticline, a stratigraphic trap due to lateral facies change, and a faulted anticline trap. Results show that Simpson's and trapezoidal rules give a very accurate volume calculation even with a few inputs (isopach areas as ordinates). A test of cell-based model volume calculation precision against grid resolution is presented for various cases. For high accuracy, less than 1% error from coarsening, a cell area has to be 0.0008% of the reservoir area.
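    The trapezoidal/Simpson workflow above integrates cross-section (isopach) areas over depth. A minimal sketch for the dome case, where the cross-section area is quadratic in depth so Simpson's rule is exact while the trapezoidal rule carries a small coarsening error (the hemisphere radius and ordinate count are illustrative values, not the paper's data):

```python
import math

def trapezoid(areas, h):
    """Composite trapezoidal rule over equally spaced cross-section areas."""
    return h * (areas[0] / 2 + sum(areas[1:-1]) + areas[-1] / 2)

def simpson(areas, h):
    """Composite Simpson's rule; requires an odd number of ordinates."""
    assert len(areas) % 2 == 1
    s = areas[0] + areas[-1] + 4 * sum(areas[1:-1:2]) + 2 * sum(areas[2:-2:2])
    return h * s / 3

R = 100.0                       # dome (hemisphere) radius
n = 4                           # only 5 isopach ordinates
h = R / n
areas = [math.pi * (R * R - z * z) for z in (i * h for i in range(n + 1))]
exact = 2.0 / 3.0 * math.pi * R**3   # exact hemisphere volume
```

    With just five ordinates, Simpson's rule matches the exact volume to roundoff (the integrand is quadratic), while the trapezoidal estimate is off by about 1.6%, consistent with the paper's finding that a few isopach ordinates already give very accurate volumes.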

  15. Performance of heterogeneous computing with graphics processing unit and many integrated core for Hartree potential calculations on a numerical grid.

    PubMed

    Choi, Sunghwan; Kwon, Oh-Kyoung; Kim, Jaewook; Kim, Woo Youn

    2016-09-15

    We investigated the performance of heterogeneous computing with graphics processing units (GPUs) and many integrated core (MIC) with 20 CPU cores (20×CPU). As a practical example toward large scale electronic structure calculations using grid-based methods, we evaluated the Hartree potentials of silver nanoparticles with various sizes (3.1, 3.7, 4.9, 6.1, and 6.9 nm) via a direct integral method supported by the sinc basis set. The so-called work stealing scheduler was used for efficient heterogeneous computing via the balanced dynamic distribution of workloads between all processors on a given architecture without any prior information on their individual performances. 20×CPU + 1GPU was up to ∼1.5 and ∼3.1 times faster than 1GPU and 20×CPU, respectively. 20×CPU + 2GPU was ∼4.3 times faster than 20×CPU. The performance enhancement by CPU + MIC was considerably lower than expected because of the large initialization overhead of MIC, although its theoretical performance is similar to that of CPU + GPU. © 2016 Wiley Periodicals, Inc. PMID:27431905

  17. Towards an integrated numerical simulator for crack-seal vein microstructure: Coupling phase-field with the Discrete Element Method

    NASA Astrophysics Data System (ADS)

    Virgo, Simon; Ankit, Kumar; Nestler, Britta; Urai, Janos L.

    2016-04-01

    Crack-seal veins form in a complex interplay of coupled thermal, hydraulic, mechanical and chemical processes. Their formation and cyclic growth involves brittle fracturing and dilatancy, phases of increased fluid flow, and the growth of crystals that fill the voids and reestablish the mechanical strength. Existing numerical models of vein formation focus on selected aspects of the coupled process. To date, no model exists that is able to use a realistic representation of the fracturing AND sealing processes simultaneously. To address this challenge, we propose the bidirectional coupling of two numerical methods that have proven very powerful for modeling the fundamental processes acting in crack-seal systems: phase-field and the Discrete Element Method (DEM). The phase-field method was recently successfully extended to model the precipitation of quartz crystals from an aqueous solution and applied to model the sealing of a vein over multiple opening events (Ankit et al., 2013; Ankit et al., 2015a; Ankit et al., 2015b). The advantage over former, purely kinematic approaches is that in phase-field, crystal growth is modeled based on thermodynamic and kinetic principles. Different driving forces for microstructure evolution, such as chemical bulk free energy, interfacial energy and elastic strain energy, and different transport processes, such as mass diffusion and advection, can be coupled, and the effect on the evolution process can be studied in 3D. The Discrete Element Method has already been used in several studies to model the fracturing of rocks and the incremental growth of veins by repeated fracturing (Virgo et al., 2013; Virgo et al., 2014). Materials in DEM are represented by volumes of packed spherical particles, and the response of the material to stress is modeled by interaction of the particles with their nearest neighbours. For rocks, in 3D, the method provides a realistic brittle failure behaviour. Exchange routines are being developed that

  18. Predictive Modeling of Chemical Hazard by Integrating Numerical Descriptors of Chemical Structures and Short-term Toxicity Assay Data

    PubMed Central

    Rusyn, Ivan; Sedykh, Alexander; Guyton, Kathryn Z.; Tropsha, Alexander

    2012-01-01

    Quantitative structure-activity relationship (QSAR) models are widely used for in silico prediction of in vivo toxicity of drug candidates or environmental chemicals, adding value to candidate selection in drug development or in a search for less hazardous and more sustainable alternatives for chemicals in commerce. The development of traditional QSAR models is enabled by numerical descriptors representing the inherent chemical properties that can be easily defined for any number of molecules; however, traditional QSAR models often have limited predictive power due to the lack of data and complexity of in vivo endpoints. Although it has been indeed difficult to obtain experimentally derived toxicity data on a large number of chemicals in the past, the results of quantitative in vitro screening of thousands of environmental chemicals in hundreds of experimental systems are now available and continue to accumulate. In addition, publicly accessible toxicogenomics data collected on hundreds of chemicals provide another dimension of molecular information that is potentially useful for predictive toxicity modeling. These new characteristics of molecular bioactivity arising from short-term biological assays, i.e., in vitro screening and/or in vivo toxicogenomics data can now be exploited in combination with chemical structural information to generate hybrid QSAR–like quantitative models to predict human toxicity and carcinogenicity. Using several case studies, we illustrate the benefits of a hybrid modeling approach, namely improvements in the accuracy of models, enhanced interpretation of the most predictive features, and expanded applicability domain for wider chemical space coverage. PMID:22387746

  19. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  20. A numerical model of continental-scale topographic evolution integrating thin sheet tectonics, river transport, and orographic precipitation

    NASA Astrophysics Data System (ADS)

    Garcia-Castellanos, Daniel; Jimenez-Munt, Ivone

    2014-05-01

    How much do erosion and sedimentation at the crust's surface influence the patterns and distribution of tectonic deformation? This question has mostly been addressed from a numerical modelling perspective, at scales ranging from local to orogenic. Here we present a model that aims at constraining this phenomenon at the continental scale. With this purpose, we couple a thin-sheet viscous model of continental deformation with a stream-power surface transport model. The model also incorporates flexural isostatic compensation, which permits the formation of large sedimentary foreland basins, and a precipitation model that reproduces basic climatic effects such as continentality, orographic rainfall and rain shadow. We quantify the feedbacks between these four processes in a synthetic scenario inspired by the India-Asia collision. The model reproduces first-order characteristics of the growth of the Tibetan Plateau as a result of the Indian indentation. A large intramountain basin (comparable to the Tarim Basin) develops when predefining a hard inherited area in the undeformed foreland (Asia). The amount of sediment trapped in it is very sensitive to climatic parameters, particularly to evaporation, because it crucially determines its endorheic/exorheic drainage. We identify some degree of feedback between the deep and surface processes, leading locally to a <20% increase in deformation rates if orographic precipitation is accounted for (relative to a reference model with evenly-distributed precipitation). This enhanced thickening of the crust takes place particularly in areas of concentrated precipitation and steep slope, i.e., at the upwind flank of the growing plateau. This effect is particularly enhanced at the corners of the indenter (syntaxes). We hypothesize that this may provide clues for better understanding the mechanisms underlying the intriguing tectonic aneurysms documented in the syntaxes of the Himalayas.

  1. NMR signal for particles diffusing under potentials: From path integrals and numerical methods to a model of diffusion anisotropy

    NASA Astrophysics Data System (ADS)

    Yolcu, Cem; Memiç, Muhammet; Şimşek, Kadir; Westin, Carl-Fredrik; Özarslan, Evren

    2016-05-01

    We study the influence of diffusion on NMR experiments when the molecules undergo random motion under the influence of a force field and place special emphasis on parabolic (Hookean) potentials. To this end, the problem is studied using path integral methods. Explicit relationships are derived for commonly employed gradient waveforms involving pulsed and oscillating gradients. The Bloch-Torrey equation, describing the temporal evolution of magnetization, is modified by incorporating potentials. A general solution to this equation is obtained for the case of parabolic potential by adopting the multiple correlation function (MCF) formalism, which has been used in the past to quantify the effects of restricted diffusion. Both analytical and MCF results were found to be in agreement with random walk simulations. A multidimensional formulation of the problem is introduced that leads to a new characterization of diffusion anisotropy. Unlike the case of traditional methods that employ a diffusion tensor, anisotropy originates from the tensorial force constant, and bulk diffusivity is retained in the formulation. Our findings suggest that some features of the NMR signal that have traditionally been attributed to restricted diffusion are accommodated by the Hookean model. Under certain conditions, the formalism can be envisioned to provide a viable approximation to the mathematically more challenging restricted diffusion problems.

  2. NMR signal for particles diffusing under potentials: From path integrals and numerical methods to a model of diffusion anisotropy.

    PubMed

    Yolcu, Cem; Memiç, Muhammet; Şimşek, Kadir; Westin, Carl-Fredrik; Özarslan, Evren

    2016-05-01

    We study the influence of diffusion on NMR experiments when the molecules undergo random motion under the influence of a force field and place special emphasis on parabolic (Hookean) potentials. To this end, the problem is studied using path integral methods. Explicit relationships are derived for commonly employed gradient waveforms involving pulsed and oscillating gradients. The Bloch-Torrey equation, describing the temporal evolution of magnetization, is modified by incorporating potentials. A general solution to this equation is obtained for the case of parabolic potential by adopting the multiple correlation function (MCF) formalism, which has been used in the past to quantify the effects of restricted diffusion. Both analytical and MCF results were found to be in agreement with random walk simulations. A multidimensional formulation of the problem is introduced that leads to a new characterization of diffusion anisotropy. Unlike the case of traditional methods that employ a diffusion tensor, anisotropy originates from the tensorial force constant, and bulk diffusivity is retained in the formulation. Our findings suggest that some features of the NMR signal that have traditionally been attributed to restricted diffusion are accommodated by the Hookean model. Under certain conditions, the formalism can be envisioned to provide a viable approximation to the mathematically more challenging restricted diffusion problems. PMID:27300946

  3. The astronomical rhythm of Late-Devonian climate change: an integration of cyclostratigraphy and numerical climate modeling

    NASA Astrophysics Data System (ADS)

    De Vleeschouwer, David; Rakocinski, Michal; Racki, Grzegorz; Bond, David; Sobien, Katarzyna; Bounceur, Nabila; Crucifix, Michel; Claeys, Philippe

    2013-04-01

    Rhythmical alternations between limestone and shales or marls characterize the famous Kowala section, Holy Cross Mountains, Poland. Two intervals of this section were studied for evidence of orbital cyclostratigraphy. The oldest interval spans the Frasnian - Famennian (Late Devonian) boundary, deposited under one of the hottest greenhouse climates of the Phanerozoic. The youngest interval encompasses the Devonian - Carboniferous (D-C) boundary, a pivotal moment in Earth's climatic history that saw a transition from greenhouse to icehouse. In both intervals, a clear eccentricity imprint can be distinguished. However, in this abstract, we will focus on the Famennian - Tournaisian (D-C) interval. This interval reveals eccentricity and precession-related lithological variations. Precession-related alternations clearly demonstrate grouping into 100-kyr bundles. The Famennian part of this interval is characterized by several distinctive anoxic black shales, including the Annulata, Dasberg and Hangenberg shales. Our high-resolution cyclostratigraphic framework indicates that those shales were deposited at 2.2 and 2.4 Myr intervals respectively. These durations strongly suggest a link between the long-period (~2.4 Myr) eccentricity cycle and the development of the Annulata, Dasberg and Hangenberg anoxic shales. It is assumed that these black shales form under transgressive conditions, when extremely high eccentricity promoted the collapse of small continental ice-sheets at the most austral latitudes of western Gondwana. Indeed, numerical GCM modeling (HadSM3) of the Late Devonian climate, suggests that rapid melting and ice sheet collapse is triggered during maximal austral summer insolation when eccentricity is high and the perihelion is reached in December. Under this particular astronomical configuration, the global climate is optimal and thus sea-levels are high. Moreover, the global hydrological cycle is enhanced, allowing for more intense rainfall and monsoonal

  4. Graphical arterial blood gas visualization tool supports rapid and accurate data interpretation.

    PubMed

    Doig, Alexa K; Albert, Robert W; Syroid, Noah D; Moon, Shaun; Agutter, Jim A

    2011-04-01

    A visualization tool that integrates numeric information from an arterial blood gas report with novel graphics was designed for the purpose of promoting rapid and accurate interpretation of acid-base data. A study compared data interpretation performance when arterial blood gas results were presented in a traditional numerical list versus the graphical visualization tool. Critical-care nurses (n = 15) and nursing students (n = 15) were significantly more accurate identifying acid-base states and assessing trends in acid-base data when using the graphical visualization tool. Critical-care nurses and nursing students using traditional numerical data had an average accuracy of 69% and 74%, respectively. Using the visualization tool, average accuracy improved to 83% for critical-care nurses and 93% for nursing students. Analysis of response times demonstrated that the visualization tool might help nurses overcome the "speed/accuracy trade-off" during high-stress situations when rapid decisions must be rendered. Perceived mental workload was significantly reduced for nursing students when they used the graphical visualization tool. In this study, the effects of implementing the graphical visualization were greater for nursing students than for critical-care nurses, which may indicate that the experienced nurses needed more training and use of the new technology prior to testing to show similar gains. Results of the objective and subjective evaluations support the integration of this graphical visualization tool into clinical environments that require accurate and timely interpretation of arterial blood gas data.

  5. Important Nearby Galaxies without Accurate Distances

    NASA Astrophysics Data System (ADS)

    McQuinn, Kristen

    2014-10-01

    The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis from which we interpret the distant universe, and the SINGS sample represents the best-studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous conflicting distance estimates, resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.

  6. The Cenozoic fold-and-thrust belt of Eastern Sardinia: Evidences from the integration of field data with numerically balanced geological cross section

    NASA Astrophysics Data System (ADS)

    Arragoni, S.; Maggi, M.; Cianfarra, P.; Salvini, F.

    2016-06-01

    Newly collected structural data in Eastern Sardinia (Italy), integrated with numerical techniques, led to the reconstruction of a 2-D admissible and balanced model revealing the presence of a widespread Cenozoic fold-and-thrust belt. The model was achieved with the FORC software, obtaining a 3-D (2-D + time) numerical reconstruction of the continuous evolution of the structure through time. The Mesozoic carbonate units of Eastern Sardinia and their basement present a fold-and-thrust tectonic setting, with a westward direction of tectonic transport (referred to the present-day coordinates). The tectonic style of the upper levels is thin-skinned, with flat sectors prevailing over ramps and younger-on-older thrusts. Three regional tectonic units are present, bounded by two regional thrusts. Strike-slip faults overprint the fold-and-thrust belt and developed during the Sardinia-Corsica Block rotation along the strike of the preexisting fault ramps, not affecting the numerical section balancing. This fold-and-thrust belt represents the southward continuation of the Alpine Corsica collisional chain and the missing link between the Alpine Chain and the Calabria-Peloritani Block. Relative ages relate its evolution to the meso-Alpine event (Eocene-Oligocene times), prior to the opening of the Tyrrhenian Sea (Tortonian). Results fill a gap of information about the geodynamic evolution of the European margin in the Central Mediterranean, between Corsica and the Calabria-Peloritani Block, and imply the presence of remnants of this double-verging belt, missing in the Southern Tyrrhenian basin, within the Southern Apennine chain. The methodology used proved effective for constraining balanced cross sections even in areas lacking exposures of the large-scale structures, as is the case in Eastern Sardinia.

  7. On the exploitation of Armstrong-Frederik type nonlinear kinematic hardening in the numerical integration and finite-element implementation of pressure dependent plasticity models

    NASA Astrophysics Data System (ADS)

    Metzger, Mario; Seifert, Thomas

    2013-09-01

    In this paper, an unconditionally stable algorithm for the numerical integration and finite-element implementation of a class of pressure dependent plasticity models with nonlinear isotropic and kinematic hardening is presented. Existing algorithms are improved in the sense that the number of equations to be solved iteratively is significantly reduced. This is achieved by exploiting the structure of Armstrong-Frederik-type kinematic hardening laws. The consistent material tangent is derived analytically and compared to the numerically computed tangent in order to validate the implementation. The performance of the new algorithm is compared to that of an existing one that does not consider the possibility of reducing the number of unknowns to be iterated. The algorithm is used to implement a time and temperature dependent cast iron plasticity model, which is based on the pressure dependent Gurson model, in the finite-element program ABAQUS. The implementation is applied to compute stresses and strains in a large-scale finite-element model of a three-cylinder engine block. This computation proves the applicability of the algorithm in industrial practice, which is of interest in applied sciences.
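    The reduction the abstract describes, collapsing the implicit update to a small number of unknowns, can be illustrated in 1D, where backward-Euler integration of an Armstrong-Frederick hardening law reduces to a single scalar consistency equation in the plastic multiplier. Everything below (the 1D setting, the parameter values, and bisection in place of the paper's treatment) is an illustrative assumption, not the authors' algorithm.

```python
def return_map_1d(eps, eps_p, alpha, E=200e3, sy=250.0, C=20e3, gamma=100.0):
    """One backward-Euler return-mapping step for 1D plasticity with
    Armstrong-Frederick kinematic hardening, reduced to a single scalar
    equation in the plastic multiplier dl (illustrative parameter values)."""
    s_tr = E * (eps - eps_p)            # trial stress
    xi_tr = s_tr - alpha                # trial relative stress
    f_tr = abs(xi_tr) - sy              # trial yield function
    if f_tr <= 0.0:
        return s_tr, eps_p, alpha       # elastic step: nothing to solve
    s = 1.0 if xi_tr > 0 else -1.0      # flow direction, frozen at trial state

    def g(dl):
        # consistency residual at the end of the step; the AF backstress
        # update is resolved in closed form, leaving only dl unknown
        a_new = (alpha + C * dl * s) / (1.0 + gamma * dl)
        return s * (s_tr - E * dl * s - a_new) - sy

    lo, hi = 0.0, f_tr / E              # g(lo) = f_tr > 0
    while g(hi) > 0.0:                  # expand until the root is bracketed
        hi *= 2.0
    for _ in range(60):                 # plain bisection stands in for Newton
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    dl = 0.5 * (lo + hi)
    alpha_new = (alpha + C * dl * s) / (1.0 + gamma * dl)
    sigma = s_tr - E * dl * s
    return sigma, eps_p + dl * s, alpha_new
```

    The point mirrored here is the one the abstract makes: by exploiting the structure of the Armstrong-Frederick law, the backstress update drops out analytically and only the plastic multiplier remains to be iterated.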

  8. Accurate Optical Reference Catalogs

    NASA Astrophysics Data System (ADS)

    Zacharias, N.

    2006-08-01

    Current and near future all-sky astrometric catalogs on the ICRF are reviewed with the emphasis on reference star data at optical wavelengths for user applications. The standard error of a Hipparcos Catalogue star position is now about 15 mas per coordinate. For the Tycho-2 data it is typically 20 to 100 mas, depending on magnitude. The USNO CCD Astrograph Catalog (UCAC) observing program was completed in 2004 and reductions toward the final UCAC3 release are in progress. This all-sky reference catalogue will have positional errors of 15 to 70 mas for stars in the 10 to 16 mag range, with a high degree of completeness. Proper motions for the about 60 million UCAC stars will be derived by combining UCAC astrometry with available early epoch data, including yet unpublished scans of the complete set of AGK2, Hamburg Zone astrograph and USNO Black Birch programs. Accurate positional and proper motion data are combined in the Naval Observatory Merged Astrometric Dataset (NOMAD) which includes Hipparcos, Tycho-2, UCAC2, USNO-B1, NPM+SPM plate scan data for astrometry, and is supplemented by multi-band optical photometry as well as 2MASS near infrared photometry. The Milli-Arcsecond Pathfinder Survey (MAPS) mission is currently being planned at USNO. This is a micro-satellite to obtain 1 mas positions, parallaxes, and 1 mas/yr proper motions for all bright stars down to about 15th magnitude. This program will be supplemented by a ground-based program to reach 18th magnitude on the 5 mas level.

  9. A New Multiscale Technique for Time-Accurate Geophysics Simulations

    NASA Astrophysics Data System (ADS)

    Omelchenko, Y. A.; Karimabadi, H.

    2006-12-01

    Large-scale geophysics systems are frequently described by multiscale reactive flow models (e.g., wildfire and climate models, multiphase flows in porous rocks, etc.). Accurate and robust simulations of such systems by traditional time-stepping techniques face a formidable computational challenge. Explicit time integration suffers from global (CFL and accuracy) timestep restrictions due to inhomogeneous convective and diffusion processes, as well as closely coupled physical and chemical reactions. Application of adaptive mesh refinement (AMR) to such systems may not be always sufficient since its success critically depends on a careful choice of domain refinement strategy. On the other hand, implicit and timestep-splitting integrations may result in a considerable loss of accuracy when fast transients in the solution become important. To address this issue, we developed an alternative explicit approach to time-accurate integration of such systems: Discrete-Event Simulation (DES). DES enables asynchronous computation by automatically adjusting the CPU resources in accordance with local timescales. This is done by encapsulating flux-conservative updates of numerical variables in the form of events, whose execution and synchronization is explicitly controlled by imposing accuracy and causality constraints. As a result, at each time step DES self-adaptively updates only a fraction of the global system state, which eliminates unnecessary computation of inactive elements. DES can be naturally combined with various mesh generation techniques. The event-driven paradigm results in robust and fast simulation codes, which can be efficiently parallelized via a new preemptive event processing (PEP) technique. We discuss applications of this novel technology to time-dependent diffusion-advection-reaction and CFD models representative of various geophysics applications.
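    The asynchronous, event-driven integration idea described above can be sketched in a few lines. The 1D diffusion setting, per-cell diffusivities, and all names below are illustrative assumptions, not the authors' DES code; in particular, neighbor values are simply read at their current local time rather than interpolated, which a production DES would handle with explicit causality constraints.

```python
import heapq

def des_diffusion(u, D, dx, t_end, safety=0.4):
    """Asynchronous explicit integration of 1D diffusion: each cell advances
    with its own locally stable timestep, scheduled as events in a heap."""
    n = len(u)
    t_local = [0.0] * n                 # each cell carries its own clock

    def local_dt(i):
        # explicit-diffusion stability limit for cell i: dt <= dx^2 / (2 D_i)
        return safety * dx * dx / (2.0 * D[i])

    events = [(local_dt(i), i) for i in range(n)]
    heapq.heapify(events)
    while events:
        t_i, i = heapq.heappop(events)
        if t_i > t_end:
            continue                    # past the horizon: retire this cell
        dt = t_i - t_local[i]
        left = u[i - 1] if i > 0 else u[i]          # zero-flux boundaries
        right = u[i + 1] if i < n - 1 else u[i]
        u[i] += dt * D[i] * (left - 2.0 * u[i] + right) / (dx * dx)
        t_local[i] = t_i
        heapq.heappush(events, (t_i + local_dt(i), i))
    return u
```

    Because each cell schedules its own next update, regions with short local timescales (large D[i]) consume CPU time while quiescent regions are visited rarely, which is the self-adaptivity the abstract describes.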

  10. Assessment of vulnerability in karst aquifers using a quantitative integrated numerical model: catchment characterization and high resolution monitoring - Application to semi-arid regions- Lebanon.

    NASA Astrophysics Data System (ADS)

    Doummar, Joanna; Aoun, Michel; Andari, Fouad

    2016-04-01

    Karst aquifers are highly heterogeneous and characterized by a duality of recharge (concentrated and fast versus diffuse and slow) and a duality of flow which directly influences groundwater flow and spring responses. Given this heterogeneity in flow and infiltration, karst aquifers do not always obey standard hydraulic laws, so the assessment of their vulnerability proves challenging. Studies have shown that the vulnerability of aquifers is highly governed by recharge to groundwater. On the other hand, specific parameters appear to play a major role in the spatial and temporal distribution of infiltration on a karst system, thus greatly influencing the discharge rates observed at a karst spring, and consequently the vulnerability of the spring. This heterogeneity can only be depicted using an integrated numerical model to quantify recharge spatially and assess the spatial and temporal vulnerability of a catchment to contamination. In the framework of a three-year PEER NSF/USAID funded project, the vulnerability of a karst catchment in Lebanon is assessed quantitatively using a numerical approach. The aim of the project is also to refine actual evapotranspiration rates and the spatial recharge distribution in a semi-arid environment. For this purpose, a monitoring network has been operating since July 2014 on two pilot karst catchments (drained by the Qachqouch and Assal Springs) to collect high-resolution data for an integrated catchment numerical model (MIKE SHE, DHI) covering climate, the unsaturated zone, and the saturated zone. Catchment characterization essential for the model included geological mapping and a survey of karst features (e.g., dolines), as these contribute to fast flow. Tracer experiments were performed under different flow conditions (snowmelt and low flow) to delineate the catchment area and reveal groundwater velocities and the response to snowmelt events. An assessment of spring response after precipitation events allowed the estimation of the

  11. Estimation of Geologic Storage Capacity of Carbon Dioxide in the Bukpyeong Basin, Korea Using Integrated Three-Dimensional Geologic Formation Modeling and Thermo-Hydrological Numerical Modeling

    NASA Astrophysics Data System (ADS)

    Kim, J.; Kihm, J.; Park, S.; SNU CO2 GEO-SEQ TEAM

    2011-12-01

    A conventional method, suggested by NETL (2007), has been widely used for estimating the geologic storage capacity of carbon dioxide in sedimentary basins. Because of its simple procedure, it has been straightforwardly applied even to spatially very complicated sedimentary basins. The results from the conventional method are thus often not accurate and reliable, because it cannot consider the spatial distributions of fluid conditions and carbon dioxide properties, which are not uniform but variable within sedimentary basins. To overcome this limitation of the conventional method, a new method, which can consider such spatially variable distributions of fluid conditions and carbon dioxide properties within sedimentary basins, is suggested and applied in this study. In this new method, a three-dimensional geologic formation model of a target sedimentary basin is first established and discretized into volume elements. The fluid conditions (i.e., pressure, temperature, and salt concentration) within each element are then obtained by performing thermo-hydrological numerical modeling. The carbon dioxide properties (i.e., phase, density, dynamic viscosity, and solubility in groundwater) within each element are then calculated from a thermodynamic database under the corresponding fluid conditions. Finally, the geologic storage capacity of carbon dioxide within each element is estimated using the corresponding carbon dioxide properties as well as the porosity and element volume, and that within the whole sedimentary basin is determined by summation over all elements. This new method is applied to the Bukpyeong Basin, which is one of the prospective offshore sedimentary basins for geologic storage of carbon dioxide in Korea. A three-dimensional geologic formation model of the Bukpyeong Basin is first established considering the elevation data of the boundaries between the geologic formations obtained from seismic surveys and geologic maps at the sea floor surface. This geologic
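    The final summation step of the element-wise method described above is a simple weighted sum. The sketch below illustrates only that bookkeeping, with made-up numbers and a flat storage-efficiency factor standing in for the thermo-hydrological modeling and thermodynamic lookups the abstract describes; none of the names or values come from the study itself.

```python
def co2_storage_capacity(elements, efficiency=0.05):
    """Sum element-wise CO2 mass capacity: density * porosity * volume per
    element, scaled by a storage-efficiency factor (illustrative value)."""
    total_kg = 0.0
    for e in elements:
        # in the method described above, density comes from a thermodynamic
        # database evaluated at each element's modeled pressure, temperature,
        # and salinity; here it is simply supplied per element
        total_kg += e["density_kg_m3"] * e["porosity"] * e["volume_m3"] * efficiency
    return total_kg
```

    Because density and porosity vary element by element, a basin with strong depth gradients yields a total that a single basin-averaged estimate (the conventional approach criticized above) cannot reproduce.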

  12. Accurate deterministic solutions for the classic Boltzmann shock profile

    NASA Astrophysics Data System (ADS)

    Yue, Yubei

    The Boltzmann equation or Boltzmann transport equation is a classical kinetic equation devised by Ludwig Boltzmann in 1872. It is regarded as a fundamental law in rarefied gas dynamics. Rather than using macroscopic quantities such as density, temperature, and pressure to describe the underlying physics, the Boltzmann equation uses a distribution function in phase space to describe the physical system, and all the macroscopic quantities are weighted averages of the distribution function. The information contained in the Boltzmann equation is surprisingly rich, and the Euler and Navier-Stokes equations of fluid dynamics can be derived from it using series expansions. Moreover, the Boltzmann equation can reach regimes far from the capabilities of fluid dynamical equations, such as the realm of rarefied gases---the topic of this thesis. Although the Boltzmann equation is very powerful, it is extremely difficult to solve in most situations. Thus the only hope is to solve it numerically. But soon one finds that even a numerical simulation of the equation is extremely difficult, due to both the complex and high-dimensional integral in the collision operator, and the hyperbolic phase-space advection terms. For this reason, until a few years ago most numerical simulations had to rely on Monte Carlo techniques. In this thesis I will present a new and robust numerical scheme to compute direct deterministic solutions of the Boltzmann equation, and I will use it to explore some classical gas-dynamical problems. In particular, I will study in detail one of the most famous and intrinsically nonlinear problems in rarefied gas dynamics, namely the accurate determination of the Boltzmann shock profile for a gas of hard spheres.

  13. A new numerical approach to solve Thomas-Fermi model of an atom using bio-inspired heuristics integrated with sequential quadratic programming.

    PubMed

    Raja, Muhammad Asif Zahoor; Zameer, Aneela; Khan, Aziz Ullah; Wazwaz, Abdul Majid

    2016-01-01

    In this study, a novel bio-inspired computing approach is developed to analyze the dynamics of the nonlinear singular Thomas-Fermi equation (TFE) arising in potential and charge density models of an atom, by exploiting the strength of a finite difference scheme (FDS) for discretization and optimization through genetic algorithms (GAs) hybridized with sequential quadratic programming (SQP). The FDS procedures are used to transform the TFE differential equations into a system of nonlinear equations. A fitness function is constructed based on the residual error of the constituent equations in the mean square sense, and the problem is formulated as a minimization problem. Optimization of parameters for the system is carried out with GAs, used as a tool for viable global search, integrated with the SQP algorithm for rapid refinement of the results. The design scheme is applied to solve the TFE for five different scenarios by taking various step sizes and different input intervals. Comparison of the proposed results with state-of-the-art numerical and analytical solutions reveals the worth of our scheme in terms of accuracy and convergence. The reliability and effectiveness of the proposed scheme are validated by consistently obtaining optimal values of statistical performance indices calculated for a sufficiently large number of independent runs to establish its significance. PMID:27610319
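    The two-stage scheme described above (global evolutionary search, then local refinement) can be sketched generically. The toy quadratic "residual" objective, the averaging crossover, and the coordinate-descent polish below are all illustrative stand-ins; in particular, the paper's refinement stage is SQP, which the simple polish here mimics only in spirit.

```python
import random

def fitness(x):
    # stand-in for a mean-square residual of a discretized system;
    # its minimum is at (1, -2) by construction
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def ga(pop_size=30, gens=60, seed=0):
    """Elitist genetic algorithm: keep the best half, refill by averaging
    crossover of two random elites plus small Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            children.append([(ai + bi) / 2 + rng.gauss(0, 0.1)
                             for ai, bi in zip(a, b)])
        pop = elite + children
    return min(pop, key=fitness)

def polish(x, step=0.05, iters=200):
    """Local refinement by coordinate descent with a shrinking step
    (a crude stand-in for the SQP stage used in the paper)."""
    best = list(x)
    for _ in range(iters):
        for i in range(len(best)):
            for d in (step, -step):
                trial = list(best)
                trial[i] += d
                if fitness(trial) < fitness(best):
                    best = trial
        step *= 0.95
    return best
```

    The division of labor matches the abstract: the GA supplies a viable global search over the parameter space, and the local stage rapidly refines whatever basin the GA lands in.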

  14. On the mathematical integration of the nervous tissue based on the S-propagator formalism: II. Numerical simulations for molecular-dependent activity.

    PubMed

    Chauvet, Pierre; Chauvet, Gilbert A

    2002-12-01

    In a previous article (G. A. Chauvet, 2002), presenting a theoretical approach for integrating physiological functions in nervous tissue, we showed that a specific hierarchical representation, incorporating the novel concepts of non-symmetry and non-locality, and an appropriate formalism (the S-propagator formalism) could provide a good description of a living system in general, and the nervous system in particular. We now show that, in the framework of this theory, in spite of the complexity inherent to nervous tissue and the great number of elementary mechanisms involved, the numerical resolution of the global non-local system allows us to envisage simulations that would otherwise be impossible to realize. Here, the study is limited to one physiological function, i.e., the spatiotemporal variation of membrane potential in neuronal tissue. We demonstrate that the role of the kinetic constants at the molecular level is in agreement with the observed activity of the neuronal network. The method also reveals the critical role of the maximum density of synapses along the dendritic tree in the behavior of the network. This illustrates the great advantage of the theoretical approach in studying separately any other complementary coupled function without having to modify the computational methods used here. The application of this method to the spatiotemporal variation of synaptic efficacy, which is the basis of the learning and memory function, will be treated in a forthcoming paper. PMID:15011284

  15. A new numerical approach to solve Thomas-Fermi model of an atom using bio-inspired heuristics integrated with sequential quadratic programming.

    PubMed

    Raja, Muhammad Asif Zahoor; Zameer, Aneela; Khan, Aziz Ullah; Wazwaz, Abdul Majid

    2016-01-01

    In this study, a novel bio-inspired computing approach is developed to analyze the dynamics of the nonlinear singular Thomas-Fermi equation (TFE) arising in potential and charge density models of an atom, by exploiting the strength of a finite difference scheme (FDS) for discretization and optimization through genetic algorithms (GAs) hybridized with sequential quadratic programming (SQP). The FDS procedures are used to transform the TFE differential equations into a system of nonlinear equations. A fitness function is constructed based on the residual error of the constituent equations in the mean square sense, and the problem is formulated as a minimization problem. Optimization of parameters for the system is carried out with GAs, used as a tool for viable global search, integrated with the SQP algorithm for rapid refinement of the results. The design scheme is applied to solve the TFE for five different scenarios by taking various step sizes and different input intervals. Comparison of the proposed results with state-of-the-art numerical and analytical solutions reveals the worth of our scheme in terms of accuracy and convergence. The reliability and effectiveness of the proposed scheme are validated by consistently obtaining optimal values of statistical performance indices calculated for a sufficiently large number of independent runs to establish its significance.

  16. Simulating signatures of two-dimensional electronic spectra of the Fenna-Matthews-Olson complex: By using a numerical path integral.

    PubMed

    Liang, Xian-Ting

    2014-07-28

    A framework for simulating electronic spectra from photon-echo experiments is constructed by using a numerical path integral technique. This method is non-Markovian and nonperturbative and, more importantly, is not limited by a fixed form of the spectral density functions of the environment. Next, a two-dimensional (2D) third-order electronic spectrum of a dimer system is simulated. The spectrum is in agreement with the experimental and theoretical results previously reported [for example, M. Khalil, N. Demirdöven, and A. Tokmakoff, Phys. Rev. Lett. 90, 047401 (2003)]. Finally, a 2D third-order electronic spectrum of the Fenna-Matthews-Olson (FMO) complex is simulated by using the Debye, Ohmic, and Adolphs and Renger spectral density functions. It is shown that this method can clearly produce the spectral signatures of the FMO complex by using only the Adolphs and Renger spectral density function. Plots of the evolution of the diagonal and cross-peaks show that they are oscillating with the population time. PMID:25084890

  17. Simulating signatures of two-dimensional electronic spectra of the Fenna-Matthews-Olson complex: By using a numerical path integral

    SciTech Connect

    Liang, Xian-Ting

    2014-07-28

    A framework for simulating electronic spectra from photon-echo experiments is constructed by using a numerical path integral technique. This method is non-Markovian and nonperturbative and, more importantly, is not limited by a fixed form of the spectral density functions of the environment. Next, a two-dimensional (2D) third-order electronic spectrum of a dimer system is simulated. The spectrum is in agreement with the experimental and theoretical results previously reported [for example, M. Khalil, N. Demirdöven, and A. Tokmakoff, Phys. Rev. Lett. 90, 047401 (2003)]. Finally, a 2D third-order electronic spectrum of the Fenna-Matthews-Olson (FMO) complex is simulated by using the Debye, Ohmic, and Adolphs and Renger spectral density functions. It is shown that this method can clearly produce the spectral signatures of the FMO complex by using only the Adolphs and Renger spectral density function. Plots of the evolution of the diagonal and cross-peaks show that they are oscillating with the population time.

  18. Magnitude knowledge: the common core of numerical development.

    PubMed

    Siegler, Robert S

    2016-05-01

    The integrated theory of numerical development posits that a central theme of numerical development from infancy to adulthood is progressive broadening of the types and ranges of numbers whose magnitudes are accurately represented. The process includes four overlapping trends: (1) representing increasingly precisely the magnitudes of non-symbolic numbers, (2) connecting small symbolic numbers to their non-symbolic referents, (3) extending understanding from smaller to larger whole numbers, and (4) accurately representing the magnitudes of rational numbers. The present review identifies substantial commonalities, as well as differences, in these four aspects of numerical development. With both whole and rational numbers, numerical magnitude knowledge is concurrently correlated with, longitudinally predictive of, and causally related to multiple aspects of mathematical understanding, including arithmetic and overall math achievement. Moreover, interventions focused on increasing numerical magnitude knowledge often generalize to other aspects of mathematics. The cognitive processes of association and analogy seem to play especially large roles in this development. Thus, acquisition of numerical magnitude knowledge can be seen as the common core of numerical development.

  20. Root Water Uptake and Tracer Transport in a Lupin Root System: Integration of Magnetic Resonance Images and the Numerical Model RSWMS

    NASA Astrophysics Data System (ADS)

    Pohlmeier, Andreas; Vanderborght, Jan; Haber-Pohlmeier, Sabina; Wienke, Sandra; Vereecken, Harry; Javaux, Mathieu

    2010-05-01

Combining experimental studies with detailed deterministic models helps in understanding root water uptake processes. Recently, Javaux et al. developed the RSWMS model by integrating Doussan's root model into the well-established SWMS code [1], which simulates water and solute transport in unsaturated soil [2, 3]. To confront RSWMS modeling results with experimental data, we used magnetic resonance imaging (MRI) to monitor root water uptake in situ. Non-invasive 3D MR imaging of root system architecture, water content distributions, and tracer transport was performed and compared with numerical model calculations. Two MRI experiments were performed and modeled: (i) water uptake during drought stress and (ii) transport of a locally injected tracer (Gd-DTPA) through the soil-root system driven by root water uptake. First, the high-resolution MRI image (0.23 x 0.23 x 0.5 mm) of the root system was converted into a continuous root system skeleton by a combination of thresholding, region-growing filtering, and final manual 3D redrawing of the root strands. Second, the two experimental scenarios were simulated with RSWMS at a resolution of about 3 mm. For scenario (i), the numerical simulations reproduced the general trend, namely the strong water depletion from the top layer of the soil. However, the depletion zones in the vicinity of the roots could not be reproduced, owing to a poor initial estimate of the soil hydraulic properties, which caused larger differences in water content to equilibrate instantaneously. Determination of the unsaturated conductivities at low water content was needed to improve the model calculations. For scenario (ii), the simulations confirmed advective solute transport towards the roots. 1. Simunek, J., T. Vogel, and M.T. van Genuchten, The SWMS_2D Code for Simulating Water Flow and Solute Transport in Two-Dimensional Variably Saturated Media, Version 1.21. 1994, U.S. Salinity Laboratory, USDA, ARS: Riverside, California

  1. Numerical simulation of the flow field around a complete aircraft

    NASA Technical Reports Server (NTRS)

    Shang, J. S.; Scherr, S. J.

    1986-01-01

The present effort represents a first attempt at numerical simulation of the flow field around a complete aircraft-like lifting configuration utilizing the Reynolds-averaged Navier-Stokes equations. The numerical solution generated for the experimental aircraft concept X24C-10D at a Mach number of 5.95 yielded accurate predictions not only of detailed flow properties but also of the integrated aerodynamic coefficients. In addition, the present analysis demonstrated that a page structure of data collected into cyclic blocks is an efficient and viable means for processing the Navier-Stokes equations on the CRAY XMP-22 computer with an external memory device.

  2. Investigation of Geomorphic and Seismic Effects on the 1959 Madison Canyon, Montana, Landslide Using an Integrated Field, Engineering Geomorphology Mapping, and Numerical Modelling Approach

    NASA Astrophysics Data System (ADS)

    Wolter, A.; Gischig, V.; Stead, D.; Clague, J. J.

    2016-06-01

We present an integrated approach to investigate the seismically triggered Madison Canyon landslide (volume = 20 Mm3), which killed 26 people in Montana, USA, in 1959. We created engineering geomorphological maps and conducted field surveys, long-range terrestrial digital photogrammetry, and preliminary 2D numerical modelling with the objective of determining the conditioning factors, mechanisms, movement behaviour, and evolution of the failure. We emphasise the importance of both endogenic (i.e. seismic) and exogenic (i.e. geomorphic) processes in conditioning the slope for failure and hypothesise a sequence of events based on the morphology of the deposit and seismic modelling. A section of the slope was slowly deforming before a magnitude-7.5 earthquake with an epicentre 30 km away triggered the catastrophic failure in August 1959. The failed rock mass rapidly fragmented as it descended the slope towards Madison River. Part of the mass remained relatively intact as it moved on a layer of pulverised debris. The main slide was followed by several debris slides, slumps, and rockfalls. The slide debris was extensively modified soon after the disaster by the US Army Corps of Engineers to provide a stable outflow channel from newly formed Earthquake Lake. Our modelling and observations show that the landslide occurred as a result of long-term damage of the slope induced by fluvial undercutting, erosion, weathering, and past seismicity, and due to the short-term triggering effect of the 1959 earthquake. Static models suggest the slope was stable prior to the 1959 earthquake; failure would have required a significant reduction in material strength. Preliminary dynamic models indicate that repeated seismic loading was a critical process for catastrophic failure. Although the ridge geometry and existing tension cracks in the initiation zone amplified ground motions, the most important factors in initiating failure were pre-existing discontinuities and seismically induced

  3. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.

  4. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than in the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross Section Working Group.

  5. Accurate determination of characteristic relative permeability curves

    NASA Astrophysics Data System (ADS)

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

A recently developed technique to accurately characterize sub-core-scale heterogeneity is applied to investigate the factors responsible for the flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory-measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core-scale heterogeneity and outlet boundary effects; however, this has only been demonstrated numerically for highly simplified models of porous media. In this paper, the flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided the flow rate is sufficiently high and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, even for geologically complex models, when using accurate three-dimensional models.
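
    The "standard interpretation methods" mentioned above amount to inverting Darcy's law for each phase in a steady-state coreflood. A minimal sketch follows; all core dimensions, fluid properties, and readings are hypothetical, not the paper's data.

    ```python
    import math

    # For a steady-state coreflood, each phase's effective relative
    # permeability follows from inverting Darcy's law:
    #   kr = q * mu * L / (k * A * dP)
    # All numbers below are hypothetical illustration values.

    def relperm_from_steady_state(q, mu, L, k, A, dP):
        """Effective relative permeability of one phase (SI units):
        flow rate q [m^3/s], viscosity mu [Pa*s], core length L [m],
        absolute permeability k [m^2], area A [m^2], pressure drop dP [Pa]."""
        return q * mu * L / (k * A * dP)

    # Hypothetical reading on a ~100 mD, 5 cm diameter, 10 cm long core:
    k = 1e-13                       # ~100 mD (1 mD ~ 9.87e-16 m^2)
    A = math.pi * 0.025 ** 2        # cross-sectional area, m^2
    kr = relperm_from_steady_state(q=1e-8, mu=5e-5, L=0.1, k=k, A=A, dP=2.5e3)
    ```

    The flowrate dependence the abstract discusses arises when the single-valued kr returned by this formula is treated as characteristic even though heterogeneity and outlet effects make the measured dP rate-dependent.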

  6. A high order accurate difference scheme for complex flow fields

    SciTech Connect

    Dexun Fu; Yanwen Ma

    1997-06-01

    A high order accurate finite difference method for direct numerical simulation of coherent structure in the mixing layers is presented. The reason for oscillation production in numerical solutions is analyzed. It is caused by a nonuniform group velocity of wavepackets. A method of group velocity control for the improvement of the shock resolution is presented. In numerical simulation the fifth-order accurate upwind compact difference relation is used to approximate the derivatives in the convection terms of the compressible N-S equations, a sixth-order accurate symmetric compact difference relation is used to approximate the viscous terms, and a three-stage R-K method is used to advance in time. In order to improve the shock resolution the scheme is reconstructed with the method of diffusion analogy which is used to control the group velocity of wavepackets. 18 refs., 12 figs., 1 tab.

  7. Universality: Accurate Checks in Dyson's Hierarchical Model

    NASA Astrophysics Data System (ADS)

    Godina, J. J.; Meurice, Y.; Oktay, M. B.

    2003-06-01

In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10⁻⁸ and Δ = 0.4259469 ± 10⁻⁷, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.
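
    The linear-fitting step can be illustrated with a small sketch. The power-law form, amplitude, and data range below are invented stand-ins, not the hierarchical-model susceptibility data of the talk.

    ```python
    import numpy as np

    # Hypothetical illustration: estimate a leading exponent gamma from
    # synthetic susceptibility data obeying chi(t) = A * t**(-gamma),
    # where t = beta_c - beta is the distance from criticality.
    gamma_true = 1.29914073
    t = np.logspace(-6, -2, 40)        # synthetic reduced distances
    chi = 2.7 * t ** (-gamma_true)     # noise-free synthetic susceptibility

    # log(chi) = log(A) - gamma * log(t), so the fitted slope gives -gamma.
    slope, intercept = np.polyfit(np.log(t), np.log(chi), 1)
    gamma_est = -slope
    ```

    On noise-free data the fit recovers gamma to floating-point accuracy; real estimates at this precision also require handling the subleading correction governed by Δ, as the abstract notes.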

  8. Benchmark values for molecular two-electron integrals arising from the Dirac equation

    NASA Astrophysics Data System (ADS)

    Baǧcı, A.; Hoggan, P. E.

    2015-02-01

    The two-center two-electron Coulomb and hybrid integrals arising in relativistic and nonrelativistic ab initio calculations on molecules are evaluated. Compact, arbitrarily accurate expressions are obtained. They are expressed through molecular auxiliary functions and evaluated with the numerical Global-adaptive method for arbitrary values of parameters in the noninteger Slater-type orbitals. Highly accurate benchmark values are presented for these integrals. The convergence properties of new molecular auxiliary functions are investigated. The comparison for two-center two-electron integrals is made with results obtained from single center expansions by translation of the wave function to a single center with integer principal quantum numbers and results obtained from the Cuba numerical integration algorithm, respectively. The procedures discussed in this work are capable of yielding highly accurate two-center two-electron integrals for all ranges of orbital parameters.

  9. Numerical analysis of the asymptotic two-point boundary value solution for N-body trajectories.

    NASA Technical Reports Server (NTRS)

    Lancaster, J. E.; Allemann, R. A.

    1972-01-01

    Previously published asymptotic solutions for lunar and interplanetary trajectories have been modified and combined to formulate a general analytical boundary value solution applicable to a broad class of trajectory problems. In addition, the earlier first-order solutions have been extended to second-order to determine if improved accuracy is possible. Comparisons between the asymptotic solution and numerical integration for several lunar and interplanetary trajectories show that the asymptotic solution is generally quite accurate. Also, since no iterations are required, a solution to the boundary value problem is obtained in a fraction of the time required for numerically integrated solutions.
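
    As a sketch of the numerically integrated solutions that such asymptotic formulas are benchmarked against, the following fixed-step RK4 integrator propagates a nondimensional two-body orbit. The circular initial state, step count, and units (mu = GM = 1) are illustrative choices, not the paper's setup.

    ```python
    import numpy as np

    # Fixed-step classical RK4 for the planar two-body problem.
    mu = 1.0

    def deriv(state):
        r, v = state[:2], state[2:]
        a = -mu * r / np.linalg.norm(r) ** 3   # inverse-square gravity
        return np.concatenate([v, a])

    def rk4_step(state, h):
        k1 = deriv(state)
        k2 = deriv(state + 0.5 * h * k1)
        k3 = deriv(state + 0.5 * h * k2)
        k4 = deriv(state + h * k3)
        return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

    # A circular orbit of radius 1 has speed 1 and period 2*pi in these units.
    state = np.array([1.0, 0.0, 0.0, 1.0])     # [x, y, vx, vy]
    n_steps = 2000
    for _ in range(n_steps):
        state = rk4_step(state, 2 * np.pi / n_steps)
    ```

    After one full period the state returns to the initial condition up to the scheme's O(h⁴) truncation error, which is the accuracy a closed-form asymptotic solution must be measured against while avoiding the per-step cost entirely.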

  10. GO2OGS 1.0: a versatile workflow to integrate complex geological information with fault data into numerical simulation models

    NASA Astrophysics Data System (ADS)

    Fischer, T.; Naumov, D.; Sattler, S.; Kolditz, O.; Walther, M.

    2015-11-01

We offer a versatile workflow to convert geological models built with the Paradigm™ GOCAD© (Geological Object Computer Aided Design) software into the open-source VTU (Visualization Toolkit unstructured grid) format for use in numerical simulation models. Tackling relevant scientific questions or engineering tasks often involves multidisciplinary approaches. Conversion workflows are needed as a means of communication between the diverse tools of the various disciplines. Our approach offers an open-source, platform-independent, robust, and comprehensible method that is potentially useful for a multitude of environmental studies. With two application examples in the Thuringian Syncline, we show how a heterogeneous geological GOCAD model including multiple layers and faults can be used for numerical groundwater flow modeling, in our case employing the OpenGeoSys open-source numerical toolbox for groundwater flow simulations. The presented workflow offers the chance to incorporate increasingly detailed data, utilizing the growing availability of computational power to simulate numerical models.

  11. A RANS/DES Numerical Procedure for Axisymmetric Flows with and without Strong Rotation

    SciTech Connect

    Andrade, Andrew Jacob

    2007-01-01

A RANS/DES numerical procedure with an extended Lax-Wendroff control-volume scheme and turbulence model is described for the accurate simulation of internal/external axisymmetric flow with and without strong rotation. This new procedure is an extension, from Cartesian to cylindrical coordinates, of (1) a second-order accurate multi-grid, control-volume integration scheme, and (2) a k-ω turbulence model. This paper outlines both the axisymmetric corrections to the mentioned numerical schemes and the development of techniques pertaining to numerical dissipation, multi-block connectivity, parallelization, etc. Furthermore, analytical and experimental case studies are presented to demonstrate accuracy and computational efficiency. Notes are also made on the numerical stability of highly rotational flows.

  12. Explicit numerical solutions of a microbial survival model under nonisothermal conditions.

    PubMed

    Zhu, Si; Chen, Guibing

    2016-03-01

Differential equations used to describe the original and modified Geeraerd models were, respectively, simplified into an explicit equation in which the integration of the specific inactivation rate with respect to time was numerically approximated using Simpson's rule. The explicit numerical solutions were then used to simulate microbial survival curves and to fit nonisothermal survival data for identifying model parameters in Microsoft Excel. The results showed that the explicit numerical solutions provide an easy way to accurately simulate microbial survival and estimate model parameters from nonisothermal survival data using the Geeraerd models.
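
    The core device here, replacing the time integral of the specific inactivation rate with a composite Simpson's rule sum, can be sketched as follows. The rate law and linear temperature ramp are hypothetical, not the Geeraerd-model parameters of the study, and first-order kinetics is assumed for the survival formula.

    ```python
    import numpy as np

    # For first-order inactivation, log10(N/N0) = -(1/ln 10) * integral of
    # k(t) dt; the integral is approximated with composite Simpson's rule.

    def k_of_t(t):
        temp = 60.0 + 0.5 * t                       # hypothetical ramp, deg C
        return 0.05 * np.exp(0.3 * (temp - 70.0))   # hypothetical rate, 1/min

    def simpson(f, a, b, n):
        # composite Simpson's rule on n subintervals (n must be even)
        x = np.linspace(a, b, n + 1)
        y = f(x)
        h = (b - a) / n
        return (h / 3.0) * (y[0] + y[-1]
                            + 4.0 * y[1:-1:2].sum()
                            + 2.0 * y[2:-1:2].sum())

    integral = simpson(k_of_t, 0.0, 30.0, 200)      # 30 min of heating
    log10_survival = -integral / np.log(10.0)
    ```

    The study embeds such sums directly in spreadsheet formulas; the free-standing version above is only meant to make the approximation explicit.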

  13. Numerical evaluation of the incomplete airy functions and their application to high frequency scattering and diffraction

    NASA Technical Reports Server (NTRS)

    Constantinides, E. D.; Marhefka, R. J.

    1992-01-01

    The incomplete Airy integrals serve as canonical functions for the uniform ray optical solutions to several high frequency scattering and diffraction problems that involve a class of integrals characterized by two stationary points that are arbitrarily close to one another or to an integration endpoint. Integrals of such analytical properties describe transition region phenomena associated with composite shadow boundaries. An efficient and accurate method for computing the incomplete Airy functions would make the solutions to such problems useful for engineering purposes. Here, a convergent series solution form for the incomplete Airy functions is derived. Asymptotic expansions involving several terms were also developed and serve as large argument approximations. The combination of the series solution form with the asymptotic formulae provides for an efficient and accurate computation of the incomplete Airy functions. Validation of accuracy is accomplished using direct numerical integration data.

  14. A new approach to compute accurate velocity of meteors

    NASA Astrophysics Data System (ADS)

    Egal, Auriane; Gural, Peter; Vaubaillon, Jeremie; Colas, Francois; Thuillot, William

    2016-10-01

The CABERNET project was designed to push the limits of meteoroid orbit measurements by improving the determination of meteor velocities. Indeed, despite the development of camera networks dedicated to meteor observation, there is still an important discrepancy between measured meteoroid orbits and theoretical results. The gap between the observed and theoretical semi-major axes of the orbits is especially significant; an accurate determination of meteoroid orbits therefore largely depends on the computation of pre-atmospheric velocities. It is thus imperative to determine how to increase the precision of the velocity measurements. In this work, we analyse the different methods currently used to compute meteor velocities and trajectories: the intersecting-planes method developed by Ceplecha (1987), the least-squares method of Borovicka (1990), and the multi-parameter fitting (MPF) method published by Gural (2012). To objectively compare the performance of these techniques, we simulated realistic meteors ('fakeors') reproducing the measurement errors of several camera networks. Some fakeors were built following the propagation models studied by Gural (2012), and others were created by numerical integration using the Borovicka et al. (2007) model. Different optimization techniques were also investigated in order to pick the most suitable one for solving the MPF, and the influence of the trajectory geometry on the result is also presented. We present here the results of an improved implementation of the multi-parameter fitting that allows accurate orbit computation of meteors with CABERNET. The comparison of the different velocity computations suggests that although the MPF is by far the best method for solving the trajectory and velocity of a meteor, the ill-conditioning of the cost functions used can lead to large estimation errors for noisy

  15. Progress in fast, accurate multi-scale climate simulations

    SciTech Connect

    Collins, W. D.; Johansen, H.; Evans, K. J.; Woodward, C. S.; Caldwell, P. M.

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  16. Progress in fast, accurate multi-scale climate simulations

    DOE PAGES

    Collins, W. D.; Johansen, H.; Evans, K. J.; Woodward, C. S.; Caldwell, P. M.

    2015-06-01

We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  17. Progress in Fast, Accurate Multi-scale Climate Simulations

    SciTech Connect

    Collins, William D; Johansen, Hans; Evans, Katherine J; Woodward, Carol S.; Caldwell, Peter

    2015-01-01

We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that these approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  18. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2003-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  19. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2002-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  20. Profitable capitation requires accurate costing.

    PubMed

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to ensure that capitation bids are based upon accurate costs rather than simple averages.
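
    The contrast between aggregate and activity-based allocation can be made concrete with a toy calculation; the treatment names, volumes, times, and dollar figures below are all invented for illustration.

    ```python
    # Hypothetical comparison: flat ratio-of-cost-to-treatment (aggregate)
    # allocation versus activity-based costing (ABC), which traces the
    # nursing minutes each treatment actually consumes.

    nursing_cost_pool = 100_000.0          # total nursing salaries, $
    treatments = {
        # treatment: (count per period, nursing minutes consumed each)
        "simple_dressing": (800, 10),
        "complex_wound":   (200, 90),
    }

    total_treatments = sum(n for n, _ in treatments.values())
    total_minutes = sum(n * m for n, m in treatments.values())

    # Aggregate: every treatment absorbs the same average overhead.
    aggregate_cost = {t: nursing_cost_pool / total_treatments
                      for t in treatments}

    # ABC: overhead follows the activity driver (minutes consumed).
    abc_cost = {t: nursing_cost_pool * m / total_minutes
                for t, (n, m) in treatments.items()}
    ```

    Under the aggregate method both treatments cost the same, so a capitation bid built on the average undercharges the complex wound care and overcharges the simple dressing; ABC exposes the difference.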

  1. Accurate momentum transfer cross section for the attractive Yukawa potential

    SciTech Connect

    Khrapak, S. A.

    2014-04-15

An accurate expression for the momentum transfer cross section for the attractive Yukawa potential is proposed. This simple analytic expression agrees with the numerical results to better than ±2% in the regime relevant for ion-particle collisions in complex (dusty) plasmas.

  2. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope-steepening technique, which has no effect in smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification of this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and to an efficient approximation of Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical in smooth regions, and yield high resolution at discontinuities.
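
    The median function mentioned above can be sketched alongside a classic limiter; the monotonized-central slope and interface clipping shown are illustrative choices in the same spirit, not necessarily the paper's exact constraint.

    ```python
    # Building blocks for monotone piecewise linear reconstruction.

    def minmod(a, b):
        # argument of smaller magnitude when signs agree, else zero
        if a * b <= 0.0:
            return 0.0
        return a if abs(a) < abs(b) else b

    def median(a, b, c):
        # middle value of three, via the identity median = a + minmod(b-a, c-a)
        return a + minmod(b - a, c - a)

    def mc_slope(um, u0, up):
        # monotonized-central slope for cell averages (um, u0, up):
        # central difference limited against twice the one-sided differences
        return minmod(minmod(0.5 * (up - um), 2.0 * (u0 - um)),
                      2.0 * (up - u0))

    def constrain_face(u0, u_face, up):
        # keep a reconstructed interface value between the neighboring cell
        # averages with a single median call, the compact form of the
        # monotonicity constraint
        return median(u0, u_face, up)
    ```

    On linear data the limited slope equals the exact slope, at a local extremum it vanishes, and an overshooting interface value is clipped back to the nearer cell average.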

  3. Numerical nebulae

    NASA Astrophysics Data System (ADS)

    Rijkhorst, Erik-Jan

    2005-12-01

    The late stages of evolution of stars like our Sun are dominated by several episodes of violent mass loss. Space based observations of the resulting objects, known as Planetary Nebulae, show a bewildering array of highly symmetric shapes. The interplay between gasdynamics and radiative processes determines the morphological outcome of these objects, and numerical models for astrophysical gasdynamics have to incorporate these effects. This thesis presents new numerical techniques for carrying out high-resolution three-dimensional radiation hydrodynamical simulations. Such calculations require parallelization of computer codes, and the use of state-of-the-art supercomputer technology. Numerical models in the context of the shaping of Planetary Nebulae are presented, providing insight into their origin and fate.

  4. Accurate documentation and wound measurement.

    PubMed

    Hampton, Sylvie

    This article, part 4 in a series on wound management, addresses the sometimes routine yet crucial task of documentation. Clear and accurate records of a wound enable its progress to be determined so the appropriate treatment can be applied. Thorough records mean any practitioner picking up a patient's notes will know when the wound was last checked, how it looked and what dressing and/or treatment was applied, ensuring continuity of care. Documenting every assessment also has legal implications, demonstrating due consideration and care of the patient and the rationale for any treatment carried out. Part 5 in the series discusses wound dressing characteristics and selection.

  5. Integration of natural data within a numerical model of ablative subduction: A possible interpretation for the Alpine dynamics of the Austroalpine crust.

    NASA Astrophysics Data System (ADS)

    Roda, M.; Spalla, M. I.; Marotta, A. M.

    2012-04-01

    A numerical modelling approach is used to validate the physical and geological reliability of the ablative subduction mechanism during Alpine convergence in order to interpret the tectonic and metamorphic evolution of an inner portion of the Alpine belt: the Austroalpine Domain. The model predictions and the natural data for the Austroalpine of the Western Alps agree very well in terms of P-T peak conditions, relative chronology of peak and exhumation events, P-T-t paths, thermal gradients and the tectonic evolution of the continental rocks. These findings suggest that a pre-collisional evolution of this domain, with the burial of the continental rocks (induced by ablative subduction of the overriding Adria plate) and their exhumation (driven by an upwelling flow generated in a hydrated mantle wedge) could be a valid mechanism that reproduces the actual tectono-metamorphic configuration of this part of the Alps. There is less agreement between the model predictions and the natural data for the Austroalpine of the Central-Eastern Alps. Based on the natural data available in the literature, a critical discussion of the other proposed mechanisms is presented, and additional geological factors that should be considered within the numerical model are suggested to improve the fitting to the numerical results; these factors include variations in the continental and/or oceanic thickness, variation of the subduction rate and/or slab dip, the initial thermal state of the passive margin, the occurrence of continental collision and an oblique convergence.

  6. Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2009-01-01

    Recent advances in numerical relativity have fueled an explosion of progress in understanding the predictions of Einstein's theory of gravity, General Relativity, for the strong field dynamics, the gravitational radiation wave forms, and consequently the state of the remnant produced from the merger of compact binary objects. I will review recent results from the field, focusing on mergers of two black holes.

  7. Evaluating Definite Integrals on a Computer Theory and Practice. Applications of Numerical Analysis. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Unit 432.

    ERIC Educational Resources Information Center

    Wagon, Stanley

    This document explores two methods of obtaining numbers that are approximations of certain definite integrals. The methods covered are the Trapezoidal Rule and Romberg's method. Since the formulas used involve considerable calculation, a computer is normally used. Some of the problems and pitfalls of computer implementation, such as roundoff…
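
    The two techniques covered by the unit can be sketched in a few lines (a generic illustration, not the unit's own program; function names are ours):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def romberg(f, a, b, levels=5):
    """Romberg's method: Richardson extrapolation of trapezoidal estimates."""
    R = [[trapezoid(f, a, b, 1)]]
    for k in range(1, levels):
        R.append([trapezoid(f, a, b, 2 ** k)])
        for j in range(1, k + 1):
            # Each column cancels the next error term of the previous one.
            R[k].append(R[k][j - 1] + (R[k][j - 1] - R[k - 1][j - 1]) / (4 ** j - 1))
    return R[-1][-1]

# Integral of sin(x) on [0, pi] is exactly 2.
print(romberg(math.sin, 0.0, math.pi))
```

With only five levels (at most 16 subintervals) the extrapolated value is already accurate to many digits, while the plain trapezoidal estimate at the same cost is not — which is the point the unit makes about Romberg's method.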

  8. IIR approximations to the fractional differentiator/integrator using Chebyshev polynomials theory.

    PubMed

    Romero, M; de Madrid, A P; Mañoso, C; Vinagre, B M

    2013-07-01

    This paper deals with the use of Chebyshev polynomials theory to achieve accurate discrete-time approximations to the fractional-order differentiator/integrator in terms of IIR filters. These filters are obtained using the Chebyshev-Padé and the Rational Chebyshev approximations, two highly accurate numerical methods that can be computed with ease using available software. They are compared against other highly accurate approximations proposed in the literature. It is also shown how the frequency response of the fractional-order integrator approximations can be easily improved at low frequencies. PMID:23507506
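
    As a flavor of the Chebyshev approximation theory underlying such filters (this is not the authors' Chebyshev-Padé or Rational Chebyshev IIR design, just the basic polynomial building block), interpolation at Chebyshev points yields near-minimax accuracy:

```python
import numpy as np

# Chebyshev interpolation of exp(x) on [-1, 1]: sampling at Chebyshev points
# gives a polynomial close to the best (minimax) approximation of that degree.
deg = 10
k = np.arange(deg + 1)
nodes = np.cos((2 * k + 1) * np.pi / (2 * (deg + 1)))   # Chebyshev points
coef = np.polynomial.chebyshev.chebfit(nodes, np.exp(nodes), deg)

x = np.linspace(-1.0, 1.0, 1001)
err = np.max(np.abs(np.polynomial.chebyshev.chebval(x, coef) - np.exp(x)))
print(err)  # of order 1e-10 for degree 10
```

The Chebyshev-Padé step in the paper then converts such series into rational (IIR) form, trading polynomial degree for a far more compact numerator/denominator pair.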

  9. Toward Accurate and Quantitative Comparative Metagenomics.

    PubMed

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  10. Toward Accurate and Quantitative Comparative Metagenomics

    PubMed Central

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  11. How Accurately can we Calculate Thermal Systems?

    SciTech Connect

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium-fueled thermal system, i.e., our typical thermal reactors.

  12. SPLASH: Accurate OH maser positions

    NASA Astrophysics Data System (ADS)

    Walsh, Andrew; Gomez, Jose F.; Jones, Paul; Cunningham, Maria; Green, James; Dawson, Joanne; Ellingsen, Simon; Breen, Shari; Imai, Hiroshi; Lowe, Vicki; Jones, Courtney

    2013-10-01

    The hydroxyl (OH) 18 cm lines are powerful and versatile probes of diffuse molecular gas, that may trace a largely unstudied component of the Galactic ISM. SPLASH (the Southern Parkes Large Area Survey in Hydroxyl) is a large, unbiased and fully-sampled survey of OH emission, absorption and masers in the Galactic Plane that will achieve sensitivities an order of magnitude better than previous work. In this proposal, we request ATCA time to follow up OH maser candidates. This will give us accurate (~10") positions of the masers, which can be compared to other maser positions from HOPS, MMB and MALT-45 and will provide full polarisation measurements towards a sample of OH masers that have not been observed in MAGMO.

  13. Accurate thickness measurement of graphene

    NASA Astrophysics Data System (ADS)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  14. Accurate thickness measurement of graphene.

    PubMed

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-03-29

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  15. Numerical modeling and environmental isotope methods in integrated mine-water management: a case study from the Witwatersrand basin, South Africa

    NASA Astrophysics Data System (ADS)

    Mengistu, Haile; Tessema, Abera; Abiye, Tamiru; Demlie, Molla; Lin, Haili

    2015-05-01

    Improved groundwater flow conceptualization was achieved using environmental stable isotope (ESI) and hydrochemical information to complete a numerical groundwater flow model with reasonable certainty. The study aimed to assess the source of excess water at a pumping shaft located near the town of Stilfontein, North West Province, South Africa. The results indicate that the water intercepted at Margaret Shaft comes largely from seepage of a nearby mine tailings dam (Dam 5) and from the upper dolomite aquifer. If pumping at the shaft continues at the current rate and Dam 5 is decommissioned, neighbouring shallow farm boreholes would dry up within approximately 10 years. Stable isotope data of shaft water indicate that up to 50 % of the pumped water from Margaret Shaft is recirculated, mainly from Dam 5. The results are supplemented by tritium data, demonstrating that recent recharge is taking place through open fractures as well as man-made underground workings, whereas hydrochemical data of fissure water samples from roughly 950 m below ground level exhibit mine-water signatures. Pumping at the shaft, which captures shallow groundwater as well as seepage from surface dams, is a highly recommended option for preventing flooding of downstream mines. The results of this research highlight the importance of additional methods (ESI and hydrochemical analyses) to improve flow conceptualization and numerical modelling.

  16. Accurately measuring dynamic coefficient of friction in ultraform finishing

    NASA Astrophysics Data System (ADS)

    Briggs, Dennis; Echaves, Samantha; Pidgeon, Brendan; Travis, Nathan; Ellis, Jonathan D.

    2013-09-01

    UltraForm Finishing (UFF) is a deterministic sub-aperture computer numerically controlled grinding and polishing platform designed by OptiPro Systems. UFF is used to grind and polish a variety of optics from simple spherical to fully freeform, and numerous materials from glasses to optical ceramics. The UFF system consists of an abrasive belt around a compliant wheel that rotates and contacts the part to remove material. This work aims to accurately measure the dynamic coefficient of friction (μ), how it changes as a function of belt wear, and how this ultimately affects material removal rates. The coefficient of friction has been examined in terms of contact mechanics and Preston's equation to determine accurate material removal rates. By accurately predicting changes in μ, polishing iterations can be more accurately predicted, reducing the total number of iterations required to meet specifications. We have established an experimental apparatus that can accurately measure μ by measuring triaxial forces during translating loading conditions or while manufacturing the removal spots used to calculate material removal rates. Using this system, we will demonstrate μ measurements for UFF belts during different states of their lifecycle and assess the material removal function from spot diagrams as a function of wear. Ultimately, we will use this system for qualifying belt-wheel-material combinations to develop a spot-morphing model to better predict instantaneous material removal functions.
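
    Preston's equation, referenced above, relates removal rate to contact pressure and relative speed; a minimal sketch with hypothetical numbers (the Preston coefficient must be calibrated from spot experiments, and it is precisely this calibration that changes as μ drifts with belt wear):

```python
def preston_removal_rate(k_p, pressure, velocity):
    """Preston's equation: dh/dt = k_p * P * V.

    k_p      : Preston coefficient (m^2/N), calibrated from removal spots
    pressure : contact pressure P (Pa)
    velocity : relative tool-surface speed V (m/s)
    returns  : material removal rate dh/dt (m/s)
    """
    return k_p * pressure * velocity

# Hypothetical values for illustration only.
rate = preston_removal_rate(k_p=1.0e-13, pressure=5.0e4, velocity=2.0)
print(rate)  # 1e-08 m/s, i.e. 10 nm/s
```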

  17. Accurate and rapid micromixer for integrated microfluidic devices

    SciTech Connect

    Van Dam, R. Michael; Liu, Kan; Shen, Kwang -Fu Clifton; Tseng, Hsian -Rong

    2015-09-22

    The invention may provide a microfluidic mixer having a droplet generator and a droplet mixer in selective fluid connection with the droplet generator. The droplet generator comprises first and second fluid chambers that are structured to be filled with respective first and second fluids that can each be held in isolation for a selectable period of time. The first and second fluid chambers are further structured to be reconfigured into a single combined chamber to allow the first and second fluids in the first and second fluid chambers to come into fluid contact with each other in the combined chamber for a selectable period of time prior to being brought into the droplet mixer.

  18. On the reliability of gravitational N-body integrations

    NASA Technical Reports Server (NTRS)

    Quinlan, Gerald D.; Tremaine, Scott

    1992-01-01

    In a self-gravitating system of point particles such as a spherical star cluster, small disturbances to an orbit grow exponentially on a time-scale comparable with the crossing time. The results of N-body integrations are therefore extremely sensitive to numerical errors: in practice it is almost impossible to follow orbits of individual particles accurately for more than a few crossing times. We demonstrate that numerical orbits in the gravitational N-body problem are often shadowed by true orbits for many crossing times. This result enhances our confidence in the use of N-body integrations to study the evolution of stellar systems.
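
    The exponential sensitivity described above is easy to reproduce: integrate two copies of a system from nearly identical initial conditions and watch the separation grow. A minimal sketch with a leapfrog integrator and a hypothetical softened three-body configuration (G = m = 1; step size and softening chosen for illustration, not accuracy):

```python
import numpy as np

def accelerations(pos, soft=0.1):
    """Pairwise softened gravitational accelerations (G = m = 1)."""
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        d = pos - pos[i]                                  # vectors to other bodies
        r3 = (np.sum(d * d, axis=1) + soft ** 2) ** 1.5
        r3[i] = np.inf                                    # no self-interaction
        acc[i] = np.sum(d / r3[:, None], axis=0)
    return acc

def leapfrog(pos, vel, dt, steps):
    """Kick-drift-kick leapfrog integration."""
    for _ in range(steps):
        vel = vel + 0.5 * dt * accelerations(pos)
        pos = pos + dt * vel
        vel = vel + 0.5 * dt * accelerations(pos)
    return pos, vel

# Two copies of a 3-body system differing by 1e-9 in one coordinate.
p0 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
v0 = np.array([[0.0, 0.0], [0.0, 0.5], [-0.5, 0.0]])
p1 = p0.copy()
p1[0, 0] += 1e-9
pa, _ = leapfrog(p0, v0, 0.01, 2000)
pb, _ = leapfrog(p1, v0, 0.01, 2000)
print(np.max(np.abs(pa - pb)))  # typically far larger than the 1e-9 offset
```

After some tens of crossing times the two trajectories no longer agree to any useful precision — which is why the shadowing result quoted in the abstract, rather than per-particle accuracy, justifies trusting statistical conclusions from N-body runs.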

  19. Tool for the Integrated Dynamic Numerical Propulsion System Simulation (NPSS)/Turbine Engine Closed-Loop Transient Analysis (TTECTrA) User's Guide

    NASA Technical Reports Server (NTRS)

    Chin, Jeffrey C.; Csank, Jeffrey T.

    2016-01-01

    The Tool for Turbine Engine Closed-Loop Transient Analysis (TTECTrA ver2) is a control design tool that enables preliminary estimation of transient performance for models without requiring a full nonlinear controller to be designed. The program is compatible with subsonic engine models implemented in the MATLAB/Simulink (The Mathworks, Inc.) environment and the Numerical Propulsion System Simulation (NPSS) framework. At a specified flight condition, TTECTrA will design a closed-loop controller meeting user-defined requirements in a semi- or fully automated fashion. Multiple specifications may be provided, in which case TTECTrA will design one controller for each, producing a collection of controllers in a single run. Each resulting controller contains a setpoint map, a schedule of setpoint controller gains, and limiters, all contributing to transient characteristics. The goal of the program is to provide steady-state engine designers with more immediate feedback on transient engine performance earlier in the design cycle.

  20. An integrated strategy for rapid and accurate determination of free and cell-bound microcystins and related peptides in natural blooms by liquid chromatography-electrospray-high resolution mass spectrometry and matrix-assisted laser desorption/ionization time-of-flight/time-of-flight mass spectrometry using both positive and negative ionization modes.

    PubMed

    Flores, Cintia; Caixach, Josep

    2015-08-14

    An integrated high resolution mass spectrometry (HRMS) strategy has been developed for rapid and accurate determination of free and cell-bound microcystins (MCs) and related peptides in water blooms. The natural samples (water and algae) were filtered for independent analysis of the aqueous and sestonic fractions. These fractions were analyzed by MALDI-TOF/TOF-MS and ESI-Orbitrap-HCD-MS. MALDI, ESI and the study of fragmentation sequences have provided crucial structural information. The potential of combined positive and negative ionization modes, full-scan and fragmentation acquisition modes (TOF/TOF and HCD), and high-resolution accurate-mass measurement was investigated in order to allow unequivocal determination of MCs. In addition, reliable quantitation was possible by HRMS. This combination helped to decrease the probability of false positives and negatives, as an alternative to commonly used LC-ESI-MS/MS methods. The analysis was non-targeted and therefore allowed all MC analogs to be analyzed concurrently without any pre-selection of target MCs. Furthermore, archived data were subjected to retrospective "post-targeted" analysis, and the samples were screened for other potential toxins and related peptides such as anabaenopeptins. Finally, the suggested MS protocol and identification tools were applied to the analysis of characteristic water blooms from Spanish reservoirs. PMID:26141269

  1. An integrated strategy for rapid and accurate determination of free and cell-bound microcystins and related peptides in natural blooms by liquid chromatography-electrospray-high resolution mass spectrometry and matrix-assisted laser desorption/ionization time-of-flight/time-of-flight mass spectrometry using both positive and negative ionization modes.

    PubMed

    Flores, Cintia; Caixach, Josep

    2015-08-14

    An integrated high resolution mass spectrometry (HRMS) strategy has been developed for rapid and accurate determination of free and cell-bound microcystins (MCs) and related peptides in water blooms. The natural samples (water and algae) were filtered for independent analysis of the aqueous and sestonic fractions. These fractions were analyzed by MALDI-TOF/TOF-MS and ESI-Orbitrap-HCD-MS. MALDI, ESI and the study of fragmentation sequences have provided crucial structural information. The potential of combined positive and negative ionization modes, full-scan and fragmentation acquisition modes (TOF/TOF and HCD), and high-resolution accurate-mass measurement was investigated in order to allow unequivocal determination of MCs. In addition, reliable quantitation was possible by HRMS. This combination helped to decrease the probability of false positives and negatives, as an alternative to commonly used LC-ESI-MS/MS methods. The analysis was non-targeted and therefore allowed all MC analogs to be analyzed concurrently without any pre-selection of target MCs. Furthermore, archived data were subjected to retrospective "post-targeted" analysis, and the samples were screened for other potential toxins and related peptides such as anabaenopeptins. Finally, the suggested MS protocol and identification tools were applied to the analysis of characteristic water blooms from Spanish reservoirs.

  2. Numerical Analysis of Integral Characteristics for the Condenser Setups of Independent Power-Supply Sources with the Closed-Looped Thermodynamic Cycle

    NASA Astrophysics Data System (ADS)

    Vysokomorny, Vladimir S.; Vysokomornaya, Vladimir S.

    2016-02-01

    The mathematical model of heat and mass transfer processes with phase transition is developed. It allows analysis of the integral characteristics of the condenser setup of an independent power-supply plant with the organic Rankine cycle. Different kinds of organic liquids can be used as the coolant and working substance. The temperatures of the working liquid at the condenser outlet are determined for different values of the outside air temperature. A comparative analysis of the utilization efficiency of different cooling systems and organic coolants is carried out.

  3. Accurate D-bar Reconstructions of Conductivity Images Based on a Method of Moment with Sinc Basis

    PubMed Central

    Abbasi, Mahdi

    2014-01-01

    The planar D-bar integral equation is one of the inverse scattering solution methods for complex problems, including the inverse conductivity problem considered in applications such as electrical impedance tomography (EIT). Recently, two different methodologies have been considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves a high computational burden and the other suffers from a low convergence rate (CR). In this paper, a novel high-speed moment method based on the sinc basis is introduced to solve the two-dimensional D-bar integral equation. In this method, all functions within the D-bar integral equation are first expanded using the sinc basis functions. Then, the orthogonal properties of their products dissolve the integral operator of the D-bar equation and yield a discrete convolution equation. That is, the new moment method leads to the equation solution without direct computation of the D-bar integral. The resulting discrete convolution equation may be adapted to a suitable structure and solved using the fast Fourier transform. This allows us to reduce the order of computational complexity to as low as O(N² log N). Simulation results on solving D-bar equations arising in the EIT problem show that the proposed method is accurate with an ultra-linear CR. PMID:24696808
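
    The step from a discrete convolution equation to a fast solver rests on the convolution theorem; in one dimension the idea looks like this (a generic illustration of FFT-based convolution, not the paper's 2-D D-bar solver):

```python
import numpy as np

def circular_convolve(a, b):
    """Circular convolution via FFT: O(N log N) instead of O(N^2)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

rng = np.random.default_rng(0)
a = rng.standard_normal(256)
b = rng.standard_normal(256)

# Direct O(N^2) circular convolution for comparison:
# direct[k] = sum_j a[j] * b[(k - j) mod N]
direct = np.array([np.sum(a * np.roll(b[::-1], k + 1)) for k in range(256)])

print(np.max(np.abs(circular_convolve(a, b) - direct)))  # roundoff-level difference
```

Applied row- and column-wise to the 2-D discrete convolution equation, this is what brings the overall complexity down to O(N² log N) as stated in the abstract.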

  4. Accurate D-bar Reconstructions of Conductivity Images Based on a Method of Moment with Sinc Basis.

    PubMed

    Abbasi, Mahdi

    2014-01-01

    The planar D-bar integral equation is one of the inverse scattering solution methods for complex problems, including the inverse conductivity problem considered in applications such as electrical impedance tomography (EIT). Recently, two different methodologies have been considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves a high computational burden and the other suffers from a low convergence rate (CR). In this paper, a novel high-speed moment method based on the sinc basis is introduced to solve the two-dimensional D-bar integral equation. In this method, all functions within the D-bar integral equation are first expanded using the sinc basis functions. Then, the orthogonal properties of their products dissolve the integral operator of the D-bar equation and yield a discrete convolution equation. That is, the new moment method leads to the equation solution without direct computation of the D-bar integral. The resulting discrete convolution equation may be adapted to a suitable structure and solved using the fast Fourier transform. This allows us to reduce the order of computational complexity to as low as O(N² log N). Simulation results on solving D-bar equations arising in the EIT problem show that the proposed method is accurate with an ultra-linear CR.

  5. An Evaluation of the "Treatment Integrity Planning Protocol" and Two Schedules of Treatment Integrity Self-Report: Impact on Implementation and Report Accuracy

    ERIC Educational Resources Information Center

    Hagermoser Sanetti, Lisa M.; Kratochwill, Thomas R.

    2011-01-01

    The evidence-based practice movement has focused on identifying, disseminating, and promoting the adoption of evidence-based interventions. Despite advances in this movement, numerous barriers, such as the lack of treatment integrity assessment methods, remain as challenges in implementation. Accurate teacher self-report could be an efficient…

  6. Geometrically invariant and high capacity image watermarking scheme using accurate radial transform

    NASA Astrophysics Data System (ADS)

    Singh, Chandan; Ranade, Sukhjeet K.

    2013-12-01

    Angular radial transform (ART) is a region based descriptor and possesses many attractive features such as rotation invariance, low computational complexity and resilience to noise which make them more suitable for invariant image watermarking than that of many transform domain based image watermarking techniques. In this paper, we introduce ART for fast and geometrically invariant image watermarking scheme with high embedding capacity. We also develop an accurate and fast framework for the computation of ART coefficients based on Gaussian quadrature numerical integration, 8-way symmetry/anti-symmetry properties and recursive relations for the calculation of sinusoidal kernel functions. ART coefficients so computed are then used for embedding the binary watermark using dither modulation. Experimental studies reveal that the proposed watermarking scheme not only provides better robustness against geometric transformations and other signal processing distortions, but also has superior advantages over the existing ones in terms of embedding capacity, speed and visual imperceptibility.
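
    Gauss-Legendre quadrature, used above to compute the ART coefficients, is available through NumPy; a minimal sketch of integrating a radial integrand (the integrand here is illustrative, not the actual ART kernel):

```python
import numpy as np

def gauss_legendre_integral(f, a, b, n=16):
    """Integrate f on [a, b] with an n-point Gauss-Legendre rule."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes and weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)       # map nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))

# Example: integral of r * cos(2*pi*r) over [0, 1]; the exact value is 0.
val = gauss_legendre_integral(lambda r: r * np.cos(2 * np.pi * r), 0.0, 1.0)
print(val)
```

An n-point rule is exact for polynomials up to degree 2n − 1, which is why a modest number of nodes suffices for the smooth sinusoidal kernels of the ART.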

  7. An accurate representation of the motion of Pluto

    NASA Astrophysics Data System (ADS)

    Goffin, E.; Meeus, J.; Steyaert, C.

    1986-02-01

    Three series of periodic terms are presented which make it possible to calculate the heliocentric coordinates of Pluto (longitude, latitude, radius vector) during a time interval of more than two centuries. The terms and coefficients have been derived indirectly by least-square approximation of a numerical integration of the motion of Pluto. For the years 1885 to 2099, the maximum error is 0.5 arcsec in longitude, 0.1 arcsec in latitude, and 0.00002 AU in radius vector as compared to the numerical integration.
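
    Deriving periodic terms by least-squares approximation of an integrated ephemeris, as described above, reduces to a linear fit once the frequencies are fixed. A toy sketch with a synthetic signal and hypothetical frequencies (not Pluto's actual terms):

```python
import numpy as np

# Synthetic "numerically integrated" signal: a constant plus two periodic terms.
t = np.linspace(0.0, 200.0, 2000)
signal = 1.3 + 0.7 * np.cos(0.05 * t) + 0.2 * np.sin(0.11 * t)

# Least-squares fit of cos/sin terms at known (here: assumed) frequencies.
freqs = [0.05, 0.11]
cols = [np.ones_like(t)]
for nu in freqs:
    cols += [np.cos(nu * t), np.sin(nu * t)]
A = np.column_stack(cols)
coeffs, *_ = np.linalg.lstsq(A, signal, rcond=None)
print(coeffs)  # recovers the amplitudes 1.3, 0.7, 0.0, 0.0, 0.2
```

Evaluating the fitted series is then far cheaper than re-running the numerical integration, which is the practical value of such a representation.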

  8. Accurate SHAPE-directed RNA structure determination

    PubMed Central

    Deigan, Katherine E.; Li, Tian W.; Mathews, David H.; Weeks, Kevin M.

    2009-01-01

    Almost all RNAs can fold to form extensive base-paired secondary structures. Many of these structures then modulate numerous fundamental elements of gene expression. Deducing these structure–function relationships requires that it be possible to predict RNA secondary structures accurately. However, RNA secondary structure prediction for large RNAs, such that a single predicted structure for a single sequence reliably represents the correct structure, has remained an unsolved problem. Here, we demonstrate that quantitative, nucleotide-resolution information from a SHAPE experiment can be interpreted as a pseudo-free energy change term and used to determine RNA secondary structure with high accuracy. Free energy minimization, by using SHAPE pseudo-free energies, in conjunction with nearest neighbor parameters, predicts the secondary structure of deproteinized Escherichia coli 16S rRNA (>1,300 nt) and a set of smaller RNAs (75–155 nt) with accuracies of up to 96–100%, which are comparable to the best accuracies achievable by comparative sequence analysis. PMID:19109441

  9. Accurate adiabatic correction in the hydrogen molecule

    NASA Astrophysics Data System (ADS)

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-01

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H₂, HD, HT, D₂, DT, and T₂ has been determined. For the ground state of H₂ the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  10. Fast and Provably Accurate Bilateral Filtering.

    PubMed

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
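    As a reference point, a minimal brute-force implementation makes the O(S)-per-pixel cost concrete. This is the exact filter the fast algorithm approximates, not the paper's O(1) method; parameter values are illustrative:

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Direct bilateral filter: O(S) work per pixel, S = (2*radius+1)^2.

    Each output pixel is a weighted average of its window, with weights
    equal to a spatial Gaussian times a range Gaussian on intensity
    differences (edge-preserving smoothing).
    """
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(window - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = (wgt * window).sum() / wgt.sum()
    return out
```

On a constant image the weights are uniform and the filter is the identity, a useful sanity check before comparing against any fast approximation.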

  13. Numerical investigation of tail buffet on F-18 aircraft

    NASA Technical Reports Server (NTRS)

    Rizk, Yehia M.; Guruswamy, Guru P.; Gee, Ken

    1992-01-01

    Numerical investigation of vortex induced tail buffet is conducted on the F-18 aircraft at high angles of attack. The Reynolds-averaged Navier-Stokes equations are integrated using a time-accurate, implicit procedure. A generalized overset zonal grid scheme is used to decompose the computational space around the complete aircraft with faired-over inlet. A weak coupling between the aerodynamics and structures is assumed to compute the structural oscillation of the flexible vertical tail. Time-accurate computations of the turbulent flow around the F-18 aircraft at 30 degrees angle of attack show the surface and off-surface flowfield details, including the unsteadiness created by the vortex burst and its interaction with the vertical twin tail which causes the tail buffet. The effect of installing a LEX fence on modifying the vortex structure upstream of the tail is also examined.

  14. Integration of numerical models and geoinformatic techniques in the delimitation of a protection zone for the MGB 319 complex multi-aquifer system in southwest Poland

    NASA Astrophysics Data System (ADS)

    Gurwin, Jacek

    2015-09-01

    The study area, situated near the city of Wrocław in southwest Poland, is part of the hydrogeological system of the Quaternary/Neogene MGB 319, inclusive of a buried valley of high water potential, named the Bogdaszowice structure. This structure is an alternative source of water supply for the Wrocław city area. Numerical modelling is the most effective tool in establishing a groundwater protection strategy for Major Groundwater Basins (MGBs) in complex aquifer systems. In the present study, the first step was to assess the hydrodynamic conditions of the Radakowice groundwater intake by analyses of head contours, pathlines, average flow times and capture zones of particular wells. Subsequently, these results were used in combination with other data and compiled as GIS layers. The spatial distribution of hydraulic conductivity was based on the lithology of surface sediments. Other data sets such as the thickness of the unsaturated zone, average soil moisture and infiltration rate were taken either directly from the model or were calculated. Based on the input data obtained, vertical flow time calculations for every model cell were made. The final outcome is a map of the protection zone for the aquifer system of the MGB 319.

  15. Accurate adjoint design sensitivities for nano metal optics.

    PubMed

    Hansen, Paul; Hesselink, Lambertus

    2015-09-01

    We present a method for obtaining accurate numerical design sensitivities for metal-optical nanostructures. Adjoint design sensitivity analysis, long used in fluid mechanics and mechanical engineering for both optimization and structural analysis, is beginning to be used for nano-optics design, but it fails for sharp-cornered metal structures because the numerical error in electromagnetic simulations of metal structures is highest at sharp corners. These locations feature strong field enhancement and contribute strongly to design sensitivities. By using high-accuracy FEM calculations and rounding sharp features to a finite radius of curvature we obtain highly-accurate design sensitivities for 3D metal devices. To provide a bridge to the existing literature on adjoint methods in other fields, we derive the sensitivity equations for Maxwell's equations in the PDE framework widely used in fluid mechanics. PMID:26368483
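    The adjoint identity the abstract relies on can be illustrated on a generic discretized linear model: for A(p)u = b and objective J = cᵀu, one extra solve Aᵀλ = c yields dJ/dp = -λᵀ(∂A/∂p)u, independent of the number of parameters. The numpy sketch below is a schematic of that identity (with an arbitrary toy matrix), not the authors' FEM workflow:

```python
import numpy as np

# Toy "discretized PDE": A(p) u = b, objective J(p) = c^T u(p).
def solve_state(p):
    A = np.array([[2.0 + p, -1.0],
                  [-1.0,     2.0]])
    b = np.array([1.0, 0.0])
    return A, np.linalg.solve(A, b)

c = np.array([1.0, 1.0])
p = 0.3
A, u = solve_state(p)

lam = np.linalg.solve(A.T, c)                 # adjoint solve: A^T lam = c
dA_dp = np.array([[1.0, 0.0], [0.0, 0.0]])    # only A[0,0] depends on p
dJ_dp = -lam @ (dA_dp @ u)                    # adjoint sensitivity

# Finite-difference check of the adjoint gradient
eps = 1e-6
_, u2 = solve_state(p + eps)
fd = (c @ u2 - c @ u) / eps
print(dJ_dp, fd)
```

The point emphasized in the abstract is that the accuracy of u and lam near singular features (sharp metal corners) directly limits the accuracy of dJ_dp, which is why corner rounding and high-accuracy FEM matter.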

  16. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  17. Accurate Mass Measurements in Proteomics

    SciTech Connect

    Liu, Tao; Belov, Mikhail E.; Jaitly, Navdeep; Qian, Weijun; Smith, Richard D.

    2007-08-01

    To understand different aspects of life at the molecular level, one would think that ideally all components of specific processes should be individually isolated and studied in detail. Reductionist approaches, i.e., studying one biological event at a one-gene or one-protein-at-a-time basis, indeed have made significant contributions to our understanding of many basic facts of biology. However, these individual “building blocks” cannot be visualized as a comprehensive “model” of the life of cells, tissues, and organisms, without using more integrative approaches.1,2 For example, the emerging field of “systems biology” aims to quantify all of the components of a biological system to assess their interactions and to integrate diverse types of information obtainable from this system into models that could explain and predict behaviors.3-6 Recent breakthroughs in genomics, proteomics, and bioinformatics are making this daunting task a reality.7-14 Proteomics, the systematic study of the entire complement of proteins expressed by an organism, tissue, or cell under a specific set of conditions at a specific time (i.e., the proteome), has become an essential enabling component of systems biology. While the genome of an organism may be considered static over short timescales, the expression of that genome as the actual gene products (i.e., mRNAs and proteins) is a dynamic event that is constantly changing due to the influence of environmental and physiological conditions. Exclusive monitoring of the transcriptomes can be carried out using high-throughput cDNA microarray analysis,15-17 however the measured mRNA levels do not necessarily correlate strongly with the corresponding abundances of proteins.18-20 The actual amount of functional proteins can be altered significantly and become independent of mRNA levels as a result of post-translational modifications (PTMs),21 alternative splicing,22,23 and protein turnover.24,25 Moreover, the functions of expressed

  18. AN INTEGRAL EQUATION REPRESENTATION OF WIDE-BAND ELECTROMAGNETIC SCATTERING BY THIN SHEETS

    EPA Science Inventory

    An efficient, accurate numerical modeling scheme has been developed, based on the integral equation solution to compute electromagnetic (EM) responses of thin sheets over a wide frequency band. The thin-sheet approach is useful for simulating the EM response of a fracture system ...

  19. Numeric simulation of plant signaling networks.

    PubMed

    Genoud, T; Trevino Santa Cruz, M B; Métraux, J P

    2001-08-01

    Plants have evolved an intricate signaling apparatus that integrates relevant information and allows an optimal response to environmental conditions. For instance, the coordination of defense responses against pathogens involves sophisticated molecular detection and communication systems. Multiple protection strategies may be deployed differentially by the plant according to the nature of the invading organism. These responses are also influenced by the environment, metabolism, and developmental stage of the plant. Though the cellular signaling processes traditionally have been described as linear sequences of events, it is now evident that they may be represented more accurately as network-like structures. The emerging paradigm can be represented readily with the use of Boolean language. This digital (numeric) formalism allows an accurate qualitative description of the signal transduction processes, and a dynamic representation through computer simulation. Moreover, it provides the required power to process the increasing amount of information emerging from the fields of genomics and proteomics, and from the use of new technologies such as microarray analysis. In this review, we have used the Boolean language to represent and analyze part of the signaling network of disease resistance in Arabidopsis. PMID:11500542
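    The Boolean formalism described in the review amounts to a synchronous update of logical rules. The tiny network below is a hypothetical illustration of the machinery (the node names and rules are invented for the sketch, not the Arabidopsis disease-resistance network analyzed in the paper):

```python
# Minimal synchronous Boolean-network simulation. Node names and rules
# are hypothetical: a pathogen input drives a salicylic-acid (SA)
# branch, a wounding input drives a jasmonate (JA) branch, and the two
# branches inhibit each other; PR1 stands in for a defense-gene output.
def step(s):
    return {
        "pathogen": s["pathogen"],                  # external input
        "wound":    s["wound"],                     # external input
        "SA":  s["pathogen"] and not s["JA"],
        "JA":  s["wound"] and not s["SA"],
        "PR1": s["SA"],
    }

state = {"pathogen": True, "wound": False,
         "SA": False, "JA": False, "PR1": False}
for _ in range(4):                                  # iterate to a fixed point
    state = step(state)
print(state)
```

Running the loop shows the qualitative, dynamic behavior the review advocates: the pathogen input switches the SA branch and its downstream output on, and the state is a fixed point of the update rule.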

  20. Quantifying Numerical Dissipation due to Filtering in Implicit LES

    NASA Astrophysics Data System (ADS)

    Cadieux, Francois; Domaradzki, Julian Andrzej

    2015-11-01

    Numerical dissipation plays an important role in LES and has given rise to the widespread use of implicit LES in the academic community. Recent results demonstrate that even with higher order codes, the use of stabilizing filters can act as a source of numerical dissipation strong enough to be comparable to an explicit subgrid-scale model (Cadieux et al., JFE 136-6). The amount of numerical dissipation added by such a filtering operation in the simulation of a laminar separation bubble is quantified using a new method developed by Schranner et al., Computers & Fluids 114. It is then compared to a case where the filter is turned off, as well as the subgrid-scale dissipation that would be added by the σ model. The sensitivity of the method to the choice of subdomain location and size is explored. The effect of different derivative approximations and integration methods is also scrutinized. The method is shown to be robust and accurate for large subdomains. Results show that without filtering, numerical dissipation in the high order code is negligible, and that the filtering operation at the resolution considered adds substantial numerical dissipation in the same regions and at a similar rate as the σ subgrid-scale model would. NSF grant CBET-1233160.

  1. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  2. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  3. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  4. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  5. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  6. Accurate free energy calculation along optimized paths.

    PubMed

    Chen, Changjun; Xiao, Yi

    2010-05-01

    The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory, but difficult in practice because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its nonhydrogen atom dihedrals in the initial state and set the equilibrium angles of the potentials as those in the final state. Through a series of steps of geometrical optimization, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate free energy difference. To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method in accurate free energy calculation.
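    Thermodynamic integration, the first of the path-based methods named above, evaluates ΔF = ∫₀¹ ⟨∂U/∂λ⟩_λ dλ along a switching path. The sketch below applies it to a toy 1D harmonic system where the ensemble average is known in closed form, so the quadrature can be checked against the exact answer; the potentials and parameters are illustrative, not the peptide systems of the paper:

```python
import math

# Thermodynamic integration on U(x; lam) = k(lam) * x^2 / 2 with
# k(lam) = (1-lam)*k0 + lam*k1. For this Gaussian ensemble,
# <dU/dlam> = (k1 - k0) * kT / (2 * k(lam)) exactly, and the free
# energy difference is dF = (kT/2) * ln(k1/k0).
k0, k1, kT = 1.0, 4.0, 1.0

def dU_dlam(lam):
    k = (1.0 - lam) * k0 + lam * k1
    return (k1 - k0) * kT / (2.0 * k)

# Trapezoidal quadrature along the switching coordinate lambda
n = 1000
lams = [i / n for i in range(n + 1)]
dF = sum((dU_dlam(a) + dU_dlam(b)) / 2.0 * (b - a)
         for a, b in zip(lams, lams[1:]))

exact = 0.5 * kT * math.log(k1 / k0)
print(dF, exact)
```

In a real simulation the analytic average is replaced by a molecular dynamics estimate at each λ window, which is exactly where a smooth, short path (the paper's contribution) pays off.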

  7. Dynamical Approach Study of Spurious Numerics in Nonlinear Computations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Mansour, Nagi (Technical Monitor)

    2002-01-01

    The last two decades have been an era when computation is ahead of analysis and when very large scale practical computations are increasingly used in poorly understood multiscale complex nonlinear physical problems and non-traditional fields. Ensuring a higher level of confidence in the predictability and reliability (PAR) of these numerical simulations could play a major role in furthering the design, understanding, affordability and safety of our next generation air and space transportation systems, and systems for planetary and atmospheric sciences, and in understanding the evolution and origin of life. The need to guarantee PAR becomes acute when computations offer the ONLY way of solving these types of data-limited problems. Employing theory from nonlinear dynamical systems, some building blocks to ensure a higher level of confidence in PAR of numerical simulations have been revealed by the author and world-expert collaborators in relevant fields. Five building blocks with supporting numerical examples were discussed. The next step is to utilize knowledge gained by including nonlinear dynamics, bifurcation and chaos theories as an integral part of the numerical process. The third step is to design integrated criteria for reliable and accurate algorithms that cater to the different multiscale nonlinear physics. This includes but is not limited to the construction of appropriate adaptive spatial and temporal discretizations that are suitable for the underlying governing equations. In addition, a multiresolution wavelets approach for adaptive numerical dissipation/filter controls for high speed turbulence, acoustics and combustion simulations will be sought. These steps are cornerstones for guarding against spurious numerical solutions that are solutions of the discretized counterparts but are not solutions of the underlying governing equations.

  8. Learning numerical progressions.

    PubMed

    Vitz, P C; Hazan, D N

    1974-01-01

    Learning of simple numerical progressions and compound progressions formed by combining two or three simple progressions is investigated. In two experiments, time to solution was greater for compound vs simple progressions; greater the higher the progression's solution level; and greater if the progression consisted of large vs small numbers. A set of strategies is proposed to account for progression learning based on the assumption that the subject computes differences between integers, differences between differences, etc., in a hierarchical fashion. Two measures of progression difficulty, each a summary of the strategies, are proposed; C1 is a count of the number of differences needed to solve a progression; C2 is the same count with higher level differences given more weight. The measures accurately predict in both experiments the mean time to solve 16 different progressions, with C2 being somewhat superior. The measures also predict the learning difficulty of 10 other progressions reported by Bjork (1968).
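    One reading of the C1 measure (counting the differences computed while reducing a progression to a constant difference row) can be sketched directly; the exact bookkeeping in the original study may differ, so treat this as an illustration of hierarchical differencing rather than a reimplementation:

```python
# C1-style difficulty measure: count the pairwise differences computed
# while reducing a progression to a constant row of differences.
def c1_count(seq):
    count = 0
    row = list(seq)
    while len(set(row)) > 1:          # not yet a constant row
        row = [b - a for a, b in zip(row, row[1:])]
        count += len(row)
    return count

print(c1_count([2, 4, 6, 8, 10]))     # arithmetic: one difference level
print(c1_count([1, 4, 9, 16, 25]))    # squares: two difference levels
```

An arithmetic progression needs one level of differencing (4 differences for 5 terms), while the squares need two levels (4 + 3 = 7), matching the intuition that higher solution levels cost more.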

  9. Numerical solution methods for viscoelastic orthotropic materials

    NASA Technical Reports Server (NTRS)

    Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.

    1988-01-01

    Numerical solution methods for viscoelastic orthotropic materials, specifically fiber reinforced composite materials, are examined. The methods include classical lamination theory using time increments, direct solution of the Volterra integral, Zienkiewicz's linear Prony series method, and a new method called the Nonlinear Differential Equation Method (NDEM) which uses a nonlinear Prony series. The criteria used for comparison of the various methods include the stability of the solution technique, time step size stability, computer solution time, and computer memory storage. The Volterra integral approach allowed the implementation of higher order solution techniques but had difficulties solving singular and weakly singular compliance functions. The Zienkiewicz solution technique, which requires the viscoelastic response to be modeled by a Prony series, works well for linear viscoelastic isotropic materials and small time steps. The new method, NDEM, uses a modified Prony series which allows nonlinear stress effects to be included and can be used with orthotropic nonlinear viscoelastic materials. The NDEM technique is shown to be accurate and stable for both linear and nonlinear conditions with minimal computer time.
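    The Prony series that the Zienkiewicz and NDEM techniques assume represents the relaxation modulus as a constant plus a sum of decaying exponentials. A minimal sketch, with illustrative coefficients rather than real material data:

```python
import math

# Prony-series relaxation modulus:
#   E(t) = E_inf + sum_i E_i * exp(-t / tau_i)
# The coefficients below are illustrative, not fitted material data.
E_inf = 1.0
terms = [(0.5, 0.1), (0.3, 1.0), (0.2, 10.0)]   # (E_i, tau_i) pairs

def relaxation_modulus(t):
    return E_inf + sum(E_i * math.exp(-t / tau_i) for E_i, tau_i in terms)

# Stress relaxation under a constant strain eps0 applied at t = 0:
# sigma(t) = E(t) * eps0 decays from (E_inf + sum E_i)*eps0 toward E_inf*eps0.
eps0 = 0.01
for t in (0.0, 1.0, 100.0):
    print(t, relaxation_modulus(t) * eps0)
```

The exponential terms are what make time-incremental solution schemes efficient: each term's hereditary integral can be updated recursively from one time step to the next instead of re-integrating the full history.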

  10. Accurate pressure gradient calculations in hydrostatic atmospheric models

    NASA Technical Reports Server (NTRS)

    Carroll, John J.; Mendez-Nunez, Luis R.; Tanrikulu, Saffet

    1987-01-01

    A method for the accurate calculation of the horizontal pressure gradient acceleration in hydrostatic atmospheric models is presented which is especially useful in situations where the isothermal surfaces are not parallel to the vertical coordinate surfaces. The present method is shown to be exact if the potential temperature lapse rate is constant between the vertical pressure integration limits. The technique is applied to both the integration of the hydrostatic equation and the computation of the slope correction term in the horizontal pressure gradient. A fixed vertical grid and a dynamic grid defined by the significant levels in the vertical temperature distribution are employed.
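    The vertical pressure integration discussed above can be illustrated on the simplest case, an isothermal layer, where the hydrostatic equation dz = -(R·T/g)·dp/p has the closed-form hypsometric solution Δz = (R·T/g)·ln(p_bot/p_top). The sketch below checks a numerical integration against that result; it illustrates the integration step only, not the paper's slope-correction scheme:

```python
import math

# Hydrostatic thickness of an isothermal layer by numerical
# integration of dz = -(R*T/g) * dp/p, checked against the analytic
# hypsometric equation. Layer values are illustrative.
R, g, T = 287.0, 9.81, 250.0            # dry-air gas constant, gravity, temperature
p_bot, p_top = 100000.0, 50000.0        # pressure limits in Pa

n = 1000
ps = [p_bot + (p_top - p_bot) * i / n for i in range(n + 1)]
z = 0.0
for pa, pb in zip(ps, ps[1:]):
    pm = 0.5 * (pa + pb)                # midpoint rule in pressure
    z += -(R * T / g) * (pb - pa) / pm

exact = (R * T / g) * math.log(p_bot / p_top)
print(z, exact)
```

When temperature varies with height, the accuracy of such an integration depends on what vertical profile is assumed within each layer, which is exactly the point of the constant potential-temperature lapse-rate assumption in the abstract.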

  11. An accurate and efficient Lagrangian sub-grid model for multi-particle dispersion

    NASA Astrophysics Data System (ADS)

    Toschi, Federico; Mazzitelli, Irene; Lanotte, Alessandra S.

    2014-11-01

    Many natural and industrial processes involve the dispersion of particles in turbulent flows. Despite recent theoretical progress in the understanding of particle dynamics in simple turbulent flows, complex geometries often call for numerical approaches based on Eulerian Large Eddy Simulation (LES). One important issue related to the Lagrangian integration of tracers in under-resolved velocity fields is connected to the lack of spatial correlations at unresolved scales. Here we propose a computationally efficient Lagrangian model for the sub-grid velocity of tracers dispersed in statistically homogeneous and isotropic turbulent flows. The model incorporates the multi-scale nature of turbulent temporal and spatial correlations that are essential to correctly reproduce the dynamics of multi-particle dispersion. The new model is able to describe the Lagrangian temporal and spatial correlations in clouds of particles. In particular we show that pair and tetrad dispersion compare well with results from Direct Numerical Simulations of statistically isotropic and homogeneous 3D turbulence. This model may offer an accurate and efficient way to describe multi-particle dispersion in under-resolved turbulent velocity fields such as those employed in Eulerian LES. This work is part of the research programmes FP112 of the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organisation for Scientific Research (NWO). We acknowledge support from the EU COST Action MP0806.

  12. Hydroforming Of Patchwork Blanks — Numerical Modeling And Experimental Validation

    NASA Astrophysics Data System (ADS)

    Lamprecht, Klaus; Merklein, Marion; Geiger, Manfred

    2005-08-01

    In comparison to the commonly applied technology of tailored blanks the concept of patchwork blanks offers a number of additional advantages. Potential application areas for patchwork blanks in automotive industry are e.g. local reinforcements of automotive closures, structural reinforcements of rails and pillars as well as shock towers. But even if there is a significant application potential for patchwork blanks in automobile production, industrial realization of this innovative technique is decelerated due to a lack of knowledge regarding the forming behavior and the numerical modeling of patchwork blanks. Especially for the numerical simulation of hydroforming processes, where one part of the forming tool is replaced by a fluid under pressure, advanced modeling techniques are required to ensure an accurate prediction of the blanks' forming behavior. The objective of this contribution is to provide an appropriate model for the numerical simulation of patchwork blanks' forming processes. Therefore, different finite element modeling techniques for patchwork blanks are presented. In addition to basic shell element models a combined finite element model consisting of shell and solid elements is defined. Special emphasis is placed on the modeling of the weld seam. For this purpose the local mechanical properties of the weld metal, which have been determined by means of Martens-hardness measurements and uniaxial tensile tests, are integrated in the finite element models. The results obtained from the numerical simulations are compared to experimental data from a hydraulic bulge test. In this context the focus is laid on laser- and spot-welded patchwork blanks.

  13. Benchmarking accurate spectral phase retrieval of single attosecond pulses

    NASA Astrophysics Data System (ADS)

    Wei, Hui; Le, Anh-Thu; Morishita, Toru; Yu, Chao; Lin, C. D.

    2015-02-01

    A single extreme-ultraviolet (XUV) attosecond pulse or pulse train in the time domain is fully characterized if its spectral amplitude and phase are both determined. The spectral amplitude can be easily obtained from photoionization of simple atoms where accurate photoionization cross sections have been measured from, e.g., synchrotron radiations. To determine the spectral phase, at present the standard method is to carry out XUV photoionization in the presence of a dressing infrared (IR) laser. In this work, we examine the accuracy of current phase retrieval methods (PROOF and iPROOF) where the dressing IR is relatively weak such that photoelectron spectra can be accurately calculated by second-order perturbation theory. We suggest a modified method named swPROOF (scattering wave phase retrieval by omega oscillation filtering) which utilizes accurate one-photon and two-photon dipole transition matrix elements and removes the approximations made in PROOF and iPROOF. We show that the swPROOF method can in general retrieve accurate spectral phase compared to other simpler models that have been suggested. We benchmark the accuracy of these phase retrieval methods through simulating the spectrogram by solving the time-dependent Schrödinger equation numerically using several known single attosecond pulses with a fixed spectral amplitude but different spectral phases.

  14. A new approach to constructing efficient stiffly accurate EPIRK methods

    NASA Astrophysics Data System (ADS)

    Rainwater, G.; Tokman, M.

    2016-10-01

    The structural flexibility of the exponential propagation iterative methods of Runge-Kutta type (EPIRK) enables construction of particularly efficient exponential time integrators. While the EPIRK methods have been shown to perform well on stiff problems, all of the schemes proposed up to now have been derived using classical order conditions. In this paper we extend the stiff order conditions and the convergence theory developed for the exponential Rosenbrock methods to the EPIRK integrators. We derive stiff order conditions for the EPIRK methods and develop algorithms to solve them to obtain specific schemes. Moreover, we propose a new approach to constructing particularly efficient EPIRK integrators that are optimized to work with an adaptive Krylov algorithm. We use a set of numerical examples to illustrate the computational advantages that the newly constructed EPIRK methods offer compared to previously proposed exponential integrators.
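    The simplest member of the exponential-integrator family that EPIRK schemes generalize is exponential Euler, built from the propagation function φ₁(z) = (eᶻ - 1)/z. The sketch below shows the defining property that makes these methods attractive for stiff problems: the step is exact on the linear part even at step sizes where forward Euler diverges. It is a scalar illustration, not an EPIRK scheme:

```python
import math

def phi1(z):
    """phi1(z) = (exp(z) - 1) / z, the first exponential propagation function."""
    return (math.exp(z) - 1.0) / z if z != 0.0 else 1.0

def exp_euler(a, g, y0, h, nsteps):
    """Exponential Euler for y' = a*y + g(y)."""
    y = y0
    for _ in range(nsteps):
        y = y + h * phi1(h * a) * (a * y + g(y))
    return y

# Stiff linear test problem y' = a*y. With h*a = -5, forward Euler has
# growth factor 1 + h*a = -4 and blows up; exponential Euler reproduces
# exp(a*t) exactly on the linear part.
a, y0, h, n = -50.0, 1.0, 0.1, 10
y = exp_euler(a, lambda _: 0.0, y0, h, n)
print(y, math.exp(a * h * n))
```

Higher-order EPIRK schemes combine several φ-functions evaluated on Krylov subspaces, which is where the adaptive Krylov optimization of the paper enters.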

  15. Detection and accurate localization of harmonic chipless tags

    NASA Astrophysics Data System (ADS)

    Dardari, Davide

    2015-12-01

    We investigate the detection and localization properties of harmonic tags working at microwave frequencies. A two-tone interrogation signal and a dedicated signal processing scheme at the receiver are proposed to eliminate phase ambiguities caused by the short signal wavelength and to provide accurate distance/position estimation even in the presence of clutter and multipath. The theoretical limits on tag detection and localization accuracy are investigated starting from a concise characterization of harmonic backscattered signals. Numerical results show that accuracies in the order of centimeters are feasible within an operational range of a few meters in the RFID UHF band.
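    The reason a two-tone interrogation resolves phase ambiguity can be shown in a few lines: the phase difference between the two received tones wraps with the much longer synthetic wavelength c/Δf rather than the carrier wavelength. The sketch below assumes round-trip (backscatter) geometry and uses illustrative frequencies, not the paper's system parameters:

```python
import math

# Two-tone phase ranging: each tone's phase wraps every half carrier
# wavelength (~17 cm at UHF), but the phase DIFFERENCE between the
# tones wraps only every c/(2*df) = 15 m, giving an unambiguous range.
c = 3e8
f1, f2 = 868e6, 878e6                  # UHF-band tones, df = 10 MHz
d_true = 4.2                           # tag distance in meters

def rx_phase(f, d):
    """Received phase of a backscattered tone (round trip), wrapped to 2*pi."""
    return (2.0 * math.pi * f * 2.0 * d / c) % (2.0 * math.pi)

dphi = (rx_phase(f2, d_true) - rx_phase(f1, d_true)) % (2.0 * math.pi)
d_est = c * dphi / (4.0 * math.pi * (f2 - f1))
print(d_est)  # recovers d_true, which lies below the 15 m ambiguity range
```

The centimeter-level accuracies quoted in the abstract then come from how precisely dphi can be estimated in the presence of noise, clutter and multipath.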

  16. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.

    PubMed

    Fuchs, Franz G; Hjelmervik, Jon M

    2016-02-01

    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results. PMID:26731454

  18. Elliptic integrals: Symmetry and symbolic integration

    SciTech Connect

    Carlson, B.C.

    1997-12-31

    Computation of elliptic integrals, whether numerical or symbolic, has been aided by the contributions of Italian mathematicians. Tricomi had a strong interest in iterative algorithms for computing elliptic integrals and other special functions, and his writings on elliptic functions and elliptic integrals have taught these subjects to many modern readers (including the author). The theory of elliptic integrals began with Fagnano's duplication theorem, a generalization of which is now used iteratively for numerical computation in major software libraries. One of Lauricella's multivariate hypergeometric functions has been found to contain all elliptic integrals as special cases and has led to the introduction of symmetric canonical forms. These forms provide major economies in new integral tables and offer a significant advantage also for symbolic integration of elliptic integrals. Although partly expository, the present paper includes some new proofs and proposes a new procedure for symbolic integration.
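
    The duplication theorem mentioned above is what modern libraries iterate to compute Carlson's symmetric form R_F, to which all elliptic integrals reduce. A bare-bones sketch of that iteration (production implementations stop earlier and add a truncated Taylor correction instead of iterating to convergence):

```python
import math

def rf(x, y, z):
    """Carlson's symmetric elliptic integral R_F(x, y, z) via the
    duplication theorem: each pass quarters the spread of the arguments
    while leaving R_F invariant, and R_F(m, m, m) = 1/sqrt(m)."""
    for _ in range(100):
        sx, sy, sz = math.sqrt(x), math.sqrt(y), math.sqrt(z)
        lam = sx * sy + sy * sz + sz * sx
        x, y, z = (x + lam) / 4, (y + lam) / 4, (z + lam) / 4
        mu = (x + y + z) / 3
        if max(abs(x - mu), abs(y - mu), abs(z - mu)) < 1e-8 * mu:
            break
    # first-order deviations cancel by symmetry, so the error is O(spread^2)
    return 1.0 / math.sqrt(mu)

# Complete elliptic integral of the first kind: K(m) = R_F(0, 1 - m, 1)
k0 = rf(0.0, 1.0, 1.0)   # K(0) = pi/2
```

    The symmetric canonical form is what makes this work: one routine with a permutation-symmetric recurrence replaces separate algorithms for the Legendre forms.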

  19. Unconditionally stable, second-order accurate schemes for solid state phase transformations driven by mechano-chemical spinodal decomposition

    DOE PAGES

    Sagiyama, Koki; Rudraraju, Shiva; Garikipati, Krishna

    2016-09-13

    Here, we consider solid state phase transformations that are caused by free energy densities with domains of non-convexity in strain-composition space; we refer to the non-convex domains as mechano-chemical spinodals. The non-convexity with respect to composition and strain causes segregation into phases with different crystal structures. We work on an existing model that couples the classical Cahn-Hilliard model with Toupin’s theory of gradient elasticity at finite strains. Both systems are represented by fourth-order, nonlinear, partial differential equations. The goal of this work is to develop unconditionally stable, second-order accurate time-integration schemes, motivated by the need to carry out large scale computations of dynamically evolving microstructures in three dimensions. We also introduce reduced formulations naturally derived from these proposed schemes for faster computations that are still second-order accurate. Although our method is developed and analyzed here for a specific class of mechano-chemical problems, one can readily apply the same method to develop unconditionally stable, second-order accurate schemes for any problems for which free energy density functions are multivariate polynomials of solution components and component gradients. Apart from an analysis and construction of methods, we present a suite of numerical results that demonstrate the schemes in action.
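
    The phrase "unconditionally stable, second-order accurate" can be illustrated on the simplest possible stiff problem with the trapezoidal (Crank-Nicolson) rule; this is a generic illustration, not the mechano-chemical schemes constructed in the paper:

```python
import math

# y' = -lam*y. Forward Euler's amplification factor 1 - h*lam leaves the
# unit disk once h > 2/lam; the trapezoidal (Crank-Nicolson) factor
# (1 - h*lam/2)/(1 + h*lam/2) has magnitude < 1 for every h > 0.
lam, h, steps = 1000.0, 0.1, 50        # h*lam = 100: far past the explicit limit
g_fe = 1.0 - h * lam
g_cn = (1.0 - 0.5 * h * lam) / (1.0 + 0.5 * h * lam)

y_fe = y_cn = 1.0
for _ in range(steps):
    y_fe *= g_fe                       # explodes
    y_cn *= g_cn                       # decays, as the true solution does

# Second-order accuracy on a non-stiff case: halving h cuts the error ~4x.
def cn_error(h, t=1.0, lam=1.0):
    n = int(round(t / h))
    g = (1.0 - 0.5 * h * lam) / (1.0 + 0.5 * h * lam)
    return abs(g ** n - math.exp(-lam * t))

ratio = cn_error(0.01) / cn_error(0.005)   # ~4 for a second-order scheme
```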

  1. A Time-Accurate Upwind Unstructured Finite Volume Method for Compressible Flow with Cure of Pathological Behaviors

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Jorgenson, Philip C. E.

    2007-01-01

    A time-accurate, upwind, finite volume method for computing compressible flows on unstructured grids is presented. The method is second order accurate in space and time and yields high resolution in the presence of discontinuities. For efficiency, the Roe approximate Riemann solver with an entropy correction is employed. In the basic Euler/Navier-Stokes scheme, many concepts of high order upwind schemes are adopted: the surface flux integrals are carefully treated, a Cauchy-Kowalewski time-stepping scheme is used in the time-marching stage, and a multidimensional limiter is applied in the reconstruction stage. Even with these up-to-date improvements, however, the basic upwind scheme is still plagued by the so-called "pathological behaviors," e.g., the carbuncle phenomenon, the expansion shock, etc. A cure for these behaviors is presented that uses a very simple dissipation model while still preserving second order accuracy; the resulting scheme is referred to as the enhanced time-accurate upwind (ETAU) scheme in this paper. The unstructured grid capability provides flexibility for complex geometries, and the present ETAU Euler/Navier-Stokes scheme is capable of handling a broad spectrum of flow regimes, from high supersonic to subsonic at very low Mach number, making it appropriate for both CFD (computational fluid dynamics) and CAA (computational aeroacoustics). Numerous examples are included to demonstrate the robustness of the methods.
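
    The conservative flux-integral structure underlying such finite volume schemes fits in a few lines. Below is a deliberately simple first-order sketch for the inviscid Burgers equation with a local Lax-Friedrichs (Rusanov) flux, far cruder than the Roe-based ETAU scheme; it shows why cell averages updated by interface fluxes conserve the total mass to roundoff:

```python
import numpy as np

n, L, T = 200, 1.0, 0.3
dx = L / n
x = (np.arange(n) + 0.5) * dx                  # cell centers, periodic domain
u = np.sin(2 * np.pi * x) + 0.5                # smooth data that steepens into a shock

def rusanov_flux(ul, ur):
    # f(u) = u^2/2; Rusanov dissipation scaled by the local wave speed
    a = np.maximum(np.abs(ul), np.abs(ur))
    return 0.5 * (0.5 * ul**2 + 0.5 * ur**2) - 0.5 * a * (ur - ul)

t = 0.0
while t < T:
    dt = min(0.4 * dx / np.max(np.abs(u)), T - t)   # CFL-limited step
    f = rusanov_flux(u, np.roll(u, -1))             # flux at interface i+1/2
    u = u - dt / dx * (f - np.roll(f, 1))           # conservative update
    t += dt

mass = dx * u.sum()          # telescoping fluxes keep this at its initial value, 0.5*L
```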

  2. Integrated Urban Dispersion Modeling Capability

    SciTech Connect

    Kosovic, B; Chan, S T

    2003-11-03

    Numerical simulations represent a unique predictive tool for developing a detailed understanding of three-dimensional flow fields and associated concentration distributions from releases in complex urban settings (Britter and Hanna 2003). The accurate and timely prediction of the atmospheric dispersion of hazardous materials in densely populated urban areas is a critical homeland and national security need for emergency preparedness, risk assessment, and vulnerability studies. The main challenges in high-fidelity numerical modeling of urban dispersion are the accurate prediction of peak concentrations, spatial extent and temporal evolution of harmful levels of hazardous materials, and the incorporation of detailed structural geometries. Current computational tools do not include all the necessary elements to accurately represent hazardous release events in complex urban settings embedded in high-resolution terrain. Nor do they possess the computational efficiency required for many emergency response and event reconstruction applications. We are developing a new integrated urban dispersion modeling capability, able to efficiently predict dispersion in diverse urban environments for a wide range of atmospheric conditions, temporal and spatial scales, and release event scenarios. This new computational fluid dynamics capability includes adaptive mesh refinement and it can simultaneously resolve individual buildings and high-resolution terrain (including important vegetative and land-use features), treat complex building and structural geometries (e.g., stadiums, arenas, subways, airplane interiors), and cope with the full range of atmospheric conditions (e.g. stability). We are developing approaches for seamless coupling with mesoscale numerical weather prediction models to provide realistic forcing of the urban-scale model, which is critical to its performance in real-world conditions.

  3. Quadrature methods for periodic singular and weakly singular Fredholm integral equations

    NASA Technical Reports Server (NTRS)

    Sidi, Avram; Israeli, Moshe

    1988-01-01

    High-accuracy numerical quadrature methods for integrals of singular periodic functions are proposed. These methods are based on the appropriate Euler-Maclaurin expansions of trapezoidal rule approximations and their extrapolations. They are subsequently used to obtain accurate quadrature methods for the solution of singular and weakly singular Fredholm integral equations. Throughout the development the periodic nature of the problem plays a crucial role. Such periodic equations are used in the solution of planar elliptic boundary value problems such as those that arise in elasticity, potential theory, conformal mapping, and free surface flows. The use of the quadrature methods is demonstrated with numerical examples.
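
    The Euler-Maclaurin view explains why the trapezoidal rule is the natural starting point here: for a smooth periodic integrand every boundary-correction term cancels, so the plain rule converges faster than any power of h. A small check (the exact value of the integral of e^{cos t} over one period is 2π·I₀(1)):

```python
import math

def trapezoid_periodic(f, a, b, n):
    # For a (b - a)-periodic integrand the endpoint weights coincide,
    # so the rule is just (b - a) times the mean of n equispaced samples.
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

approx = trapezoid_periodic(lambda t: math.exp(math.cos(t)),
                            0.0, 2.0 * math.pi, 24)

# Reference: modified Bessel function I_0(1) from its power series
i0 = sum(0.25 ** k / math.factorial(k) ** 2 for k in range(30))
exact = 2.0 * math.pi * i0
```

    With only 24 samples the error is already at roundoff level, which is the spectral accuracy the Euler-Maclaurin expansion predicts for periodic problems.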

  4. NUMERICAL SOLUTION FOR THE POTENTIAL AND DENSITY PROFILE OF A THERMAL EQUILIBRIUM SHEET BEAM

    SciTech Connect

    Lund, Steven M.; Bazouin, Guillaume

    2011-04-01

    In a recent paper (S. M. Lund, A. Friedman, and G. Bazouin, "Sheet beam model for intense space-charge: with application to Debye screening and the distribution of particle oscillation frequencies in a thermal equilibrium beam," in press, Phys. Rev. Special Topics - Accel. and Beams, 2011), a 1D sheet beam model was extensively analyzed. In this complementary paper, we present details of a numerical procedure developed to construct the self-consistent electrostatic potential and density profile of a thermal equilibrium sheet beam distribution. This procedure effectively circumvents pathologies which can prevent use of standard numerical integration techniques when space-charge intensity is high. The procedure employs transformations, is straightforward to implement with standard numerical methods, and produces accurate solutions that can be applied to thermal equilibria with arbitrarily strong space-charge intensity up to the applied focusing limit.

  6. Numerical Simulation of a High Mach Number Jet Flow

    NASA Technical Reports Server (NTRS)

    Hayder, M. Ehtesham; Turkel, Eli; Mankbadi, Reda R.

    1993-01-01

    The recent efforts to develop accurate numerical schemes for transition and turbulent flows are motivated, among other factors, by the need for accurate prediction of flow noise. The success of developing a high speed civil transport plane (HSCT) is contingent upon our understanding and suppression of the jet exhaust noise. The radiated sound can be directly obtained by solving the full (time-dependent) compressible Navier-Stokes equations; however, this requires computational storage that is beyond currently available machines. This difficulty can be overcome by limiting the solution domain to the near field where the jet is nonlinear and then using an acoustic analogy (e.g., Lighthill's) to relate the far-field noise to the near-field sources. The latter requires obtaining the time-dependent flow field. The other difficulty in aeroacoustics computations is that at high Reynolds numbers the turbulent flow has a large range of scales. Direct numerical simulations (DNS) cannot obtain all the scales of motion at the high Reynolds numbers of technological interest. However, it is believed that the large scale structure is more efficient than the small-scale structure in radiating noise. Thus, one can model the small scales and calculate the acoustically active scales. The large scale structure in the noise-producing initial region of the jet can be viewed as wavelike in nature; the net radiated sound is the residual after cancellation upon integration over space. As such, aeroacoustics computations are highly sensitive to errors in computing the sound sources. It is therefore essential to use a high-order numerical scheme to predict the flow field. The present paper presents the first step in an ongoing effort to predict jet noise. The emphasis here is on accurate prediction of the unsteady flow field. We solve the full time-dependent Navier-Stokes equations by a high order finite difference method. Time-accurate spatial simulations of both plane and axisymmetric jets are presented. Jet Mach

  7. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.

  8. Long-term dynamic modeling of tethered spacecraft using nodal position finite element method and symplectic integration

    NASA Astrophysics Data System (ADS)

    Li, G. Q.; Zhu, Z. H.

    2015-12-01

    Dynamic modeling of tethered spacecraft with consideration of the elasticity of the tether is prone to numerical instability and error accumulation over long-term numerical integration. This paper addresses these challenges by proposing a globally stable numerical approach with the nodal position finite element method (NPFEM) and the implicit, symplectic, 2-stage, 4th order Gauss-Legendre Runge-Kutta time integration. The NPFEM eliminates the numerical error accumulation by using the position instead of the displacement of the tether as the state variable, while the symplectic integration enforces the energy and momentum conservation of the discretized finite element model to ensure the global stability of the numerical solution. The effectiveness and robustness of the proposed approach are assessed on an elastic pendulum problem, whose dynamic response resembles that of tethered spacecraft, in comparison with commonly used time integrators such as the classical 4th order Runge-Kutta scheme and other families of non-symplectic Runge-Kutta schemes. Numerical results show that the proposed approach is accurate and the energy of the corresponding numerical model is conserved over long-term numerical integration. Finally, the proposed approach is applied to the dynamic modeling of the deorbiting process of tethered spacecraft over a long period.
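
    The contrast the abstract describes is easy to reproduce on the smallest possible test problem. For the harmonic oscillator, the implicit midpoint rule (the 1-stage Gauss-Legendre method, symplectic, order 2) reduces to a Cayley transform that conserves the quadratic energy exactly, while classical RK4 dissipates it secularly. This is a generic sketch, not the NPFEM tether model:

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # harmonic oscillator: y' = J y
h, steps = 0.1, 20000
energy = lambda y: 0.5 * (y @ y)

# Implicit midpoint: for a linear system the implicit stage is solved
# once and for all, giving the Cayley transform below.
I = np.eye(2)
M = np.linalg.solve(I - 0.5 * h * J, I + 0.5 * h * J)

def rk4_step(y):
    k1 = J @ y
    k2 = J @ (y + 0.5 * h * k1)
    k3 = J @ (y + 0.5 * h * k2)
    k4 = J @ (y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

y_mid = y_rk4 = np.array([1.0, 0.0])
for _ in range(steps):
    y_mid = M @ y_mid
    y_rk4 = rk4_step(y_rk4)

drift_mid = abs(energy(y_mid) - 0.5)   # roundoff-level, no secular growth
drift_rk4 = abs(energy(y_rk4) - 0.5)   # slow but steady energy decay
```

    Over 20000 steps the midpoint energy error stays at roundoff level while RK4's grows linearly in time, which is exactly the long-term behavior that motivates symplectic integration for tether dynamics.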

  9. Numerical Propulsion System Simulation

    NASA Technical Reports Server (NTRS)

    Naiman, Cynthia

    2006-01-01

    The NASA Glenn Research Center, in partnership with the aerospace industry, other government agencies, and academia, is leading the effort to develop an advanced multidisciplinary analysis environment for aerospace propulsion systems called the Numerical Propulsion System Simulation (NPSS). NPSS is a framework for performing analysis of complex systems. The initial development of NPSS focused on the analysis and design of airbreathing aircraft engines, but the resulting NPSS framework may be applied to any system, for example: aerospace, rockets, hypersonics, power and propulsion, fuel cells, ground based power, and even human system modeling. NPSS provides increased flexibility for the user, which reduces the total development time and cost. It is currently being extended to support the NASA Aeronautics Research Mission Directorate Fundamental Aeronautics Program and the Advanced Virtual Engine Test Cell (AVETeC). NPSS focuses on the integration of multiple disciplines such as aerodynamics, structure, and heat transfer with numerical zooming on component codes. Zooming is the coupling of analyses at various levels of detail. NPSS development includes capabilities to facilitate collaborative engineering. The NPSS will provide improved tools to develop custom components and to use capability for zooming to higher fidelity codes, coupling to multidiscipline codes, transmitting secure data, and distributing simulations across different platforms. These powerful capabilities extend NPSS from a zero-dimensional simulation tool to a multi-fidelity, multidiscipline system-level simulation tool for the full development life cycle.

  10. High order integral equation method for diffraction gratings.

    PubMed

    Lu, Wangtao; Lu, Ya Yan

    2012-05-01

    Conventional integral equation methods for diffraction gratings require lattice sum techniques to evaluate quasi-periodic Green's functions. The boundary integral equation Neumann-to-Dirichlet map (BIE-NtD) method in Wu and Lu [J. Opt. Soc. Am. A 26, 2444 (2009)], [J. Opt. Soc. Am. A 28, 1191 (2011)] is a recently developed integral equation method that avoids the quasi-periodic Green's functions and is relatively easy to implement. In this paper, we present a number of improvements for this method, including a revised formulation that is more stable numerically, and more accurate methods for computing tangential derivatives along material interfaces and for matching boundary conditions with the homogeneous top and bottom regions. Numerical examples indicate that the improved BIE-NtD map method achieves a high order of accuracy for in-plane and conical diffractions of dielectric gratings.

  11. Groundtruth approach to accurate quantitation of fluorescence microarrays

    SciTech Connect

    Mascio-Kegelmeyer, L; Tomascik-Cheeseman, L; Burnett, M S; van Hummelen, P; Wyrobek, A J

    2000-12-01

    To more accurately measure fluorescent signals from microarrays, we calibrated our acquisition and analysis systems by using groundtruth samples comprised of known quantities of red and green gene-specific DNA probes hybridized to cDNA targets. We imaged the slides with a full-field, white light CCD imager and analyzed them with our custom analysis software. Here we compare, for multiple genes, results obtained with and without preprocessing (alignment, color crosstalk compensation, dark field subtraction, and integration time). We also evaluate the accuracy of various image processing and analysis techniques (background subtraction, segmentation, quantitation and normalization). This methodology calibrates and validates our system for accurate quantitative measurement of microarrays. Specifically, we show that preprocessing the images produces results significantly closer to the known ground-truth for these samples.

  12. Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping

    NASA Astrophysics Data System (ADS)

    Rehak, M.; Skaloud, J.

    2015-08-01

    In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enables accurate corridor mapping. The design of the platform is based on widely available model components to which we integrate an open-source autopilot, a customized mass-market camera, and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.

  13. Asymptotic expansions of the generalized Epstein-Hubbell integral

    NASA Astrophysics Data System (ADS)

    Lopez, Jose L.; Ferreira, Chelo

    2002-06-01

    The generalized Epstein-Hubbell integral recently introduced by Kalla & Tuan (Comput. Math. Applic. 32, 1996) is considered for values of the variable k close to its upper limit k = 1. A distributional approach is used to derive two convergent expansions of this integral in increasing powers of 1 - k². For certain values of the parameters, one of these expansions also involves a logarithmic term in the asymptotic variable 1 - k². The coefficients of these expansions are given in terms of the Appell function and its derivative. All the expansions are accompanied by an error bound at any order of the approximation. Numerical experiments show that this bound is remarkably accurate.

  14. Integrative neuroscience.

    PubMed

    Gordon, Evian

    2003-07-01

    A fundamental impediment to an "Integrative Neuroscience" is the sense that scientists building models at one particular scale often see that scale as the epicentre of all brain function. This fragmentation has begun to change in a very distinctive way. Multidisciplinary efforts have provided the impetus to break down the boundaries and encourage a freer exchange of information across disciplines and scales. Despite huge deficits of knowledge, sufficient facts about the brain already exist for an Integrative Neuroscience to begin to lift us clear of the jungle of detail and shed light upon the workings of the brain as a system. Integrations of brain theory can be tested using judicious paradigm designs and measurement of temporospatial activity reflected in brain imaging technologies. However, realistically testing these new hypotheses requires consistent findings of the normative variability in very large numbers of control subjects, coupled with high sensitivity and specificity of findings in psychiatric disorders. Most importantly, these findings need to be analyzed and modeled with respect to the fundamental mechanisms underlying these measures. Without this convergence of theory, databases, and methodology (including physiologically realistic numerical models across scales), the clinical utility of brain imaging technologies in psychiatry will be significantly impeded. The examples provided in this paper of integration of theory, temporospatial integration of neuroimaging technologies, and a numerical simulation of brain function bear testimony to the ongoing conversion of an Integrative Neuroscience from exemplar status into reality.

  15. Accurate thermoelastic tensor and acoustic velocities of NaCl

    NASA Astrophysics Data System (ADS)

    Marcondes, Michel L.; Shukla, Gaurav; da Silveira, Pedro; Wentzcovitch, Renata M.

    2015-12-01

    Despite the importance of the thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  17. Numerical Relativity and Astrophysics

    NASA Astrophysics Data System (ADS)

    Lehner, Luis; Pretorius, Frans

    2014-08-01

    Throughout the Universe many powerful events are driven by strong gravitational effects that require general relativity to fully describe them. These include compact binary mergers, black hole accretion, and stellar collapse, where velocities can approach the speed of light and extreme gravitational fields (Φ_Newt/c² ≃ 1) mediate the interactions. Many of these processes trigger emission across a broad range of the electromagnetic spectrum. Compact binaries further source strong gravitational wave emission that could be directly detected in the near future. This feat will open up a gravitational wave window into our Universe and revolutionize our understanding of it. Describing these phenomena requires general relativity, and—where dynamical effects strongly modify gravitational fields—the full Einstein equations coupled to matter sources. Numerical relativity is a field within general relativity concerned with studying such scenarios that cannot be accurately modeled via perturbative or analytical calculations. In this review, we examine results obtained within this discipline, with a focus on its impact in astrophysics.

  18. Numerical methods for molecular dynamics

    SciTech Connect

    Skeel, R.D.

    1991-01-01

    This report summarizes our research progress to date on the use of multigrid methods for three-dimensional elliptic partial differential equations, with particular emphasis on application to the Poisson-Boltzmann equation of molecular biophysics. This research is motivated by the need for fast and accurate numerical solution techniques for three-dimensional problems arising in physics and engineering. In many applications these problems must be solved repeatedly, and the extremely large number of discrete unknowns required to accurately approximate solutions to partial differential equations in three-dimensional regions necessitates the use of efficient solution methods. This situation makes clear the importance of developing methods which are of optimal order (or nearly so), meaning that the number of operations required to solve the discrete problem is on the order of the number of discrete unknowns. Multigrid methods are generally regarded as being in this class of methods, and are in fact provably optimal order for an increasingly large class of problems. The fundamental goal of this research is to develop a fast and accurate numerical technique, based on multi-level principles, for the solutions of the Poisson-Boltzmann equation of molecular biophysics and similar equations occurring in other applications. An outline of the report is as follows. We first present some background material, followed by a survey of the literature on the use of multigrid methods for solving problems similar to the Poisson-Boltzmann equation. A short description of the software we have developed so far is then given, and numerical results are discussed. Finally, our research plans for the coming year are presented.
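
    The optimal-order behavior described above can be seen in miniature in one dimension. Below is a sketch of a multigrid V-cycle for -u'' = f on (0,1) with damped-Jacobi smoothing, full-weighting restriction, and linear interpolation (illustrative only; the report concerns 3-D Poisson-Boltzmann problems):

```python
import numpy as np

def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
    # damped Jacobi relaxation on interior points of -u'' = f
    for _ in range(sweeps):
        u[1:-1] += omega * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def vcycle(u, f, h):
    if len(u) <= 3:
        return jacobi(u, f, h, 50)              # coarsest grid: just relax
    u = jacobi(u, f, h, 3)                      # pre-smooth
    r = residual(u, f, h)
    rc = np.zeros((len(u) + 1) // 2)            # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = vcycle(np.zeros_like(rc), rc, 2 * h)   # coarse-grid correction
    e = np.zeros_like(u)                        # linear interpolation back
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u += e
    return jacobi(u, f, h, 3)                   # post-smooth

n = 129                                         # 2^7 + 1 points
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)              # so that u_exact = sin(pi*x)
u = np.zeros(n)
for _ in range(8):
    u = vcycle(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi * x)))     # down to discretization error
```

    Each cycle costs O(N) work and reduces the algebraic error by a grid-independent factor, which is the sense in which multigrid is an optimal-order method.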

  19. Updated Integrated Mission Program

    NASA Technical Reports Server (NTRS)

    Dauro, Vincent A., Sr.

    2003-01-01

    Integrated Mission Program (IMP) is a computer program for simulating spacecraft missions around the Earth, Moon, Mars, and/or other large bodies. IMP solves the differential equations of motion by use of a Runge-Kutta numerical-integration algorithm. Users control missions through selection from a large menu of events and maneuvers. Mission profiles, time lines, propellant requirements, feasibility analyses, and perturbation analyses can be computed quickly and accurately. A prior version of IMP, written in FORTRAN 77, was reported in Program Simulates Spacecraft Missions (MFS-28606), NASA Tech Briefs, Vol. 17, No. 4 (April 1993), page 60. The present version, written in double-precision Lahey FORTRAN 90, incorporates a number of improvements over the prior version. Some of the improvements modernize the code to take advantage of today's greater central-processing-unit speeds. Other improvements render the code more modular; provide additional input, output, and debugging capabilities; and add to the variety of maneuvers, events, and means of propulsion that can be simulated. The IMP user manuals (of which there are now ten, each addressing a different aspect of the code and its use) have been updated accordingly.
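
    The numerical core of such a mission simulator, a Runge-Kutta algorithm applied to the equations of motion, can be sketched in canonical units. This toy two-body propagator (not IMP's actual force models, events, or maneuver logic) checks that a circular orbit closes after one period 2π:

```python
import math

def deriv(s):
    # planar two-body problem in canonical units (mu = 1)
    x, y, vx, vy = s
    d3 = (x * x + y * y) ** 1.5
    return (vx, vy, -x / d3, -y / d3)

def rk4_step(s, h):
    k1 = deriv(s)
    k2 = deriv(tuple(si + 0.5 * h * ki for si, ki in zip(s, k1)))
    k3 = deriv(tuple(si + 0.5 * h * ki for si, ki in zip(s, k2)))
    k4 = deriv(tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + (h / 6.0) * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

n = 2000
h = 2.0 * math.pi / n                  # one orbital period in n steps
s = (1.0, 0.0, 0.0, 1.0)               # circular orbit of radius 1
for _ in range(n):
    s = rk4_step(s, h)

closure = math.hypot(s[0] - 1.0, s[1])   # distance from the starting point
```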

  20. Pre-Stall Behavior of a Transonic Axial Compressor Stage via Time-Accurate Numerical Simulation

    NASA Technical Reports Server (NTRS)

    Chen, Jen-Ping; Hathaway, Michael D.; Herrick, Gregory P.

    2008-01-01

    CFD calculations using high-performance parallel computing were conducted to simulate the pre-stall flow of a transonic compressor stage, NASA compressor Stage 35. The simulations were run with a full-annulus grid that models the 3D, viscous, unsteady blade row interaction without the need for an artificial inlet distortion to induce stall. The simulation demonstrates the development of the rotating stall from the growth of instabilities. Pressure-rise performance and pressure traces are compared with published experimental data before the study of flow evolution prior to the rotating stall. Spatial FFT analysis of the flow indicates a rotating long-length disturbance of one rotor circumference, which is followed by a spike-type breakdown. The analysis also links the long-length wave disturbance with the initiation of the spike inception. The spike instabilities occur when the trajectory of the tip clearance flow becomes perpendicular to the axial direction. When approaching stall, the passage shock changes from a single oblique shock to a dual-shock, which distorts the perpendicular trajectory of the tip clearance vortex but shows no evidence of flow separation that may contribute to stall.

  1. Numerical Methods of Computational Electromagnetics for Complex Inhomogeneous Systems

    SciTech Connect

    Cai, Wei

    2014-05-15

Understanding electromagnetic phenomena is key in many scientific investigations and engineering designs, such as solar cell design, the study of biological ion channels for disease, and the creation of clean fusion energy, among other things. The objectives of the project are to develop high order numerical methods to simulate evanescent electromagnetic waves occurring in plasmon solar cells and biological ion channels, where local field enhancement within random media in the former and long range electrostatic interactions in the latter pose major challenges for accurate and efficient numerical computation. We have accomplished these objectives by developing high order numerical methods for solving Maxwell equations, such as high order finite element bases for discontinuous Galerkin methods, a well-conditioned Nedelec edge element method, a divergence free finite element basis for MHD, and fast integral equation methods for layered media. These methods can be used to model the complex local field enhancement in plasmon solar cells. On the other hand, to treat long range electrostatic interactions in ion channels, we have developed an image charge based method for a hybrid model combining atomistic electrostatics and continuum Poisson-Boltzmann electrostatics. Such a hybrid model will speed up molecular dynamics simulations of transport in biological ion channels.

  2. Numerical Simulation of Cocontinuous Blends

    NASA Astrophysics Data System (ADS)

    Kim, Junseok; Lowengrub, John

    2004-11-01

In strongly sheared emulsions, experiments (Galloway and Macosko 2002) have shown that systems consisting of one continuous (matrix) and one dispersed (drops) phase may undergo a coalescence cascade leading to a system in which both phases are continuous (sponge-like). Such configurations may have desirable mechanical and electrical properties and thus have wide-ranging applications. Using a new and improved diffuse-interface method (accurate surface tension force formulation, volume preservation, and an efficient nonlinear multigrid solver) developed by Kim and Lowengrub 2004, we perform numerical simulations of cocontinuous blends and determine the conditions for their formation. We also characterize their rheology.

  3. Using XTE as Part of the IPN to Derive Accurate GRB Locations

    NASA Technical Reports Server (NTRS)

    Barthelmy, S.

    1998-01-01

    The objective of this final report was to integrate the Rossi X-Ray Timing Explorer PCA into the 3rd Interplanetary Network of gamma-ray burst detectors, to allow more bursts to be detected and accurately localized. Although the necessary software was implemented to do this at Goddard and at UC Berkeley, several factors made a full integration impossible or impractical.

  4. Numerical simulation of heat exchanger

    SciTech Connect

    Sha, W.T.

    1985-01-01

    Accurate and detailed knowledge of the fluid flow field and thermal distribution inside a heat exchanger becomes invaluable as a large, efficient, and reliable unit is sought. This information is needed to provide proper evaluation of the thermal and structural performance characteristics of a heat exchanger. It is to be noted that an analytical prediction method, when properly validated, will greatly reduce the need for model testing, facilitate interpolating and extrapolating test data, aid in optimizing heat-exchanger design and performance, and provide scaling capability. Thus tremendous savings of cost and time are realized. With the advent of large digital computers and advances in the development of computational fluid mechanics, it has become possible to predict analytically, through numerical solution, the conservation equations of mass, momentum, and energy for both the shellside and tubeside fluids. The numerical modeling technique will be a valuable, cost-effective design tool for development of advanced heat exchangers.

  5. Modified chemiluminescent NO analyzer accurately measures NOX

    NASA Technical Reports Server (NTRS)

    Summers, R. L.

    1978-01-01

Installation of molybdenum nitric oxide (NO)-to-higher oxides of nitrogen (NOx) converter in chemiluminescent gas analyzer and use of air purge allow accurate measurements of NOx in exhaust gases containing as much as thirty percent carbon monoxide (CO). Measurements using conventional analyzer are highly inaccurate for NOx if as little as five percent CO is present. In modified analyzer, molybdenum has high tolerance to CO, and air purge substantially quenches NOx destruction. In tests, modified chemiluminescent analyzer accurately measured NO and NOx concentrations for over 4 months with no degradation in performance.

  6. A critical comparison of the numerical solution of the 1D filtered Vlasov-Poisson equation

    NASA Astrophysics Data System (ADS)

    Viñas, A. F.; Klimas, A. J.

    2003-04-01

    We present a comparison of the numerical solution of the filtered Vlasov-Poisson system of equations using the Fourier-Fourier and the Flux-Balance-MacCormack methods in the electrostatic, non-relativistic case. We show that the splitting method combined with the Flux-Balance-MacCormack scheme provides an efficient and accurate scheme for integrating the filtered Vlasov-Poisson system in their self-consistent field. Finally we present various typical problems of interest in plasma physics research which can be studied with the scheme presented here.

  7. Numerical time-dependent solutions of the Schrödinger equation with piecewise continuous potentials

    NASA Astrophysics Data System (ADS)

    van Dijk, Wytse

    2016-06-01

    We consider accurate numerical solutions of the one-dimensional time-dependent Schrödinger equation when the potential is piecewise continuous. Spatial step sizes are defined for each of the regions between the discontinuities and a matching condition at the boundaries of the regions is employed. The Numerov method for spatial integration is particularly appropriate to this approach. By employing Padé approximants for the time-evolution operator, we obtain solutions with significantly improved precision without increased CPU time. This approach is also appropriate for adaptive changes in spatial step size even when there is no discontinuity of the potential.
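The Numerov recurrence mentioned above can be illustrated on a test equation with a known solution. The step size and the test equation y'' = -y (exact solution sin x) are assumptions for demonstration, not the paper's setup:

```python
import numpy as np

# Numerov recurrence (fourth order in h) for y'' = -g(x) y, demonstrated on
# g(x) = 1, whose exact solution with y(0) = 0, y'(0) = 1 is y = sin(x).
h = 0.01
x = np.arange(0.0, np.pi / 2 + h, h)
g = np.ones_like(x)

y = np.zeros_like(x)
y[0], y[1] = 0.0, np.sin(h)  # two starting values seed the three-term recurrence

c = 1.0 + (h ** 2 / 12.0) * g  # Numerov weights (constant here since g = 1)
for n in range(1, len(x) - 1):
    y[n + 1] = (2.0 * (1.0 - 5.0 * h ** 2 * g[n] / 12.0) * y[n]
                - c[n - 1] * y[n - 1]) / c[n + 1]

err = np.max(np.abs(y - np.sin(x)))
print(err)  # fourth-order accuracy: well below 1e-8 at h = 0.01
```

The paper applies the same three-term recurrence region by region, with matching conditions at the potential's discontinuities.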

  8. A parallel high-order accurate finite element nonlinear Stokes ice sheet model and benchmark experiments

    SciTech Connect

    Leng, Wei; Ju, Lili; Gunzburger, Max; Price, Stephen; Ringler, Todd

    2012-01-01

    The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.

  9. Entropy Splitting for High Order Numerical Simulation of Compressible Turbulence

    NASA Technical Reports Server (NTRS)

    Sandham, N. D.; Yee, H. C.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    A stable high order numerical scheme for direct numerical simulation (DNS) of shock-free compressible turbulence is presented. The method is applicable to general geometries. It contains no upwinding, artificial dissipation, or filtering. Instead the method relies on the stabilizing mechanisms of an appropriate conditioning of the governing equations and the use of compatible spatial difference operators for the interior points (interior scheme) as well as the boundary points (boundary scheme). An entropy splitting approach splits the inviscid flux derivatives into conservative and non-conservative portions. The spatial difference operators satisfy a summation by parts condition leading to a stable scheme (combined interior and boundary schemes) for the initial boundary value problem using a generalized energy estimate. A Laplacian formulation of the viscous and heat conduction terms on the right hand side of the Navier-Stokes equations is used to ensure that any tendency to odd-even decoupling associated with central schemes can be countered by the fluid viscosity. A special formulation of the continuity equation is used, based on similar arguments. The resulting methods are able to minimize spurious high frequency oscillation producing nonlinear instability associated with pure central schemes, especially for long time integration simulation such as DNS. For validation purposes, the methods are tested in a DNS of compressible turbulent plane channel flow at a friction Mach number of 0.1 where a very accurate turbulence data base exists. It is demonstrated that the methods are robust in terms of grid resolution, and in good agreement with incompressible channel data, as expected at this Mach number. Accurate turbulence statistics can be obtained with moderate grid sizes. Stability limits on the range of the splitting parameter are determined from numerical tests.

  10. Effects of polydispersity and anisotropy in colloidal and protein solutions: an integral equation approach.

    PubMed

    Gazzillo, Domenico; Giacometti, Achille

    2011-12-01

Application of integral equation theory to complex fluids is reviewed, with particular emphasis on the effects of polydispersity and anisotropy on their structural and thermodynamic properties. Both analytical and numerical solutions of integral equations are discussed within the context of a set of minimal potential models that have been widely used in the literature. While other popular theoretical tools, such as numerical simulations and density functional theory, are superior for quantitative and accurate predictions, we argue that integral equation theory still provides, as in simple fluids, an invaluable technique that is able to capture the main essential features of a complex system at a much lower computational cost. In addition, it can provide a detailed description of the angular dependence in an arbitrary frame, unlike numerical simulations, where this information is frequently hampered by insufficient statistics. Applications to colloidal mixtures, globular proteins and patchy colloids are discussed within a unified framework.

  11. Traction boundary integral equation (BIE) formulations and applications to nonplanar and multiple cracks

    NASA Astrophysics Data System (ADS)

    Cruse, Thomas A.; Novati, Giorgio

    The hypersingular Somigliana identity for the stress tensor is used as the basis for a traction boundary integral equation (BIE) suitable for numerical application to nonplanar cracks and to multiple cracks. The variety of derivations of hypersingular traction BIE formulations is reviewed and extended for this problem class. Numerical implementation is accomplished for piecewise-flat models of curved cracks, using local coordinate system integrations. A nonconforming, triangular boundary element implementation of the integral equations is given. Demonstration problems include several three-dimensional approximations to plane-strain fracture mechanics problems, for which exact or highly accurate numerical solutions exist. In all cases, the use of a piecewise-flat traction BIE implementation is shown to give excellent results.

  12. The Numerical Analysis of a Turbulent Compressible Jet. Degree awarded by Ohio State Univ., 2000

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2001-01-01

A numerical method to simulate high Reynolds number jet flows was formulated and applied to gain a better understanding of the flow physics. Large-eddy simulation was chosen as the most promising approach to model the turbulent structures due to its compromise between accuracy and computational expense. The filtered Navier-Stokes equations were developed including a total energy form of the energy equation. Subgrid scale models for the momentum and energy equations were adapted from compressible forms of Smagorinsky's original model. The effect of using disparate temporal and spatial accuracy in a numerical scheme was discovered through one-dimensional model problems and a new uniformly fourth-order accurate numerical method was developed. Results from two- and three-dimensional validation exercises show that the code accurately reproduces both viscous and inviscid flows. Numerous axisymmetric jet simulations were performed to investigate the effect of grid resolution, numerical scheme, exit boundary conditions and subgrid scale modeling on the solution, and the results were used to guide the three-dimensional calculations. Three-dimensional calculations of a Mach 1.4 jet showed that this LES simulation accurately captures the physics of the turbulent flow. The agreement with experimental data was relatively good and much better than results in the current literature. Turbulent intensities indicate that the turbulent structures at this level of modeling are not isotropic, and this information could lend itself to the development of improved subgrid scale models for LES and turbulence models for RANS simulations. A two point correlation technique was used to quantify the turbulent structures. Two point space correlations were used to obtain a measure of the integral length scale, which proved to be approximately 1/2 D(sub j). Two point space-time correlations were used to obtain the convection velocity for the turbulent structures.
This velocity ranged from 0.57 to
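The two-point-correlation estimate of an integral length scale can be sketched as follows. The synthetic velocity field and its prescribed correlation length are assumptions for illustration, not the jet data:

```python
import numpy as np

# Illustrative sketch (synthetic field, not the jet data): estimate an integral
# length scale from the two-point correlation R(r) = <u'(x)u'(x+r)> / <u'^2>,
# integrated up to its first zero crossing.
rng = np.random.default_rng(0)
n, dx, L_true = 16384, 0.01, 0.5

# Random field with a prescribed Gaussian correlation, built by spectral
# filtering; its theoretical integral scale is L_true * sqrt(pi/2) ~ 0.63.
k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dx)
filt = np.exp(-(k * L_true) ** 2 / 4.0)  # sqrt of a Gaussian power spectrum
u = np.fft.irfft(filt * np.fft.rfft(rng.standard_normal(n)))
u -= u.mean()

# Circular two-point correlation via FFT, normalized so R(0) = 1.
R = np.fft.irfft(np.abs(np.fft.rfft(u)) ** 2)
R /= R[0]

first_zero = int(np.argmax(R[: n // 2] <= 0.0))  # index of first zero crossing
L_int = float(R[:first_zero].sum() * dx)         # rectangle-rule integral of R
print(L_int)
```

With a single finite sample the estimate lands near the theoretical value only up to sampling noise, which is why such statistics are averaged over time and homogeneous directions in practice.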

  13. Numerical simulation of high speed chemically reacting flows

    NASA Astrophysics Data System (ADS)

    Schuricht, Scott Richard

A single-step, second-order accurate flux-difference-splitting method has been developed for solving unsteady quasi-one-dimensional and two-dimensional flows of multispecies fluids with finite rate chemistry. A systematic method for incorporating the source term effects into the wave strength parameters of Roe's linearized approximate Riemann solver is presented that is consistent with characteristic theory. The point implicit technique is utilized to achieve second-order time accuracy of the local area source term. The stiffness associated with the chemical reactions is removed by implicitly integrating the kinetics system using the LSODE package. From the implicit integration, values of the species production rates are developed and incorporated into the flux-difference-splitting framework using a source term projection and splitting technique that preserves the upwind nature of source terms. Numerous validation studies are presented to illustrate the capability of the numerical method. Shock tube and converging-diverging nozzle cases show the method is second-order accurate in space and time for one-dimensional flows. A supersonic source flow case and a subsonic sink flow case show the method is second-order spatially accurate for two-dimensional flows. Static combustion and steady supersonic combustion cases illustrate the ability of the method to accurately capture the ignition delay for hydrogen-air mixtures. Demonstration studies are presented to illustrate the capabilities of the method. One-dimensional flow in a shock tube predicts species dissociation behind the main shock wave. One-dimensional flow in supersonic nozzles predicts the well-known chemical freezing effect in an expanding flow. Two-dimensional cases consisted of a model of a scramjet combustor and a rocket motor nozzle. A parametric study was performed on a model of a scramjet combustor. The parameters studied were: wall angle, inlet Mach number, inlet temperature, and inlet equivalence ratio
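Why the chemistry must be integrated implicitly can be seen on a scalar stiff model problem. The rate constant and step size below are illustrative, and backward Euler stands in for the far more sophisticated LSODE integrator used in the work:

```python
# Scalar stiff model problem y' = -k*y: with k = 1000 and dt = 0.01 the
# explicit Euler amplification factor is (1 - k*dt) = -9, so the explicit
# update blows up, while the backward Euler factor 1/(1 + k*dt) = 1/11
# remains bounded. (Backward Euler is a stand-in; the paper uses LSODE.)
k, dt, steps = 1000.0, 0.01, 50
y_explicit, y_implicit = 1.0, 1.0
for _ in range(steps):
    y_explicit = (1.0 - k * dt) * y_explicit  # |factor| = 9: unstable
    y_implicit = y_implicit / (1.0 + k * dt)  # factor = 1/11: stable
print(abs(y_explicit), y_implicit)  # explicit iterate explodes, implicit decays
```

An implicit step of the kinetics thus tolerates the flow solver's convective time step even when reaction time scales are orders of magnitude shorter.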

  14. Reduced integral order 3D scalar wave integral equation Derivation and BEM approach

    NASA Astrophysics Data System (ADS)

    Lee, HyunSuk

The Boundary Element Method (BEM) is a numerical method to solve partial differential equations (PDEs), which is derived from the integral equation (IE) that can be developed from certain PDEs. Among IEs, the 3D transient wave integral equation has a very special property which distinguishes it from other integral equations: the Dirac delta and its derivative delta′ appear in the fundamental solution (or kernel function). These delta and delta′ generalized functions have continuity C^(-2) and C^(-3), respectively, and become a major hurdle for BEM implementation, because many numerical methods, including BEM, are based on the idea of continuity. More specifically, the integrands (kernel-shape function products) in the 3D transient wave IE become discontinuous (C^(-2) and C^(-3)) and make numerical integration difficult. There are several existing approaches to overcome the delta difficulty, but none use the character of the Dirac delta to cancel the integral. In this dissertation, a new method called the "Reduced order wave integral equation (Reduced IE)" is developed to deal with the difficulty in the 3D transient wave problem. In this approach, the sifting properties of delta and delta′ are used to cancel an integration. As a result, smooth integrands are derived and the integral orders are reduced by one. Smooth integrands result in more efficient and accurate numerical integration. In addition, there is no more coupling between the space-element size and time-step size. A non-zero initial condition (IC) can be considered also. Furthermore, space integrals need to be performed once, not per time-step. All of this dramatically reduces the computational requirement. As a result, the computation order for both time and space is reduced by 1 and one obtains an O(M N^2) method, where M is the number of time steps and N is the number of spatial nodes on the boundary of the problem domain. A numerical approach to deal with the reduced IE is also suggested, and a simple

  15. Can Appraisers Rate Work Performance Accurately?

    ERIC Educational Resources Information Center

    Hedge, Jerry W.; Laue, Frances J.

    The ability of individuals to make accurate judgments about others is examined and literature on this subject is reviewed. A wide variety of situational factors affects the appraisal of performance. It is generally accepted that the purpose of the appraisal influences the accuracy of the appraiser. The instrumentation, or tools, available to the…

  16. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.

  17. Sledge-Hammer Integration

    ERIC Educational Resources Information Center

    Ahner, Henry

    2009-01-01

    Integration (here visualized as a pounding process) is mathematically realized by simple transformations, successively smoothing the bounding curve into a straight line and the region-to-be-integrated into an area-equivalent rectangle. The relationship to Riemann sums, and to the trapezoid and midpoint methods of numerical integration, is…

  18. Potential flow around two-dimensional airfoils using a singular integral method

    NASA Technical Reports Server (NTRS)

    Nguyen, Yves; Wilson, Dennis

    1987-01-01

    The problem of potential flow around two-dimensional airfoils is solved by using a new singular integral method. The potential flow equations for incompressible potential flow are written in a singular integral equation. The equation is solved at N collocation points on the airfoil surface. A unique feature of this method is that the airfoil geometry is specified as an independent variable in the exact integral equation. Compared to other numerical methods, the present calculation procedure is much simpler and gives remarkable accuracy for many body shapes. An advantage of the present method is that it allows the inverse design calculation and the results are extremely accurate.

  19. Can blind persons accurately assess body size from the voice?

    PubMed

    Pisanski, Katarzyna; Oleszkiewicz, Anna; Sorokowska, Agnieszka

    2016-04-01

    Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can accurately assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the prediction that accurate voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20-65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, or congenitally blind or who had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for accurate body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. PMID:27095264

  20. Numerical simulations of cryogenic cavitating flows

    NASA Astrophysics Data System (ADS)

    Kim, Hyunji; Kim, Hyeongjun; Min, Daeho; Kim, Chongam

    2015-12-01

The present study deals with a numerical method for cryogenic cavitating flows. Recently, we have developed an accurate and efficient baseline numerical scheme for all-speed water-gas two-phase flows. By extending such progress, we modify the numerical dissipations to be properly scaled so that they do not show any deficiencies in low Mach number regions. For dealing with cryogenic two-phase flows, the previous EOS-dependent shock discontinuity sensing term is replaced with a newly designed EOS-free one. To validate the proposed numerical method, cryogenic cavitating flows around a hydrofoil are computed, and the pressure and temperature depression effects in cryogenic cavitation are demonstrated. Compared with Hord's experimental data, the computed results turn out to be satisfactory. Afterwards, numerical simulations of flow around the KARI turbopump inducer in a liquid rocket are carried out under various flow conditions with water and cryogenic fluids, and the differences in inducer flow physics depending on the working fluid are examined.

  1. Numerically Controlled Machining Of Wind-Tunnel Models

    NASA Technical Reports Server (NTRS)

    Kovtun, John B.

    1990-01-01

    New procedure for dynamic models and parts for wind-tunnel tests or radio-controlled flight tests constructed. Involves use of single-phase numerical control (NC) technique to produce highly-accurate, symmetrical models in less time.

  2. Nonlinear dynamics and numerical uncertainties in CFD

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.

    1996-01-01

The application of nonlinear dynamics to improve the understanding of numerical uncertainties in computational fluid dynamics (CFD) is reviewed. Elementary examples in the use of dynamics to explain the nonlinear phenomena and spurious behavior that occur in numerics are given. The role of dynamics in the understanding of long time behavior of numerical integrations and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in CFD is explained. The study is complemented with spurious behavior observed in CFD computations.

  3. Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions

    NASA Astrophysics Data System (ADS)

    McCullough, Christopher; Bettadpur, Srinivas

    2015-04-01

    In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
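The kind of round-off accumulation the authors quantify can be illustrated with a toy summation. The increment and count below are arbitrary, and Kahan compensated summation stands in for the double-extended or quadruple precision arithmetic discussed above:

```python
# Toy demonstration of double-precision round-off accumulating over many
# operations. Kahan compensated summation stands in for the higher-precision
# arithmetic discussed in the abstract; the values are illustrative only.
def kahan_sum(values):
    total, carry = 0.0, 0.0
    for v in values:
        y = v - carry            # re-inject the low-order bits lost so far
        t = total + y
        carry = (t - total) - y  # capture what this addition just dropped
        total = t
    return total

n = 10_000_000
increment = 0.1  # not exactly representable in binary floating point
naive = 0.0
for _ in range(n):
    naive += increment
compensated = kahan_sum(increment for _ in range(n))

exact = n * increment
naive_err = abs(naive - exact)
comp_err = abs(compensated - exact)
print(naive_err, comp_err)  # naive error is many orders of magnitude larger
```

A numerically integrated trajectory performs vastly more floating-point operations than this, which is why such errors eventually become visible against ever-improving instrument accuracy.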

  4. Tensor numerical methods in quantum chemistry: from Hartree-Fock to excitation energies.

    PubMed

    Khoromskaia, Venera; Khoromskij, Boris N

    2015-12-21

We review the recent successes of the grid-based tensor numerical methods and discuss their prospects in real-space electronic structure calculations. These methods, based on the low-rank representation of the multidimensional functions and integral operators, first appeared as an accurate tensor calculus for the 3D Hartree potential using 1D complexity operations, and have evolved into an entirely grid-based tensor-structured 3D Hartree-Fock eigenvalue solver. It benefits from tensor calculation of the core Hamiltonian and two-electron integrals (TEI) in O(n log n) complexity using the rank-structured approximation of basis functions, electron densities and convolution integral operators, all represented on 3D n × n × n Cartesian grids. The algorithm for calculating the TEI tensor in the form of a Cholesky decomposition is based on multiple factorizations using an algebraic 1D "density fitting" scheme, which yields an almost irreducible number of product basis functions involved in the 3D convolution integrals, depending on a threshold ε > 0. The basis functions are not restricted to separable Gaussians, since the analytical integration is substituted by high-precision tensor-structured numerical quadratures. The tensor approaches to post-Hartree-Fock calculations for the MP2 energy correction and for the Bethe-Salpeter excitation energies, based on using low-rank factorizations and the reduced basis method, were recently introduced. Another direction is towards the tensor-based Hartree-Fock numerical scheme for finite lattices, where one of the numerical challenges is the summation of electrostatic potentials of a large number of nuclei. The 3D grid-based tensor method for calculation of a potential sum on an L × L × L lattice requires computational work linear in L, O(L), instead of the usual O(L^3 log L) scaling of the Ewald-type approaches. PMID:26016539

  5. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  6. Numerical experiments for advection equation

    SciTech Connect

Sun, Wen-Yih

    1993-10-01

We propose to combine the Crowley fourth-order scheme and the Gadd scheme for solving the linear advection equation. Two new schemes will be presented: the first is to integrate the Crowley scheme and the Gadd scheme alternately (referred to as New1); the second is to integrate the Crowley scheme twice before we apply the Gadd scheme once (referred to as New2). The new schemes are designed such that no additional restriction is placed on the CFL criterion in an integration. The performance of the new schemes is better than that of the original Crowley or Gadd schemes. It is noted that the amplitude obtained from New2 is more accurate than that from New1 for long waves, but less accurate for short waves. The phase speed calculated from New2 is very close to the real phase speed in most cases tested here, but the phase speed of New1 is faster than the real phase speed. Hence, New2 is a better choice, especially for a model that includes horizontal smoothing to dampen the short waves. 9 refs., 5 figs., 8 tabs.
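Since the abstract does not reproduce the Crowley or Gadd formulas, the sketch below uses the classic Lax-Wendroff update as a generic stand-in for the kind of CFL-limited explicit advection step being combined:

```python
import numpy as np

# Generic stand-in (the Crowley and Gadd formulas are not given in the
# abstract): one Lax-Wendroff step for u_t + c u_x = 0 on a periodic grid,
# illustrating a CFL-limited explicit advection update.
def lax_wendroff_step(u, cfl):
    """Advance one step; cfl = c*dt/dx must satisfy |cfl| <= 1 for stability."""
    up, um = np.roll(u, -1), np.roll(u, 1)
    return u - 0.5 * cfl * (up - um) + 0.5 * cfl ** 2 * (up - 2.0 * u + um)

n, cfl, steps = 200, 0.5, 400
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.sin(2.0 * np.pi * x)

for _ in range(steps):
    u = lax_wendroff_step(u, cfl)

# 400 steps at cfl = 0.5 advect the wave exactly one domain length
# (400 * 0.5 / 200 = 1), so the profile should return near its initial shape.
err = float(np.max(np.abs(u - np.sin(2.0 * np.pi * x))))
print(err)
```

The small residual error is the combined amplitude and phase error of the scheme, the same quantities the abstract compares between New1 and New2.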

  7. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    PubMed

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  8. Direct computation of parameters for accurate polarizable force fields

    SciTech Connect

    Verstraelen, Toon Vandenbrande, Steven; Ayers, Paul W.

    2014-11-21

    We present an improved electronic linear response model to incorporate polarization and charge-transfer effects in polarizable force fields. This model is a generalization of the Atom-Condensed Kohn-Sham Density Functional Theory (DFT), approximated to second order (ACKS2): it can now be defined with any underlying variational theory (next to KS-DFT) and it can include atomic multipoles and off-center basis functions. Parameters in this model are computed efficiently as expectation values of an electronic wavefunction, obviating the need for their calibration, regularization, and manual tuning. In the limit of a complete density and potential basis set in the ACKS2 model, the linear response properties of the underlying theory for a given molecular geometry are reproduced exactly. A numerical validation with a test set of 110 molecules shows that very accurate models can already be obtained with fluctuating charges and dipoles. These features greatly facilitate the development of polarizable force fields.

  9. Two highly accurate methods for pitch calibration

    NASA Astrophysics Data System (ADS)

    Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.

    2009-11-01

    Along with profile, helix and tooth thickness, pitch is one of the most important parameters in the evaluation of involute gear measurements. In principle, coordinate measuring machines (CMMs) and CNC-controlled gear measuring machines, as a variant of the CMM, are suited for these kinds of gear measurements. The National Metrology Institute of Japan (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
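
The closure technique mentioned above can be illustrated with a small numerical sketch (all values are hypothetical; this shows the generic multi-position idea, not either institute's actual procedure): each reading combines an unknown artifact deviation with an unknown systematic device error, and measuring the artifact in every rotational position lets the two contributions be separated by averaging.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                  # pitch positions on the artifact
artifact = rng.normal(0.0, 1.0, N)     # true artifact deviations (zero mean)
artifact -= artifact.mean()
device = rng.normal(0.0, 1.0, N)       # true systematic device errors (zero mean)
device -= device.mean()

# Reading with the artifact rotated by k positions:
#   m[k, j] = artifact[(j + k) % N] + device[j]
m = np.array([[artifact[(j + k) % N] + device[j] for j in range(N)]
              for k in range(N)])

# Averaging over all rotations cancels the artifact term -> device errors.
device_est = m.mean(axis=0)

# Collecting, for each artifact position i, the readings where that position
# faced measuring position j (i.e. k = (i - j) % N) cancels the device term.
artifact_est = np.array(
    [np.mean([m[(i - j) % N, j] for j in range(N)]) for i in range(N)])
```

With zero-mean error vectors the separation is exact; in practice random measurement noise remains and is reduced by the averaging.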

  10. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  11. Accurate Guitar Tuning by Cochlear Implant Musicians

    PubMed Central

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  12. Preparation and accurate measurement of pure ozone.

    PubMed

    Janssen, Christof; Simone, Daniela; Guinet, Mickaël

    2011-03-01

    Preparation of high purity ozone as well as precise and accurate measurement of its pressure are metrological requirements that are difficult to meet due to ozone decomposition occurring in pressure sensors. The most stable and precise transducer heads are heated and, therefore, prone to accelerated ozone decomposition, limiting measurement accuracy and compromising purity. Here, we describe a vacuum system and a method for ozone production, suitable to accurately determine the pressure of pure ozone by avoiding the problem of decomposition. We use an inert gas in a particularly designed buffer volume and can thus achieve high measurement accuracy and negligible degradation of ozone with purities of 99.8% or better. The high degree of purity is ensured by comprehensive compositional analyses of ozone samples. The method may also be applied to other reactive gases. PMID:21456766

  14. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
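
The flavor of such a performance model can be sketched with a toy cost function (all constants below are hypothetical, not the paper's calibrated model): the per-step time of a bulk-synchronous code is the slowest processor's compute time plus a communication charge, which immediately ranks candidate partitions.

```python
# Toy cost model for a 1-D grid partition (hypothetical constants).

def predict_step_time(cell_counts, t_cell=1e-6, t_comm=5e-5):
    """cell_counts[p] = number of grid cells assigned to processor p."""
    compute = max(cell_counts) * t_cell       # everyone waits for the slowest
    comm = t_comm * (len(cell_counts) - 1)    # nearest-neighbour exchanges
    return compute + comm

balanced = [250, 250, 250, 250]
skewed = [400, 200, 200, 200]
# The model prefers the balanced mapping, quantifying the benefit of remapping.
gain = predict_step_time(skewed) - predict_step_time(balanced)
```

A real model of this kind would be calibrated against measured timings, but even this sketch captures why load balance dominates when per-cell work is uniform.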

  15. Line gas sampling system ensures accurate analysis

    SciTech Connect

    Not Available

    1992-06-01

    Tremendous changes in the natural gas business have resulted in new approaches to the way natural gas is measured. Electronic flow measurement has altered the business forever, with developments in instrumentation and a new sensitivity to the importance of proper natural gas sampling techniques. This paper reports that YZ Industries Inc., Snyder, Texas, combined its 40 years of sampling experience with the latest in microprocessor-based technology to develop the KynaPak 2000 series, the first on-line natural gas sampling system that is both compact and extremely accurate. For the analysis to be accurate, the composition of the sampled gas must be representative of the whole and related to flow. When sampling is coupled to relative measurement in this way, gas volumes are accurately accounted for and adjustments to composition can be made.

  16. Accurate mask model for advanced nodes

    NASA Astrophysics Data System (ADS)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle

    2014-07-01

    Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for the imprecision of the optical model, on top of modeling resist development. The optical model imprecision may result from mask topography effects and from real mask information, including mask e-beam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model and enable its implementation. The study covers the different effects that should be embedded in the mask model, as well as the experiments required to model them.

  17. Pair correlation function integrals: Computation and use

    NASA Astrophysics Data System (ADS)

    Wedberg, Rasmus; O'Connell, John P.; Peters, Günther H.; Abildskov, Jens

    2011-08-01

    We describe a method for extending radial distribution functions obtained from molecular simulations of pure and mixed molecular fluids to arbitrary distances. The method allows total correlation function integrals to be reliably calculated from simulations of relatively small systems. The long-distance behavior of radial distribution functions is determined by requiring that the corresponding direct correlation functions follow certain approximations at long distances. We have briefly described the method and tested its performance in previous communications [R. Wedberg, J. P. O'Connell, G. H. Peters, and J. Abildskov, Mol. Simul. 36, 1243 (2010), 10.1080/08927020903536366; Fluid Phase Equilib. 302, 32 (2011), 10.1016/j.fluid.2010.10.004], but describe here its theoretical basis more thoroughly and derive long-distance approximations for the direct correlation functions. We describe the numerical implementation of the method in detail, and report numerical tests complementing previous results. Pure molecular fluids are here studied in the isothermal-isobaric ensemble with isothermal compressibilities evaluated from the total correlation function integrals and compared with values derived from volume fluctuations. For systems where the radial distribution function has structure beyond the sampling limit imposed by the system size, the integration is more reliable, and usually more accurate, than simple integral truncation.
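
The kind of total correlation function integral discussed above can be sketched in a few lines. The model h(r) below is an arbitrary analytic stand-in chosen so the exact answer is known, not a simulation RDF, and the state point is illustrative only.

```python
import numpy as np

# Kirkwood-Buff-style integral  G = 4*pi * int (g(r) - 1) r^2 dr,
# evaluated by trapezoidal quadrature on a finite grid, as one would
# do with a tabulated radial distribution function.

r = np.linspace(1e-6, 30.0, 30001)
h = -np.exp(-r)                      # model h(r) = g(r) - 1; exact G = -8*pi
f = h * r**2
G = 4.0 * np.pi * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))

# Compressibility route (reduced units, illustrative density/temperature):
rho, kT = 0.01, 1.0
kappa_T = (1.0 + rho * G) / (rho * kT)
```

With simulation data the integrand is only sampled up to the box limit, which is exactly where the paper's long-distance extension of g(r) matters.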

  18. Accurate maser positions for MALT-45

    NASA Astrophysics Data System (ADS)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-10-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  20. Accurate Assessment--Compelling Evidence for Practice

    ERIC Educational Resources Information Center

    Flynn, Regina T.; Anderson, Ludmila; Martin, Nancy R.

    2010-01-01

    Childhood overweight and obesity is a public health concern not just because of its growing prevalence but also for its serious and lasting health consequences. Though height and weight measures are easy to obtain and New Hampshire Head Start sites measure height and weight of their enrollees, there are numerous challenges related to accurate…

  1. Accurate Molecular Polarizabilities Based on Continuum Electrostatics

    PubMed Central

    Truchon, Jean-François; Nicholls, Anthony; Iftimie, Radu I.; Roux, Benoît; Bayly, Christopher I.

    2013-01-01

    A novel approach for representing the intramolecular polarizability as a continuum dielectric is introduced to account for molecular electronic polarization. It is shown, using a finite-difference solution to the Poisson equation, that the Electronic Polarization from Internal Continuum (EPIC) model yields accurate gas-phase molecular polarizability tensors for a test set of 98 challenging molecules composed of heteroaromatics, alkanes and diatomics. The electronic polarization originates from a high intramolecular dielectric that produces polarizabilities consistent with B3LYP/aug-cc-pVTZ and experimental values when surrounded by vacuum dielectric. In contrast to other approaches to model electronic polarization, this simple model avoids the polarizability catastrophe and accurately calculates molecular anisotropy with the use of very few fitted parameters and without resorting to auxiliary sites or anisotropic atomic centers. On average, the unsigned error in the average polarizability and anisotropy compared to B3LYP are 2% and 5%, respectively. The correlation between the polarizability components from B3LYP and this approach lead to a R2 of 0.990 and a slope of 0.999. Even the F2 anisotropy, shown to be a difficult case for existing polarizability models, can be reproduced within 2% error. In addition to providing new parameters for a rapid method directly applicable to the calculation of polarizabilities, this work extends the widely used Poisson equation to areas where accurate molecular polarizabilities matter. PMID:23646034

  2. Accurate phase-shift velocimetry in rock.

    PubMed

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. PMID:27111139
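
For context, the linear phase-velocity relation underlying phase-shift velocimetry can be sketched as follows (narrow-gradient-pulse approximation; the gradient parameters are hypothetical, not those of the study):

```python
# In the narrow-pulse limit, spins moving at velocity v through a pulsed
# field gradient accumulate phase  phi = gamma * g * delta * Delta * v,
# so a measured phase map converts linearly into a velocity map.

gamma = 2.675e8      # 1H gyromagnetic ratio, rad s^-1 T^-1
g = 0.05             # gradient amplitude, T/m (hypothetical)
delta = 2e-3         # gradient pulse duration, s (hypothetical)
Delta = 20e-3        # observation time, s (hypothetical)

def velocity_from_phase(phi):
    return phi / (gamma * g * delta * Delta)

# Round trip for a 1 mm/s flow:
phi = gamma * g * delta * Delta * 1e-3   # forward model
v_est = velocity_from_phase(phi)
```

The paper's point is that asymmetric intra-voxel displacement distributions bias the measured phi, which is why the conversion above can give inaccurate velocities in porous media unless diffusion dominates the displacements.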

  3. Accurate phase-shift velocimetry in rock

    NASA Astrophysics Data System (ADS)

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R.; Holmes, William M.

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  5. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 plus or minus 6.1%, mean plus or minus SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P less than 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 plus or minus 11.5 vs. 41.5 plus or minus 13.6 mV, respectively, P less than 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of greater than or equal to 140 points and greater than or equal to 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing

  6. An Accurate Temperature Correction Model for Thermocouple Hygrometers 1

    PubMed Central

    Savage, Michael J.; Cass, Alfred; de Jager, James M.

    1982-01-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241

  7. An accurate temperature correction model for thermocouple hygrometers.

    PubMed

    Savage, M J; Cass, A; de Jager, J M

    1982-02-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38 degrees C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25 degrees C, if the calibration slopes are corrected for temperature.

  8. Ultra-accurate collaborative information filtering via directed user similarity

    NASA Astrophysics Data System (ADS)

    Guo, Q.; Song, W.-J.; Liu, J.-G.

    2014-07-01

    A key challenge in collaborative filtering (CF) information filtering is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users are larger than the ones in the opposite direction, large-degree users' selections are recommended extensively by traditional second-order CF algorithms. By considering the direction of user similarity and the second-order correlations to depress the influence of mainstream preferences, we present the directed second-order CF (HDCF) algorithm, which specifically addresses the challenge of accuracy and diversity in CF algorithms. The numerical results for two benchmark data sets, MovieLens and Netflix, show that the accuracy of the new algorithm outperforms the state-of-the-art CF algorithms. Compared with the random-walk-based CF algorithm proposed by Liu et al. (Int. J. Mod. Phys. C, 20 (2009) 285), the average ranking score reaches 0.0767 and 0.0402, an enhancement of 27.3% and 19.1% for MovieLens and Netflix, respectively. In addition, the diversity, precision and recall are also greatly enhanced. Without relying on any context-specific information, tuning the similarity direction of CF algorithms can yield accurate and diverse recommendations. This work suggests that the direction of user similarity is an important factor in improving personalized recommendation performance.
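
The degree asymmetry that motivates the directed similarity can be seen in a toy example (this is a generic directed overlap measure used for illustration, not the paper's exact HDCF formulation):

```python
# Directed user similarity: s(u -> v) = |items(u) ∩ items(v)| / |items(u)|.
# It is asymmetric, and is larger from a small-degree user toward a
# large-degree user than in the opposite direction.

def directed_similarity(items_u, items_v):
    return len(items_u & items_v) / len(items_u)

small_user = {1, 2}                       # 2 rated items
large_user = {1, 2, 3, 4, 5, 6, 7, 8}     # 8 rated items

s_small_to_large = directed_similarity(small_user, large_user)
s_large_to_small = directed_similarity(large_user, small_user)
```

Because s_small_to_large exceeds s_large_to_small, a symmetric algorithm that ignores direction systematically over-recommends the large-degree (mainstream) user's selections, which is the bias the directed variant depresses.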

  10. All-electron formalism for total energy strain derivatives and stress tensor components for numeric atom-centered orbitals

    NASA Astrophysics Data System (ADS)

    Knuth, Franz; Carbogno, Christian; Atalla, Viktor; Blum, Volker; Scheffler, Matthias

    2015-05-01

    We derive and implement the strain derivatives of the total energy of solids, i.e., the analytic stress tensor components, in an all-electron, numeric atom-centered orbital based density-functional formalism. We account for contributions that arise in the semi-local approximation (LDA/GGA) as well as in the generalized Kohn-Sham case, in which a fraction of exact exchange (hybrid functionals) is included. In this work, we discuss the details of the implementation, including the numerical corrections for sparse integration grids that allow accurate results to be produced. We validate the implementation for a variety of test cases by comparing to strain derivatives computed via finite differences. Additionally, we include the detailed definition of the overlapping atom-centered integration formalism used in this work to obtain total energies and their derivatives.
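
The finite-difference validation strategy generalizes beyond DFT; a minimal sketch with a toy elastic energy (coefficients made up for illustration) compares an analytic strain derivative against central differences:

```python
# Toy energy E(eps) = 0.5*C*eps^2 - S0*eps with hypothetical constants;
# its analytic strain derivative is checked against a second-order
# central finite difference, mirroring the validation described above.

C, S0 = 150.0, 2.0

def energy(eps):
    return 0.5 * C * eps**2 - S0 * eps

def analytic_stress(eps):          # dE/d(eps)
    return C * eps - S0

def fd_stress(eps, h=1e-6):        # central finite difference
    return (energy(eps + h) - energy(eps - h)) / (2.0 * h)
```

In the paper the analogous comparison is far more expensive (each finite-difference point is a full total-energy calculation at a strained geometry), which is precisely why analytic stress components are worth implementing.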

  11. Undulator Field Integral Measurements

    SciTech Connect

    Wolf, Zachary

    2010-12-07

    The LCLS undulator field integrals must be very small so that the beam trajectory slope and offset stay within tolerance. In order to make accurate measurements of the small field integrals, a long coil will be used. This note describes the design of the coil measurement system.

  12. Numerical Boundary Condition Procedures

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Topics include numerical procedures for treating inflow and outflow boundaries, steady and unsteady discontinuous surfaces, far field boundaries, and multiblock grids. In addition, the effects of numerical boundary approximations on stability, accuracy, and convergence rate of the numerical solution are discussed.

  13. Numerical simulation of hyperbolic heat conduction with convection boundary conditions and pulse heating effects

    NASA Technical Reports Server (NTRS)

    Glass, David E.; Tamma, Kumar K.; Railkar, Sudhir B.

    1989-01-01

    The paper describes the numerical simulation of hyperbolic heat conduction with convection boundary conditions. The effects of a step heat loading, a sudden pulse heat loading, and an internal heat source are considered in conjunction with convection boundary conditions. Two methods of solution are presented for predicting the transient behavior of the propagating thermal disturbances. In the first method, MacCormack's predictor-corrector method is employed for integrating the hyperbolic system of equations. Next, the transfinite element method, which employs specially tailored elements, is used to accurately represent the transient response of the propagating thermal wave fronts. The agreement between the results of the various numerical test cases validates the representative behavior of the thermal wave fronts. Both methods represent hyperbolic heat conduction behavior by effectively modeling the sharp discontinuities of the propagating thermal disturbances.
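
A minimal MacCormack predictor-corrector sketch for a hyperbolic (non-Fourier) heat-conduction system is shown below, written in nondimensional form with hypothetical parameters and a step temperature loading; it is a generic illustration of the first method, not the paper's code.

```python
import numpy as np

# Cattaneo system in nondimensional form:
#   T_t = -q_x,   tau * q_t = -(q + T_x)
# advanced with MacCormack: forward differences in the predictor,
# backward differences in the corrector.

def maccormack_step(T, q, dt, dx, tau):
    qx = np.zeros_like(q); Tx = np.zeros_like(T)
    qx[:-1] = (q[1:] - q[:-1]) / dx        # predictor: forward differences
    Tx[:-1] = (T[1:] - T[:-1]) / dx
    Tp = T - dt * qx
    qp = q - (dt / tau) * (q + Tx)
    qxp = np.zeros_like(q); Txp = np.zeros_like(T)
    qxp[1:] = (qp[1:] - qp[:-1]) / dx      # corrector: backward differences
    Txp[1:] = (Tp[1:] - Tp[:-1]) / dx
    Tn = 0.5 * (T + Tp - dt * qxp)
    qn = 0.5 * (q + qp - (dt / tau) * (qp + Txp))
    return Tn, qn

nx, tau = 200, 1.0
dx = 1.0 / nx
dt = 0.4 * dx                  # CFL-limited (wave speed = 1/sqrt(tau) = 1)
T = np.zeros(nx); q = np.zeros(nx)
for _ in range(100):
    T, q = maccormack_step(T, q, dt, dx, tau)
    T[0] = 1.0                 # step temperature loading at the left wall
# After t = 0.2 the thermal front has travelled to x ~ 0.2, leaving the
# far field untouched: heat propagates at a finite speed, unlike Fourier
# conduction, where the step would be felt everywhere immediately.
```

Convection boundary conditions and pulse loadings, as in the paper, would replace the fixed-temperature condition at the left wall.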

  14. Accurately Mapping M31's Microlensing Population

    NASA Astrophysics Data System (ADS)

    Crotts, Arlin

    2004-07-01

    We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS {and WFPC2 parallel} observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical {or "Einstein"} timescale of each microlensing event, rather than an effective {"FWHM"} timescale, allowing masses to be determined more than twice as accurately as without HST data. The Einstein timescale is the ratio of the lensing cross-sectional radius and relative velocities. Velocities are known from kinematics, and the cross-section is directly proportional to the {unknown} lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo {for the same number of microlensing events} due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database {about 350 nights}. For the whole survey {and a delta-function mass distribution} the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity
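The Einstein timescale described above (lensing cross-sectional radius over relative velocity) follows from t_E = R_E / v_rel with R_E = sqrt(4 G M D_l D_ls / (c^2 D_s)), so t_E scales as sqrt(M). A minimal sketch; the lens mass, distances, and velocity below are illustrative assumptions, not survey values:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
PC = 3.086e16        # parsec, m

def einstein_timescale(mass_msun, d_l_pc, d_s_pc, v_rel_kms):
    """Einstein-radius crossing time t_E = R_E / v_rel (seconds), with
    R_E = sqrt(4 G M D_l (D_s - D_l) / (c^2 D_s))."""
    M = mass_msun * M_SUN
    d_l, d_s = d_l_pc * PC, d_s_pc * PC
    r_e = math.sqrt(4 * G * M * d_l * (d_s - d_l) / (c**2 * d_s))
    return r_e / (v_rel_kms * 1e3)

# hypothetical 0.5 solar-mass halo lens halfway to M31 (~770 kpc), v ~ 200 km/s
t_e_days = einstein_timescale(0.5, 385e3, 770e3, 200.0) / 86400.0
```

Because R_E goes as sqrt(M), knowing t_E and the kinematics pins down the lens mass, which is exactly why resolving the source's baseline magnitude (and hence the true amplification) matters.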

  15. Accurate measurement of unsteady state fluid temperature

    NASA Astrophysics Data System (ADS)

    Jaremkiewicz, Magdalena

    2016-07-01

    In this paper, two accurate methods for determining the transient fluid temperature are presented. Measurements were conducted in boiling water, since its temperature is known. Initially the thermometers are at ambient temperature; they are then suddenly immersed in saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer that is widely available commercially. The temperature indicated by this thermometer was corrected by treating it as a first- or second-order inertia device. A thermometer of new design was also proposed and used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with a sheathed thermocouple located at its center. The fluid temperature was determined from measurements taken at the axis of the solid cylindrical housing using the inverse space marching method. Transient temperature measurements of air flowing through a wind tunnel were also carried out with the same thermometers. The proposed technique gives more accurate results than industrial thermometers combined with a simple first- or second-order inertia correction. The comparison demonstrates that the new thermometer recovers the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurement of rapidly changing fluid temperatures is possible thanks to the thermometer's low inertia and the fast space marching method used to solve the inverse heat conduction problem.
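The first-order inertia correction mentioned above amounts to T_fluid ≈ T_indicated + τ · dT_indicated/dt, where τ is the thermometer time constant. A minimal sketch on synthetic data; the τ value and the step-response signal are assumptions for illustration, not the paper's measurements:

```python
import numpy as np

def correct_first_order(t, T_meas, tau):
    """Reconstruct the fluid temperature from a thermometer modeled as a
    first-order lag: T_fluid = T_meas + tau * dT_meas/dt."""
    return T_meas + tau * np.gradient(T_meas, t)

# synthetic test: a step to 100 C seen through a sensor with tau = 3 s
tau = 3.0
t = np.linspace(0.0, 20.0, 2001)
T_meas = 100.0 * (1.0 - np.exp(-t / tau))  # exact first-order response from 0 C
T_rec = correct_first_order(t, T_meas, tau)  # ~100 C in the interior
```

For an exact first-order response the correction recovers the step instantly, illustrating why a low-inertia sensor plus a dynamic correction beats waiting for the raw reading to settle; real signals need noise-robust differentiation.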

  16. The first accurate description of an aurora

    NASA Astrophysics Data System (ADS)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting look into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  17. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring that all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, which believes it discredits abstinence-only material.

  18. Accurate density functional thermochemistry for larger molecules.

    SciTech Connect

    Raghavachari, K.; Stefanov, B. B.; Curtiss, L. A.; Lucent Tech.

    1997-06-20

    Density functional methods are combined with isodesmic bond separation reaction energies to yield accurate thermochemistry for larger molecules. Seven different density functionals are assessed for the evaluation of heats of formation, ΔH_f°(298 K), for a test set of 40 molecules composed of H, C, O and N. The use of bond separation energies results in a dramatic improvement in the accuracy of all the density functionals. The B3-LYP functional has the smallest mean absolute deviation from experiment (1.5 kcal/mol).
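The isodesmic bond-separation approach combines a computed reaction enthalpy with experimental heats of formation of small reference molecules, so systematic DFT errors largely cancel. A minimal sketch for ethanol via CH3CH2OH + CH4 → C2H6 + CH3OH; the computed reaction enthalpy here is a placeholder, and the reference ΔH_f values are approximate experimental gas-phase numbers, not values from the paper:

```python
# approximate experimental gas-phase heats of formation, kcal/mol, 298 K
DHF_EXP = {"CH4": -17.9, "C2H6": -20.0, "CH3OH": -48.0}

def dhf_from_bond_separation(dh_rxn_calc):
    """Back out ΔH_f(C2H5OH) from the bond-separation reaction
    C2H5OH + CH4 -> C2H6 + CH3OH, using
    ΔH_rxn = ΔHf(C2H6) + ΔHf(CH3OH) - ΔHf(C2H5OH) - ΔHf(CH4)."""
    return (DHF_EXP["C2H6"] + DHF_EXP["CH3OH"]
            - DHF_EXP["CH4"] - dh_rxn_calc)

# placeholder computed bond-separation enthalpy (kcal/mol), NOT a paper value
dhf_ethanol = dhf_from_bond_separation(6.0)
```

Only the (small, error-cancelling) reaction enthalpy comes from the density functional; the large atomization-type contributions enter through the experimental reference values.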

  20. Numerical simulation of turbulent particle diffusion

    NASA Astrophysics Data System (ADS)

    Bocksell, Todd Leslie

    Understanding particle diffusion and dispersion in multi-phase flows is important to a variety of engineering environments. In the present study, a Continuous Random Walk (CRW) model was constructed that can predict turbulent particle diffusion based on commonly available turbulence statistical correlations that may be obtained with Reynolds-Averaged Navier-Stokes (RANS) solutions. To evaluate this model, several test flows were considered including a theoretical channel flow, a wake flow, a jet flow, and a turbulent boundary layer. For the first three flows it was shown that proper correction of the Markov chain velocity fluctuations involving gradients in turbulence intensity significantly improved solution accuracy. For the turbulent boundary layer simulations, the flow is significantly more inhomogeneous (high gradients of turbulent kinetic energy and integral time-scale near the wall) and significantly more anisotropic (the root-mean-square of the velocity perturbations differs by several-fold depending on the direction). The particles were injected in the near-wall region for a Direct Numerical Simulation (DNS) and mean particle concentration profiles are obtained without the empiricism associated with RANS methods (turbulence modeling) or uncertainties associated with experiments (near-wall resolution difficulties). These results were compared to the CRW predictions that employed the mean turbulent statistics measured from the DNS results, so that a self-consistent comparison could be made. To accurately simulate particles in wall-bounded flows with the CRW model, a modified Markov chain based on a normalized velocity fluctuation was found to be important to avoid unphysical wall-ward particle fluxes. Also, the incremental drift velocity for the Markov chain (required for inhomogeneous turbulent flows) was extended to include effects of particle inertia and virtual mass to enable simulation for a wide range of Stokes numbers. 
The CRW results with the finite
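The CRW/Markov-chain construction described in this abstract can be sketched in one dimension: an exponentially correlated chain on the normalized velocity fluctuation plus a drift correction for inhomogeneous turbulence. The parameters and the particular drift form below are illustrative assumptions (one common form of such models), not the thesis's exact formulation:

```python
import numpy as np

def crw_track(n_steps, dt, tau_L, sigma, dsigma_dy, y0=0.0, seed=0):
    """Continuous Random Walk sketch for one tracer particle.
    Markov chain on the *normalized* fluctuation r = v'/sigma(y), with an
    added drift term for gradients in turbulence intensity."""
    rng = np.random.default_rng(seed)
    alpha = np.exp(-dt / tau_L)  # velocity autocorrelation over one step
    y, r = y0, 0.0
    ys = np.empty(n_steps)
    for n in range(n_steps):
        # exponentially correlated Gaussian chain (unit variance in r)
        r = alpha * r + np.sqrt(1.0 - alpha**2) * rng.standard_normal()
        # rescale by local sigma; add drift correction ~ tau_L*sigma*dsigma/dy
        v = sigma(y) * r + tau_L * sigma(y) * dsigma_dy(y)
        y += v * dt
        ys[n] = y
    return ys

# homogeneous limit: constant sigma, zero drift; dispersion grows diffusively
ys = crw_track(5000, 0.01, tau_L=0.1,
               sigma=lambda y: 1.0, dsigma_dy=lambda y: 0.0)
```

Normalizing the chain by the local sigma before rescaling is what suppresses the unphysical wall-ward fluxes the abstract mentions: gradients of sigma then enter only through the explicit drift term rather than biasing the chain itself.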