Sample records for standard numerical methods

  1. Non-standard finite difference and Chebyshev collocation methods for solving fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Agarwal, P.; El-Sayed, A. A.

    2018-06-01

    In this paper, a new numerical technique for solving the fractional-order diffusion equation is introduced. The technique combines the non-standard finite difference (NSFD) method with the Chebyshev collocation method, where the fractional derivatives are described in the Caputo sense. Together, the Chebyshev collocation and NSFD methods convert the problem into a system of algebraic equations, which is then solved numerically using Newton's iteration method. The applicability, reliability, and efficiency of the presented technique are demonstrated through several numerical examples.
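
    The last stage of such a pipeline, solving the resulting algebraic system with Newton's iteration, can be sketched generically (a hypothetical 2x2 toy system, not the authors' collocation equations):

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Solve F(x) = 0 by Newton's iteration with an analytic Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))  # Newton correction
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy algebraic system standing in for the collocation equations:
# x^2 + y^2 = 4,  x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
root = newton_system(F, J, [2.0, 0.5])
```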

  2. An Artificial Neural Networks Method for Solving Partial Differential Equations

    NASA Astrophysics Data System (ADS)

    Alharbi, Abir

    2010-09-01

    While many analytical and numerical techniques already exist for solving PDEs, this paper introduces an approach using artificial neural networks. The approach combines a standard numerical method, finite differences, with the Hopfield neural network; the resulting method is denoted Hopfield-finite-difference (HFD). The architecture of the nets, the energy function, the updating equations, and the algorithms are developed for the method. The HFD method has been used successfully to approximate the solutions of classical PDEs, such as the wave, heat, Poisson, and diffusion equations, and of a system of PDEs. Matlab is used to obtain the results in both tabular and graphical form. The results are similar in accuracy to those obtained by standard numerical methods. In terms of speed, the parallel nature of Hopfield nets makes them easier to implement on fast parallel computers, while some numerical methods need extra effort for parallelization.
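
    The finite-difference half of the HFD combination is the classical explicit scheme; a minimal stand-alone sketch for the 1-D heat equation (grid sizes and diffusivity assumed here, not taken from the paper) is:

```python
import numpy as np

# Explicit FTCS finite-difference scheme for u_t = alpha * u_xx on [0, 1]
# with u(0) = u(1) = 0 -- the classical baseline the HFD method is built on.
alpha, nx, nt = 1.0, 51, 2000
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha          # satisfies the stability limit dt <= dx^2 / (2*alpha)
x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)             # initial condition with known exact decay
for _ in range(nt):
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
exact = np.exp(-np.pi**2 * alpha * nt * dt) * np.sin(np.pi * x)
err = np.max(np.abs(u - exact))
```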

  3. Numerical Algorithm for Delta of Asian Option

    PubMed Central

    Zhang, Boxiang; Yu, Yang; Wang, Weiguo

    2015-01-01

    We study the numerical solution of the Greeks of Asian options. In particular, we derive a closed-form solution for the Δ of the geometric Asian option and use this analytical form as a control to numerically calculate the Δ of the arithmetic Asian option, which is known to have no explicit closed-form solution. We implement our proposed numerical method and compare its standard error with those of other classical variance reduction methods. Our method provides an efficient solution to the hedging strategy with Asian options. PMID:26266271
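
    A toy sketch of the control-variate principle the paper relies on: estimate E[exp(Z)] for Z ~ N(0,1) (exact value exp(1/2)), using Z itself (known mean 0) as the control, in the same way the closed-form geometric Asian Δ serves as a control for the arithmetic one. All numbers here are illustrative, not the paper's option model:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)
y = np.exp(z)                              # quantity whose mean we estimate
plain = y.mean()                           # plain Monte Carlo estimator
beta = np.cov(y, z)[0, 1] / np.var(z)      # (near-)optimal control coefficient
cv = (y - beta * (z - 0.0)).mean()         # control-variate estimator, E[Z] = 0 known
exact = np.exp(0.5)
var_plain = y.var()
var_cv = (y - beta * z).var()              # variance after control adjustment
```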

  4. Preface of "The Second Symposium on Border Zones Between Experimental and Numerical Application Including Solution Approaches By Extensions of Standard Numerical Methods"

    NASA Astrophysics Data System (ADS)

    Ortleb, Sigrun; Seidel, Christian

    2017-07-01

    In this second symposium on the border zones between experimental and numerical methods, recent research on practically relevant problems is presented. Presentations discuss experimental investigations as well as numerical methods with a strong focus on applications. In addition, problems are identified which require a hybrid experimental-numerical approach. Topics include fast explicit diffusion applied to a geothermal energy storage tank, noise in experimental measurements of electrical quantities, thermal fluid-structure interaction, tensegrity structures, experimental and numerical methods for Chladni figures, optimized construction of hydroelectric power stations, experimental and numerical limits in the investigation of rain-wind induced vibrations, as well as the application of exponential integrators in a domain-based IMEX setting.

  5. Fourier/Chebyshev methods for the incompressible Navier-Stokes equations in finite domains

    NASA Technical Reports Server (NTRS)

    Corral, Roque; Jimenez, Javier

    1992-01-01

    A fully spectral numerical scheme is presented for the incompressible Navier-Stokes equations in domains which are infinite or semi-infinite in one dimension. The domain is not mapped, and standard Fourier or Chebyshev expansions can be used. The handling of the infinite domain does not introduce any significant overhead. The scheme assumes that the vorticity in the flow is essentially concentrated in a finite region, which is represented numerically by standard spectral collocation methods. To accommodate the slow exponential decay of the velocities at infinity, extra expansion functions are introduced, which are handled analytically. A detailed error analysis is presented, and two applications to direct numerical simulation of turbulent flows are discussed in relation to the numerical performance of the scheme.
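
    The standard Fourier expansion retained in the periodic directions differentiates exactly in wavenumber space; a minimal illustration:

```python
import numpy as np

# Fourier spectral differentiation on a periodic grid: transform, multiply by
# i*k, transform back. For a band-limited function this is exact to rounding.
n = 64
x = 2.0 * np.pi * np.arange(n) / n
u = np.sin(x)
ik = 1j * np.fft.fftfreq(n, d=1.0 / n)          # i * (integer wavenumbers)
du = np.fft.ifft(ik * np.fft.fft(u)).real       # spectral derivative
err = np.max(np.abs(du - np.cos(x)))            # d/dx sin = cos
```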

  6. Applications of numerical methods to simulate the movement of contaminants in groundwater.

    PubMed Central

    Sun, N Z

    1989-01-01

    This paper reviews mathematical models and numerical methods that have been extensively used to simulate the movement of contaminants through the subsurface. The major emphasis is placed on the numerical methods for advection-dominated transport problems and inverse problems. Several mathematical models that are commonly used in field problems are listed. A variety of numerical solutions for three-dimensional models are introduced, including the multiple cell balance method, which can be considered a variation of the finite element method. The multiple cell balance method is easy to understand and convenient for solving field problems. When advective transport dominates dispersive transport, two kinds of numerical difficulties, overshoot and numerical dispersion, always arise when standard finite difference and finite element methods are used. To overcome these difficulties, various numerical techniques have been developed, such as upstream weighting methods and moving point methods. A complete review of these methods is given, and we also discuss the problems of parameter identification, reliability analysis, and optimal experiment design, which are essential for constructing a practical model. PMID:2695327
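
    A minimal illustration of upstream weighting (first-order upwind) in the advection-dominated limit, showing that a step profile is transported without overshoot or undershoot; the grid and Courant number are assumed for illustration:

```python
import numpy as np

# First-order upwind scheme for u_t + a*u_x = 0 with a > 0.
# For 0 <= c <= 1 the update is monotone: no new extrema are created.
a, nx, c = 1.0, 200, 0.5                          # c = a*dt/dx (Courant number)
u = np.where(np.arange(nx) < nx // 2, 1.0, 0.0)   # step profile in [0, 1]
for _ in range(100):
    u[1:] = u[1:] - c * (u[1:] - u[:-1])          # upwind update; inflow u[0] fixed
overshoot = u.max() - 1.0
undershoot = -u.min()
```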

  7. Numerical integration of discontinuous functions: moment fitting and smart octree

    NASA Astrophysics Data System (ADS)

    Hubrich, Simeon; Di Stolfo, Paolo; Kudela, László; Kollmannsberger, Stefan; Rank, Ernst; Schröder, Andreas; Düster, Alexander

    2017-11-01

    A fast and simple grid generation can be achieved by non-standard discretization methods where the mesh does not conform to the boundary or the internal interfaces of the problem. However, this simplification leads to discontinuous integrands for intersected elements and, therefore, standard quadrature rules no longer perform well. Consequently, special methods are required for the numerical integration. To this end, we present two approaches to obtain quadrature rules for arbitrary domains. The first approach is based on an extension of the moment fitting method combined with an optimization strategy for the positions and weights of the quadrature points. In the second approach, we apply the smart octree, which generates curved sub-cells for the integration mesh. To demonstrate the performance of the proposed methods, we consider several numerical examples, showing that the methods lead to efficient quadrature rules with fewer integration points and high accuracy.
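
    The moment-fitting idea can be shown in one dimension: fix quadrature points, then solve a small linear system so the rule reproduces monomial moments exactly (the paper additionally optimizes the point positions and handles cut cells in higher dimensions). Points here are arbitrary assumptions:

```python
import numpy as np

# Moment fitting on [0, 1]: choose points, solve for weights matching moments.
pts = np.array([0.1, 0.4, 0.7, 0.9])             # assumed, arbitrary points
V = np.vander(pts, 4, increasing=True).T         # rows: values of 1, x, x^2, x^3
moments = 1.0 / np.arange(1, 5)                  # integral of x^k over [0, 1]
w = np.linalg.solve(V, moments)                  # moment-fitted weights
cubic = w @ (2.0 * pts**3 - pts)                 # rule applied to 2x^3 - x
```

By construction the rule is exact for all polynomials up to degree 3, e.g. the integral of 2x^3 - x over [0, 1] is 0.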

  8. A direct Primitive Variable Recovery Scheme for hyperbolic conservative equations: The case of relativistic hydrodynamics.

    PubMed

    Aguayo-Ortiz, A; Mendoza, S; Olvera, D

    2018-01-01

    In this article we develop a Primitive Variable Recovery Scheme (PVRS) to solve any system of coupled conservative differential equations. The method obtains the primitive variables directly by applying the chain rule to the time term of the conservative equations. A traditional finite volume method is then applied to the flux in order to avoid violating both the entropy and Rankine-Hugoniot jump conditions. The time evolution is computed using a forward finite difference scheme. This numerical technique avoids the usual recovery of the primitive vector by solving an algebraic system of equations and thus generalises standard techniques for solving these kinds of coupled systems. The article is written with special relativistic hydrodynamic numerical schemes in mind, with a pedagogical appendix to ease comprehension of the PVRS. We present the convergence of the method for standard shock-tube problems of special relativistic hydrodynamics and a graphical visualisation of the errors using the fluctuations of the numerical values with respect to exact analytic solutions. The PVRS circumvents the sometimes arduous computation that arises in standard numerical techniques, which obtain the desired primitive vector solution through an algebraic polynomial of the charges.

  9. A direct Primitive Variable Recovery Scheme for hyperbolic conservative equations: The case of relativistic hydrodynamics

    PubMed Central

    Mendoza, S.; Olvera, D.

    2018-01-01

    In this article we develop a Primitive Variable Recovery Scheme (PVRS) to solve any system of coupled conservative differential equations. The method obtains the primitive variables directly by applying the chain rule to the time term of the conservative equations. A traditional finite volume method is then applied to the flux in order to avoid violating both the entropy and Rankine-Hugoniot jump conditions. The time evolution is computed using a forward finite difference scheme. This numerical technique avoids the usual recovery of the primitive vector by solving an algebraic system of equations and thus generalises standard techniques for solving these kinds of coupled systems. The article is written with special relativistic hydrodynamic numerical schemes in mind, with a pedagogical appendix to ease comprehension of the PVRS. We present the convergence of the method for standard shock-tube problems of special relativistic hydrodynamics and a graphical visualisation of the errors using the fluctuations of the numerical values with respect to exact analytic solutions. The PVRS circumvents the sometimes arduous computation that arises in standard numerical techniques, which obtain the desired primitive vector solution through an algebraic polynomial of the charges. PMID:29659602

  10. Evaluation of the Ross fast solution of Richards’ equation in unfavourable conditions for standard finite element methods

    NASA Astrophysics Data System (ADS)

    Crevoisier, David; Chanzy, André; Voltz, Marc

    2009-06-01

    Ross [Ross PJ. Modeling soil water and solute transport - fast, simplified numerical solutions. Agron J 2003;95:1352-61] developed a fast, simplified method for solving Richards' equation. This non-iterative 1D approach, using Brooks and Corey [Brooks RH, Corey AT. Hydraulic properties of porous media. Hydrol. papers, Colorado St. Univ., Fort Collins; 1964] hydraulic functions, allows a significant reduction in computing time while maintaining the accuracy of the results. The first aim of this work is to confirm these results on a more extensive set of problems, including those that would lead to serious numerical difficulties for the standard numerical method. The second aim is to validate a generalisation of the Ross method to other mathematical representations of hydraulic functions. The Ross method is compared with the standard finite element model, Hydrus-1D [Simunek J, Sejna M, Van Genuchten MTh. The HYDRUS-1D and HYDRUS-2D codes for estimating unsaturated soil hydraulic and solutes transport parameters. Agron Abstr 357; 1999]. Computing time, accuracy of results and robustness of numerical schemes are monitored in 1D simulations involving different types of homogeneous soils, grids and hydrological conditions. The Ross method associated with modified Van Genuchten hydraulic functions [Vogel T, Cislerova M. On the reliability of unsaturated hydraulic conductivity calculated from the moisture retention curve. Transport Porous Media 1988;3:1-15] proves in every tested scenario to be more robust numerically, and the computing time/accuracy trade-off is particularly improved on coarse grids. The Ross method ran from 1.25 to 14 times faster than Hydrus-1D.

  11. Nucleon-nucleon interactions via Lattice QCD: Methodology. HAL QCD approach to extract hadronic interactions in lattice QCD

    NASA Astrophysics Data System (ADS)

    Aoki, Sinya

    2013-07-01

    We review the potential method in lattice QCD, which has recently been proposed to extract nucleon-nucleon interactions via numerical simulations. We focus on the methodology of this approach, emphasizing the strategy of the potential method, the theoretical foundation behind it, and the special numerical techniques involved. We compare the potential method with the standard finite volume method in lattice QCD in order to make the pros and cons of the approach clear. We also present several numerical results for nucleon-nucleon potentials.

  12. Discrete exterior calculus approach for discretizing Maxwell's equations on face-centered cubic grids for FDTD

    NASA Astrophysics Data System (ADS)

    Salmasi, Mahbod; Potter, Michael

    2018-07-01

    Maxwell's equations are discretized on a Face-Centered Cubic (FCC) lattice instead of a simple cubic one, as an alternative to the standard Yee method, to improve the numerical dispersion characteristics and grid isotropy of the method. Explicit update equations, numerical dispersion expressions, and stability criteria are derived. Several tools available to the standard Yee method, such as PEC/PMC boundary conditions, absorbing boundary conditions, and the scattered field formulation, are extended to this method as well. A comparison between the FCC and Yee formulations is made, showing that the FCC method exhibits better dispersion than its Yee counterpart. Simulations are provided to demonstrate both the accuracy and the grid isotropy improvement of the method.
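
    For reference, the numerical dispersion relation of the standard 1-D Yee scheme that serves as the baseline, sin(wΔt/2) = S sin(kΔx/2) with Courant number S = cΔt/Δx, can be evaluated directly (the resolution values below are assumed for illustration):

```python
import numpy as np

def phase_velocity_ratio(ppw, S=0.5):
    """Numerical/exact phase velocity of the 1-D Yee scheme
    for 'ppw' points per wavelength and Courant number S."""
    kdx = 2.0 * np.pi / ppw
    wdt = 2.0 * np.arcsin(S * np.sin(kdx / 2.0))   # numerical dispersion relation
    return (wdt / S) / kdx                          # equals (w/k)/c

coarse = phase_velocity_ratio(10.0)    # 10 points per wavelength: waves lag c
fine = phase_velocity_ratio(100.0)     # refined grid: ratio approaches 1
```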

  13. Eigensensitivity analysis of rotating clamped uniform beams with the asymptotic numerical method

    NASA Astrophysics Data System (ADS)

    Bekhoucha, F.; Rechak, S.; Cadou, J. M.

    2016-12-01

    In this paper, free vibrations of a rotating clamped Euler-Bernoulli beam with uniform cross section are studied using a continuation method, namely the asymptotic numerical method. The governing equations of motion are derived using Lagrange's method. The kinetic and strain energy expressions are derived via the Rayleigh-Ritz method, using a set of hybrid variables and a linear deflection assumption. The derived equations are transformed into two eigenvalue problems: the first is a linear gyroscopic eigenvalue problem representing the coupled lagging and stretch motions through gyroscopic terms, while the second is a standard eigenvalue problem corresponding to the flapping motion. These two eigenvalue problems are transformed into two functionals treated by the continuation method, the asymptotic numerical method. A new method is proposed for the solution of the linear gyroscopic system, based on an augmented system which transforms the original problem to a standard form with real symmetric matrices. Using techniques that allow the continuation method to resolve these singular problems, evolution curves of the natural frequencies against dimensionless angular velocity are determined. At high angular velocity, some singular points, due to the linear elastic assumption, are computed. Numerical convergence tests are conducted, and the obtained results are compared to exact values. Results obtained by continuation are also compared to those computed with the discrete eigenvalue problem.

  14. An implicit boundary integral method for computing electric potential of macromolecules in solvent

    NASA Astrophysics Data System (ADS)

    Zhong, Yimin; Ren, Kui; Tsai, Richard

    2018-04-01

    A numerical method using implicit surface representations is proposed to solve the linearized Poisson-Boltzmann equation that arises in mathematical models for the electrostatics of molecules in solvent. The proposed method uses an implicit boundary integral formulation to derive a linear system defined on Cartesian nodes in a narrow band surrounding the closed surface that separates the molecule and the solvent. The needed implicit surface is constructed from the given atomic description of the molecules by a sequence of standard level set algorithms. A fast multipole method is applied to accelerate the solution of the linear system. A few numerical studies involving some standard test cases are presented and compared to other existing results.

  15. Numerical approximations for fractional diffusion equations via a Chebyshev spectral-tau method

    NASA Astrophysics Data System (ADS)

    Doha, Eid H.; Bhrawy, Ali H.; Ezz-Eldien, Samer S.

    2013-10-01

    In this paper, a class of fractional diffusion equations with variable coefficients is considered. An accurate and efficient spectral tau technique for solving the fractional diffusion equations numerically is proposed. This method is based upon the Chebyshev tau approximation together with the Chebyshev operational matrix of Caputo fractional differentiation. Such an approach has the advantage of reducing the problem to the solution of a system of algebraic equations, which may then be solved by any standard numerical technique. We apply this general method to solve four specific examples. In each of the examples considered, the numerical results show that the proposed method is of high accuracy and is efficient for solving the time-dependent fractional diffusion equations.
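
    The integer-order version of this coefficient-space machinery is available in NumPy's Chebyshev module; the paper's contribution is the analogous operational matrix for the Caputo fractional derivative. A sketch of the integer-order case:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Expand exp(x) in Chebyshev polynomials, then differentiate by operating on
# the coefficient vector only (no pointwise finite differences).
x = np.cos(np.pi * np.arange(33) / 32)                 # Chebyshev points on [-1, 1]
coef = C.chebfit(x, np.exp(x), 16)                     # Chebyshev expansion of exp
dcoef = C.chebder(coef)                                # differentiation on coefficients
err = np.max(np.abs(C.chebval(x, dcoef) - np.exp(x)))  # d/dx exp = exp
```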

  16. A three-term conjugate gradient method under the strong-Wolfe line search

    NASA Astrophysics Data System (ADS)

    Khadijah, Wan; Rivaie, Mohd; Mamat, Mustafa

    2017-08-01

    Recently, numerous studies have investigated conjugate gradient methods for solving large-scale unconstrained optimization problems. In this paper, a three-term conjugate gradient method for unconstrained optimization is proposed which always satisfies the sufficient descent condition; it is named Three-Term Rivaie-Mustafa-Ismail-Leong (TTRMIL). Under standard conditions, the TTRMIL method is proved to be globally convergent under the strong-Wolfe line search. Finally, numerical results are provided for the purpose of comparison.

  17. A new shock-capturing numerical scheme for ideal hydrodynamics

    NASA Astrophysics Data System (ADS)

    Fecková, Z.; Tomášik, B.

    2015-05-01

    We present a new algorithm for solving ideal relativistic hydrodynamics based on the Godunov method with an exact solution of the Riemann problem for an arbitrary equation of state. Standard numerical tests, such as sound wave propagation and the shock tube problem, are performed. Low numerical viscosity and high precision are attained with proper discretization.

  18. Numerical Optimization Using Computer Experiments

    NASA Technical Reports Server (NTRS)

    Trosset, Michael W.; Torczon, Virginia

    1997-01-01

    Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
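
    A stripped-down sketch of surrogate-guided search, with Gaussian RBF interpolation standing in for kriging and a 1-D grid for the grid search; the objective and all parameters are invented for illustration:

```python
import numpy as np

# "Expensive" toy objective, sampled at a few design sites; a cheap surrogate
# interpolant is then minimized on a fine grid instead of the objective itself.
f = lambda x: (x - 0.37)**2                          # assumed toy objective
xs = np.linspace(0.0, 1.0, 9)                        # design sites
eps = 3.0                                            # assumed RBF shape parameter
phi = lambda r: np.exp(-(eps * r)**2)                # Gaussian RBF
A = phi(np.abs(xs[:, None] - xs[None, :]))
c = np.linalg.solve(A, f(xs))                        # interpolation weights
grid = np.linspace(0.0, 1.0, 1001)
surrogate = phi(np.abs(grid[:, None] - xs[None, :])) @ c
x_best = grid[np.argmin(surrogate)]                  # grid search on the surrogate
```

In a full method the new point x_best would be evaluated with the true objective and fed back into the surrogate, iterating.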

  19. Global Asymptotic Behavior of Iterative Implicit Schemes

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.

    1994-01-01

    The global asymptotic nonlinear behavior of some standard iterative procedures in solving nonlinear systems of algebraic equations arising from four implicit linear multistep methods (LMMs) in discretizing three models of 2 x 2 systems of first-order autonomous nonlinear ordinary differential equations (ODEs) is analyzed using the theory of dynamical systems. The iterative procedures include simple iteration and full and modified Newton iterations. The results are compared with standard Runge-Kutta explicit methods, a noniterative implicit procedure, and the Newton method of solving the steady part of the ODEs. Studies showed that, aside from exhibiting spurious asymptotes, all four implicit LMMs can change the type and stability of the steady states of the differential equations (DEs). They also exhibit a drastic distortion but less shrinkage of the basin of attraction of the true solution than standard non-LMM explicit methods. The simple iteration procedure exhibits behavior similar to standard non-LMM explicit methods, except that spurious steady-state numerical solutions cannot occur. The numerical basins of attraction of the noniterative implicit procedure mimic more closely the basins of attraction of the DEs and are more efficient than the three iterative implicit procedures for the four implicit LMMs. Contrary to popular belief, the initial data for the Newton method of solving the steady part of the DEs may not have to be close to the exact steady state for convergence. These results can be used as an explanation for possible causes and cures of slow convergence and nonconvergence of steady-state numerical solutions when using an implicit LMM time-dependent approach in computational fluid dynamics.
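
    The iterative procedures studied can be illustrated on a scalar example: backward Euler (an implicit one-step LMM) for the logistic equation y' = y(1 - y), with full Newton iteration solving the implicit algebraic equation at each step; the step size and horizon below are assumed:

```python
import numpy as np

def backward_euler_step(y_old, dt, tol=1e-14):
    """One backward Euler step for y' = y(1 - y), solved by Newton iteration."""
    y = y_old                                   # Newton initial guess
    for _ in range(50):
        g = y - y_old - dt * y * (1.0 - y)      # residual of the implicit equation
        dg = 1.0 - dt * (1.0 - 2.0 * y)         # its derivative
        step = g / dg
        y -= step
        if abs(step) < tol:
            break
    return y

y, dt = 0.1, 0.5
for _ in range(100):
    y = backward_euler_step(y, dt)
# The exact steady states are y = 0 (unstable) and y = 1 (stable);
# the iteration should settle on the stable one.
```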

  20. Hybrid matrix method for stable numerical analysis of the propagation of Dirac electrons in gapless bilayer graphene superlattices

    NASA Astrophysics Data System (ADS)

    Briones-Torres, J. A.; Pernas-Salomón, R.; Pérez-Álvarez, R.; Rodríguez-Vargas, I.

    2016-05-01

    Gapless bilayer graphene (GBG), like monolayer graphene, is a material system with unique properties, such as anti-Klein tunneling and intrinsic Fano resonances. These properties rely on the gapless parabolic dispersion relation and the chiral nature of bilayer graphene electrons. In addition, propagating and evanescent electron states coexist inherently in this material, giving rise to these exotic properties. In this sense, bilayer graphene is unique, since most material systems in which Fano resonance phenomena are manifested require an external source that provides extended states. However, from a numerical standpoint, the presence of evanescent-divergent states in the linear superposition of eigenfunctions representing the Dirac spinors leads to numerical degradation (the so-called Ωd problem) in practical applications of the standard Coefficient Transfer Matrix (K) method used to study charge transport properties in bilayer graphene based multi-barrier systems. We present here a straightforward procedure based on the hybrid compliance-stiffness matrix method (H) that can overcome this numerical degradation. Our results show that, in contrast to the standard matrix method, the proposed H method is suitable for studying the transmission and transport properties of electrons in GBG superlattices, since it remains numerically stable regardless of the size of the superlattice and the range of values taken by the input parameters: the energy and angle of the incident electrons, the barrier height, and the thickness and number of barriers. We show that the matrix determinant can be used as a test of the numerical accuracy in real calculations.

  1. Meshless collocation methods for the numerical solution of elliptic boundary value problems and the rotational shallow water equations on the sphere

    NASA Astrophysics Data System (ADS)

    Blakely, Christopher D.

    This dissertation has three main goals: (1) to explore the anatomy of meshless collocation approximation methods that have recently gained attention in the numerical analysis community; (2) to demonstrate numerically why the meshless collocation method should become an attractive alternative to standard finite-element methods, due to the simplicity of its implementation and its high-order convergence properties; and (3) to propose a meshless collocation method for large-scale computational geophysical fluid dynamics models. We provide numerical verification and validation of the meshless collocation scheme applied to the rotational shallow-water equations on the sphere and demonstrate computationally that the proposed model can compete with existing high-performance methods for approximating the shallow-water equations, such as the SEAM (spectral-element atmospheric model) developed at NCAR. A detailed analysis of the parallel implementation of the model is given, along with the introduction of parallel algorithmic routines for high-performance simulation. We analyze the programming and computational aspects of the model using Fortran 90 and the message passing interface (MPI) library, along with software and hardware specifications and performance tests. Details of many aspects of the implementation with regard to performance, optimization, and stabilization are given. In order to verify the mathematical correctness of the algorithms presented and to validate the performance of the meshless collocation shallow-water model, we conclude the thesis with numerical experiments on some standardized test cases for the shallow-water equations on the sphere using the proposed method.
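
    A minimal Kansa-type meshless collocation sketch for a 1-D Poisson problem conveys the idea on a far smaller scale than the shallow-water model; the Gaussian shape parameter and node count here are assumptions, and no optimization of either is attempted:

```python
import numpy as np

# Meshless (Kansa) collocation for u'' = f on [0, 1], u(0) = u(1) = 0,
# with exact solution sin(pi*x): interior rows enforce the PDE through the
# second derivatives of Gaussian RBFs, boundary rows enforce the BCs.
eps = 7.0                                             # assumed shape parameter
x = np.linspace(0.0, 1.0, 17)                         # scattered in general; uniform here
r = x[:, None] - x[None, :]
phi = np.exp(-(eps * r)**2)                           # RBF values
phi_xx = (4 * eps**4 * r**2 - 2 * eps**2) * phi       # RBF second derivatives
A = phi_xx.copy()
A[0, :] = phi[0, :]                                   # boundary condition rows
A[-1, :] = phi[-1, :]
rhs = -np.pi**2 * np.sin(np.pi * x)
rhs[0] = rhs[-1] = 0.0
c = np.linalg.solve(A, rhs)                           # collocation coefficients
u = phi @ c
err = np.max(np.abs(u - np.sin(np.pi * x)))
```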

  2. Efficient numerical method for analyzing optical bistability in photonic crystal microcavities.

    PubMed

    Yuan, Lijun; Lu, Ya Yan

    2013-05-20

    Nonlinear optical effects can be enhanced by photonic crystal microcavities and be used to develop practical ultra-compact optical devices with low power requirements. The finite-difference time-domain method is the standard numerical method for simulating nonlinear optical devices, but it has limitations in terms of accuracy and efficiency. In this paper, a rigorous and efficient frequency-domain numerical method is developed for analyzing nonlinear optical devices where the nonlinear effect is concentrated in the microcavities. The method replaces the linear problem outside the microcavities by a rigorous and numerically computed boundary condition, then solves the nonlinear problem iteratively in a small region around the microcavities. Convergence of the iterative method is much easier to achieve since the size of the problem is significantly reduced. The method is presented for a specific two-dimensional photonic crystal waveguide-cavity system with a Kerr nonlinearity, using numerical methods that can take advantage of the geometric features of the structure. The method is able to calculate multiple solutions exhibiting the optical bistability phenomenon in the strongly nonlinear regime.

  3. Extracting concrete thermal characteristics from temperature time history of RC column exposed to standard fire.

    PubMed

    Kim, Jung J; Youm, Kwang-Soo; Reda Taha, Mahmoud M

    2014-01-01

    A numerical method to identify thermal conductivity from time history of one-dimensional temperature variations in thermal unsteady-state is proposed. The numerical method considers the change of specific heat and thermal conductivity with respect to temperature. Fire test of reinforced concrete (RC) columns was conducted using a standard fire to obtain time history of temperature variations in the column section. A thermal equilibrium model in unsteady-state condition was developed. The thermal conductivity of concrete was then determined by optimizing the numerical solution of the model to meet the observed time history of temperature variations. The determined thermal conductivity with respect to temperature was then verified against standard thermal conductivity measurements of concrete bricks. It is concluded that the proposed method can be used to conservatively estimate thermal conductivity of concrete for design purpose. Finally, the thermal radiation properties of concrete for the RC column were estimated from the thermal equilibrium at the surface of the column. The radiant heat transfer ratio of concrete representing absorptivity to emissivity ratio of concrete during fire was evaluated and is suggested as a concrete criterion that can be used in fire safety assessment.
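
    The identification idea can be sketched with synthetic data: simulate a 1-D unsteady model with a known diffusivity, then recover it by minimizing the misfit of the same model over candidate values (the paper fits temperature-dependent conductivity to fire-test data; the constant diffusivity and all parameters here are illustrative):

```python
import numpy as np

def simulate(alpha, nx=21, nt=400, dt=2.5e-4):
    """Explicit 1-D unsteady conduction; returns the mid-depth temperature history."""
    dx = 1.0 / (nx - 1)
    u = np.zeros(nx)
    u[0] = 1.0                                     # heated boundary, held fixed
    history = []
    for _ in range(nt):
        u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
        history.append(u[nx // 2])                 # mid-depth "thermocouple"
    return np.array(history)

measured = simulate(0.8)                           # synthetic data, true alpha = 0.8
candidates = np.linspace(0.2, 1.4, 61)
misfit = [np.sum((simulate(a) - measured)**2) for a in candidates]
alpha_hat = candidates[np.argmin(misfit)]          # identified diffusivity
```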

  4. Extracting Concrete Thermal Characteristics from Temperature Time History of RC Column Exposed to Standard Fire

    PubMed Central

    2014-01-01

    A numerical method to identify thermal conductivity from time history of one-dimensional temperature variations in thermal unsteady-state is proposed. The numerical method considers the change of specific heat and thermal conductivity with respect to temperature. Fire test of reinforced concrete (RC) columns was conducted using a standard fire to obtain time history of temperature variations in the column section. A thermal equilibrium model in unsteady-state condition was developed. The thermal conductivity of concrete was then determined by optimizing the numerical solution of the model to meet the observed time history of temperature variations. The determined thermal conductivity with respect to temperature was then verified against standard thermal conductivity measurements of concrete bricks. It is concluded that the proposed method can be used to conservatively estimate thermal conductivity of concrete for design purpose. Finally, the thermal radiation properties of concrete for the RC column were estimated from the thermal equilibrium at the surface of the column. The radiant heat transfer ratio of concrete representing absorptivity to emissivity ratio of concrete during fire was evaluated and is suggested as a concrete criterion that can be used in fire safety assessment. PMID:25180197

  5. Dynamic Environmental Qualification Techniques.

    DTIC Science & Technology

    1981-12-01

    environments peculiar to military operations and requirements. Numerous dynamic qualification test methods have been established. It was the purpose...requires the achievement of the highest practicable degree in the standardization of items, materials and engineering practices within the...standard is described as "A document that establishes engineering and technical requirements for processes, procedures, practices and methods that have

  6. High order volume-preserving algorithms for relativistic charged particles in general electromagnetic fields

    NASA Astrophysics Data System (ADS)

    He, Yang; Sun, Yajuan; Zhang, Ruili; Wang, Yulei; Liu, Jian; Qin, Hong

    2016-09-01

    We construct high order symmetric volume-preserving methods for the relativistic dynamics of a charged particle by the splitting technique with processing. By expanding the phase space to include the time t, we give a more general construction of volume-preserving methods that can be applied to systems with time-dependent electromagnetic fields. The newly derived methods provide numerical solutions with good accuracy and conservative properties over long simulation times. Furthermore, because of the use of an accuracy-enhancing processing technique, the explicit methods attain high-order accuracy and are more efficient than the methods derived from standard compositions. The results are verified by numerical experiments. Linear stability analysis of the methods shows that the high order processed method allows a larger time step size in numerical integrations.
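
    A generic illustration of a symmetric splitting composition, Strang splitting for the harmonic oscillator, shows the long-time conservation behaviour such methods are prized for (the paper composes volume-preserving flows for relativistic particle dynamics and adds a processing stage; this toy is only the splitting idea):

```python
import numpy as np

def strang(q, p, dt, n):
    """Symmetric (Strang) splitting for q' = p, p' = -q: kick-drift-kick."""
    for _ in range(n):
        p -= 0.5 * dt * q     # half kick
        q += dt * p           # full drift
        p -= 0.5 * dt * q     # half kick
    return q, p

q, p = 1.0, 0.0
q, p = strang(q, p, 0.01, 10_000)                 # integrate to t = 100
energy_drift = abs(0.5 * (q * q + p * p) - 0.5)   # exact energy is 1/2
```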

  7. The h-p Version of the Finite Element Method for Problems with Nonhomogeneous Essential Boundary Condition.

    DTIC Science & Technology

    1987-09-01

    one commercial code based on the p and h-p version of the finite element method, the program PROBE of NOETIC Technologies (St. Louis, MO). PROBE deals with two...Standards. o To be an international center of study and research for foreign students in numerical mathematics who are supported by foreign governments or government agencies such as the National Bureau of Standards.

  8. A methodology for highway asset valuation in Indiana.

    DOT National Transportation Integrated Search

    2012-11-01

    The Government Accounting Standards Board (GASB) requires transportation agencies to report the values of their tangible assets. : Numerous valuation methods exist which use different underlying concepts and data items. These traditional methods have...

  9. A modified form of conjugate gradient method for unconstrained optimization problems

    NASA Astrophysics Data System (ADS)

    Ghani, Nur Hamizah Abdul; Rivaie, Mohd.; Mamat, Mustafa

    2016-06-01

    Conjugate gradient (CG) methods have been recognized as an attractive technique for solving optimization problems, owing to their numerical efficiency, simplicity and low memory requirements. In this paper, we propose a new CG method based on the study of Rivaie et al. [7] (Comparative study of conjugate gradient coefficient for unconstrained optimization, Aus. J. Bas. Appl. Sci. 5 (2011) 947-951). We then show that our method satisfies the sufficient descent condition and converges globally under exact line search. Numerical results show that our proposed method is efficient on the given standard test problems compared to other existing CG methods.
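    As a rough illustration of the kind of iteration discussed, here is a nonlinear CG sketch with a Rivaie-type coefficient beta_k = g_k^T(g_k - g_{k-1}) / ||d_{k-1}||^2 (modeled on, but not necessarily identical to, the coefficient of [7]), applied to a quadratic objective where the exact line search has a closed form.

```python
import numpy as np

def cg_rmil_quadratic(A, b, x0, tol=1e-10, max_iter=200):
    """Nonlinear CG for f(x) = 0.5 x^T A x - b^T x (A symmetric positive
    definite), with a Rivaie-type coefficient (illustrative; see Rivaie
    et al. for the exact formula) and the exact line search available
    for quadratics."""
    x = x0.astype(float)
    g = A @ x - b                        # gradient of f
    d = -g                               # initial steepest-descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ (A @ d))     # exact line search step
        x = x + alpha * d
        g_new = A @ x - b
        # Rivaie-type coefficient, truncated at zero (restart) if negative
        beta = max(0.0, (g_new @ (g_new - g)) / (d @ d))
        d = -g_new + beta * d
        g = g_new
    return x
```

    Because the exact line search makes g_new orthogonal to d, each new direction is a descent direction, which is the sufficient descent property the abstract refers to.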

  10. Immersed boundary-simplified lattice Boltzmann method for incompressible viscous flows

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Shu, C.; Tan, D.

    2018-05-01

    An immersed boundary-simplified lattice Boltzmann method is developed in this paper for simulations of two-dimensional incompressible viscous flows with immersed objects. Assisted by the fractional step technique, the problem is resolved in a predictor-corrector scheme. The predictor step solves the flow field without considering immersed objects, and the corrector step imposes the effect of immersed boundaries on the velocity field. Unlike the previous immersed boundary-lattice Boltzmann method, which adopts the standard lattice Boltzmann method (LBM) as the flow solver in the predictor step, a recently developed simplified lattice Boltzmann method (SLBM) is applied in the present method to evaluate intermediate flow variables. Compared to the standard LBM, SLBM requires less virtual memory, facilitates the implementation of physical boundary conditions, and shows better numerical stability. The boundary condition-enforced immersed boundary method, which accurately ensures no-slip boundary conditions, is implemented as the boundary solver in the corrector step. Four typical numerical examples are presented to demonstrate the stability, flexibility, and accuracy of the present method.

  11. WATSFAR: numerical simulation of soil WATer and Solute fluxes using a FAst and Robust method

    NASA Astrophysics Data System (ADS)

    Crevoisier, David; Voltz, Marc

    2013-04-01

    To simulate the evolution of hydro- and agro-systems, numerous spatialised models are based on a multi-local approach, and data-assimilation techniques are now used in many application fields to improve simulation accuracy. The latest acquisition techniques provide a large amount of experimental data, which increases the efficiency of parameter estimation and inverse modelling approaches. In turn, simulations are often run over large temporal and spatial domains, which requires a large number of model runs. Thus, despite the regular increase in computing capacities, the development of fast and robust methods describing the evolution of saturated-unsaturated soil water and solute fluxes is still a challenge. Ross (2003, Agron J; 95:1352-1361) proposed a method, solving the 1D Richards and convection-diffusion equations, that fulfils these requirements. The method is based on a non-iterative approach which reduces the risk of numerical divergence and allows the use of coarser spatial and temporal discretisations, while ensuring satisfactory accuracy of the results. Crevoisier et al. (2009, Adv Wat Res; 32:936-947) proposed some technical improvements and validated this method on a wider range of agro-pedo-climatic situations. In this poster, we present the simulation code WATSFAR, which generalises the Ross method to other mathematical representations of the soil water retention curve (i.e. the standard and modified van Genuchten models) and includes a dual-permeability context (preferential fluxes) for both water and solute transfers. The situations tested are those known to be the least favourable for standard numerical methods: fine-textured and extremely dry soils, intense rainfall and solute fluxes, soils near saturation, ... The results of WATSFAR have been compared with the standard finite element model Hydrus. The analysis of these comparisons highlights two main advantages of WATSFAR: i) robustness, since even on fine-textured soils or with high water and solute fluxes, where Hydrus simulations may fail to converge, no numerical problem appears; and ii) accuracy even for coarse spatial discretisations, which Hydrus can only match with fine discretisations.
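    The retention model named in the abstract can be made concrete. Below is a sketch of the standard van Genuchten (1980) curve; the parameter values in the test are illustrative loam-like numbers, not taken from the WATSFAR study.

```python
import numpy as np

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Standard van Genuchten (1980) water retention curve:
        theta(h) = theta_r + (theta_s - theta_r) * [1 + (alpha*|h|)^n]^(-m),
    with m = 1 - 1/n, for pressure head h < 0 (unsaturated);
    theta = theta_s at and above saturation (h >= 0)."""
    h = np.asarray(h, dtype=float)
    m = 1.0 - 1.0 / n
    Se = np.where(h < 0.0, (1.0 + (alpha * np.abs(h)) ** n) ** (-m), 1.0)
    return theta_r + (theta_s - theta_r) * Se
```

    The curve is bounded between the residual and saturated water contents and increases monotonically toward saturation, which is what makes very dry, fine-textured soils (steep curve regions) numerically demanding.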

  12. Numerical modeling of the acoustic wave propagation across a homogenized rigid microstructure in the time domain

    NASA Astrophysics Data System (ADS)

    Lombard, Bruno; Maurel, Agnès; Marigo, Jean-Jacques

    2017-04-01

    Homogenization of a thin micro-structure yields effective jump conditions that incorporate the geometrical features of the scatterers. These jump conditions apply across a thin but nonzero-thickness interface whose interior is disregarded. This paper aims (i) to propose a numerical method able to handle the jump conditions in order to simulate the homogenized problem in the time domain, and (ii) to inspect the validity of the homogenized problem when compared to the real one. For this purpose, we adapt the Explicit Simplified Interface Method, originally developed for standard jump conditions across a zero-thickness interface. Doing so allows us to handle arbitrarily shaped interfaces on a Cartesian grid with the same efficiency and accuracy as the numerical scheme achieves in a homogeneous medium. Numerical experiments are performed to test the properties of the numerical method and to inspect the validity of the homogenization problem.

  13. 75 FR 15440 - Guidance for Industry on Standards for Securing the Drug Supply Chain-Standardized Numerical...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-29

    ...] Guidance for Industry on Standards for Securing the Drug Supply Chain--Standardized Numerical... industry entitled ``Standards for Securing the Drug Supply Chain-Standardized Numerical Identification for... the Drug Supply Chain-Standardized Numerical Identification for Prescription Drug Packages.'' In the...

  14. Multilevel Monte Carlo and improved timestepping methods in atmospheric dispersion modelling

    NASA Astrophysics Data System (ADS)

    Katsiolides, Grigoris; Müller, Eike H.; Scheichl, Robert; Shardlow, Tony; Giles, Michael B.; Thomson, David J.

    2018-02-01

    A common way to simulate the transport and spread of pollutants in the atmosphere is via stochastic Lagrangian dispersion models. Mathematically, these models describe turbulent transport processes with stochastic differential equations (SDEs). The computational bottleneck is the Monte Carlo algorithm, which simulates the motion of a large number of model particles in a turbulent velocity field; for each particle, a trajectory is calculated with a numerical timestepping method. Choosing an efficient numerical method is particularly important in operational emergency-response applications, such as tracking radioactive clouds from nuclear accidents or predicting the impact of volcanic ash clouds on international aviation, where accurate and timely predictions are essential. In this paper, we investigate the application of the Multilevel Monte Carlo (MLMC) method to simulate the propagation of particles in a representative one-dimensional dispersion scenario in the atmospheric boundary layer. MLMC can be shown to result in asymptotically superior computational complexity and reduced computational cost when compared to the Standard Monte Carlo (StMC) method, which is currently used in atmospheric dispersion modelling. To reduce the absolute cost of the method also in the non-asymptotic regime, it is equally important to choose the best possible numerical timestepping method on each level. To investigate this, we also compare the standard symplectic Euler method, which is used in many operational models, with two improved timestepping algorithms based on SDE splitting methods.
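    The MLMC telescoping idea described above can be sketched on a toy SDE. The following estimates E[S_T] for geometric Brownian motion with Euler-Maruyama timestepping, coupling fine and coarse paths through shared Brownian increments so that the level corrections have small variance. The model, parameters and sample counts are illustrative assumptions, not the dispersion model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlmc_gbm_mean(S0=1.0, mu=0.05, sigma=0.2, T=1.0, L=4, N=20000):
    """Multilevel Monte Carlo estimate of E[S_T] for
    dS = mu*S dt + sigma*S dW, using Euler-Maruyama with 2^l steps on
    level l.  The estimator telescopes: E[P_L] = E[P_0] + sum E[P_l - P_{l-1}]."""
    est = 0.0
    for l in range(L + 1):
        nf = 2 ** l
        dtf = T / nf
        dW = rng.normal(0.0, np.sqrt(dtf), size=(N, nf))
        Sf = np.full(N, S0)
        for k in range(nf):                      # fine path
            Sf = Sf + mu * Sf * dtf + sigma * Sf * dW[:, k]
        if l == 0:
            est += Sf.mean()                     # base level: plain MC
        else:
            nc = nf // 2
            dtc = T / nc
            dWc = dW[:, 0::2] + dW[:, 1::2]      # coarsened increments
            Sc = np.full(N, S0)
            for k in range(nc):                  # coupled coarse path
                Sc = Sc + mu * Sc * dtc + sigma * Sc * dWc[:, k]
            est += (Sf - Sc).mean()              # telescoping correction
    return est
```

    Because most samples are spent on the cheap coarse levels while the expensive fine levels only estimate small corrections, the overall cost for a given accuracy is lower than for standard Monte Carlo, which is the point made in the abstract.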

  15. Advanced numerical methods for three dimensional two-phase flow calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toumi, I.; Caruge, D.

    1997-07-01

    This paper is devoted to new numerical methods developed for both one- and three-dimensional two-phase flow calculations. These are finite volume methods based on Approximate Riemann Solver concepts to define convective fluxes in terms of mean cell quantities. The first part of the paper presents the numerical method for a one-dimensional hyperbolic two-fluid model including differential terms such as added mass and interface pressure. This numerical solution scheme makes use of the Riemann problem solution to define backward and forward differencing to approximate spatial derivatives. The construction of this approximate Riemann solver uses an extension of Roe's method that has been successfully used to solve gas dynamics equations. Since the two-fluid model is hyperbolic, this numerical method seems very efficient for the numerical solution of two-phase flow problems. The scheme was applied both to shock tube problems and to standard tests for two-fluid computer codes. The second part describes the numerical method in the three-dimensional case. The authors also discuss some improvements performed to obtain a fully implicit solution method that provides fast-running steady-state calculations. Such a scheme is now implemented in a thermal-hydraulic computer code devoted to 3-D steady-state and transient computations. Some results obtained for Pressurised Water Reactors concerning upper plenum calculations and a steady-state flow in the core with rod bow effect evaluation are presented. In practice these new numerical methods have proved to be stable on non-staggered grids and capable of generating accurate, non-oscillating solutions for two-phase flow calculations.

  16. Comparison of Speed-Up Over Hills Derived from Wind-Tunnel Experiments, Wind-Loading Standards, and Numerical Modelling

    NASA Astrophysics Data System (ADS)

    Safaei Pirooz, Amir A.; Flay, Richard G. J.

    2018-03-01

    We evaluate the accuracy of the speed-up provided in several wind-loading standards by comparison with wind-tunnel measurements and numerical predictions, carried out at a nominal scale of 1:500 and at full scale, respectively. Airflow over two- and three-dimensional bell-shaped hills is numerically modelled using the Reynolds-averaged Navier-Stokes method with a pressure-driven atmospheric boundary layer and three different turbulence models. The effects of grid size on the speed-up and flow separation are investigated in detail, as well as the resulting uncertainties in the numerical simulations. Good agreement is obtained between the numerical predictions of speed-up, wake region size and location, and those from large-eddy simulations and the wind-tunnel results. The numerical results demonstrate the ability to predict the airflow over a hill with good accuracy in considerably less computational time than large-eddy simulation. Numerical simulations for a three-dimensional hill show that the speed-up and the wake region decrease significantly compared with the flow over two-dimensional hills, owing to the secondary flow around three-dimensional hills. Different hill slopes and shapes are simulated numerically to investigate the effect of hill profile on the speed-up. Compared with more peaked hill crests, flat-topped hills have a lower speed-up at the crest up to heights of about half the hill height, for which none of the standards gives entirely satisfactory values of speed-up. Overall, the latest versions of the National Building Code of Canada and the Australian and New Zealand Standard give the best predictions of wind speed over isolated hills.

  17. Use of benefit-cost analysis in establishing Federal radiation protection standards: a review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, L.E.

    1979-10-01

    This paper complements other work which has evaluated the cost impacts of radiation standards on the nuclear industry. It focuses on the approaches to valuation of the health and safety benefits of radiation standards and the actual and appropriate processes of benefit-cost comparison. A brief historical review of the rationale(s) for the levels of radiation standards prior to 1970 is given. The Nuclear Regulatory Commission (NRC) established numerical design objectives for light water reactors (LWRs). The process of establishing these numerical design criteria below the radiation protection standards set in 10 CFR 20 is reviewed. EPA's 40 CFR 190 environmental standards for the uranium fuel cycle have lower values than NRC's radiation protection standards in 10 CFR 20. The task of allocating EPA's 40 CFR 190 standards to the various portions of the fuel cycle was left to the implementing agency, NRC. So whether or not EPA's standards for the uranium fuel cycle are more stringent for LWRs than NRC's numerical design objectives depends on how EPA's standards are implemented by NRC. In setting the numerical levels in Appendix I to 10 CFR 50 and in 40 CFR 190, NRC and EPA, respectively, focused on the costs of compliance with various levels of radiation control. A major portion of the paper is devoted to a review and critique of the available methods for valuing health and safety benefits. All current approaches try to estimate a constant value of life and use this to value the expected number of lives saved. This paper argues that it is more appropriate to seek a value of a reduction in risks to health and life that varies with the extent of these risks. Additional research to do this is recommended. (DC)

  18. Dual-mode nested search method for categorical uncertain multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Tang, Long; Wang, Hu

    2016-10-01

    Categorical multi-objective optimization is an important issue involved in many matching design problems. Non-numerical variables and their uncertainty are the major challenges of such optimizations. Therefore, this article proposes a dual-mode nested search (DMNS) method. In the outer layer, kriging metamodels are established using standard regular simplex mapping (SRSM) from categorical candidates to numerical values. Assisted by the metamodels, a k-cluster-based intelligent sampling strategy is developed to search Pareto frontier points. The inner layer uses an interval number method to model the uncertainty of categorical candidates. To improve the efficiency, a multi-feature convergent optimization via most-promising-area stochastic search (MFCOMPASS) is proposed to determine the bounds of objectives. Finally, typical numerical examples are employed to demonstrate the effectiveness of the proposed DMNS method.

  19. Thermal Property Measurement of Semiconductor Melt using Modified Laser Flash Method

    NASA Technical Reports Server (NTRS)

    Lin, Bochuan; Zhu, Shen; Ban, Heng; Li, Chao; Scripa, Rosalla N.; Su, Ching-Hua; Lehoczky, Sandor L.

    2003-01-01

    This study further developed the standard laser flash method to measure multiple thermal properties of semiconductor melts. The modified method can determine the thermal diffusivity, thermal conductivity, and specific heat capacity of the melt simultaneously. The transient heat transfer process in the melt and its quartz container was studied numerically in detail. A fitting procedure, based on numerical simulation results and least root-mean-square error fitting to the experimental data, was used to extract the values of specific heat capacity, thermal conductivity and thermal diffusivity. This modified method is a step forward from the standard laser flash method, which is usually used to measure the thermal diffusivity of solids. The results for tellurium (Te) at 873 K, specific heat capacity 300.2 J/(kg K), thermal conductivity 3.50 W/(m K), and thermal diffusivity 2.04 x 10^-6 m^2/s, are within the range reported in the literature. The uncertainty analysis showed the quantitative effect of the sample geometry, the measured transient temperature, and the energy of the laser pulse.
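    The standard laser flash analysis the study starts from can be sketched: the rear-face temperature rise of an adiabatic slab after an instantaneous pulse follows a known series solution, and Parker's half-rise-time formula recovers the thermal diffusivity from it. The sample thickness and diffusivity below are illustrative assumptions (the diffusivity is chosen near the Te value quoted above).

```python
import numpy as np

def rear_face_rise(omega, terms=60):
    """Dimensionless rear-face temperature rise of an adiabatic slab after
    an instantaneous front-face heat pulse (Parker et al., 1961), where
    omega = pi^2 * alpha * t / L^2."""
    omega = np.atleast_1d(omega)
    n = np.arange(1, terms + 1)
    return 1.0 + 2.0 * ((-1.0) ** n * np.exp(-np.outer(omega, n ** 2))).sum(axis=1)

def alpha_from_half_rise(L, t_half):
    """Parker's half-rise-time formula: alpha = 0.1388 * L^2 / t_1/2."""
    return 0.1388 * L ** 2 / t_half

# self-check: generate a synthetic rise curve for a known diffusivity,
# locate the half-rise time, and recover the diffusivity
alpha_true = 2.0e-6          # m^2/s, illustrative
L_thick = 2.0e-3             # m, hypothetical sample thickness
t = np.linspace(0.02, 1.0, 20000)
V = rear_face_rise(np.pi ** 2 * alpha_true * t / L_thick ** 2)
t_half = t[np.searchsorted(V, 0.5)]
alpha_est = alpha_from_half_rise(L_thick, t_half)
```

    The modified method of the study replaces this single-property formula with a full numerical model of melt plus container, fitted to the transient, which is how the other two properties become accessible.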

  20. Fast algorithms for Quadrature by Expansion I: Globally valid expansions

    NASA Astrophysics Data System (ADS)

    Rachh, Manas; Klöckner, Andreas; O'Neil, Michael

    2017-09-01

    The use of integral equation methods for the efficient numerical solution of PDE boundary value problems requires two main tools: quadrature rules for the evaluation of layer potential integral operators with singular kernels, and fast algorithms for solving the resulting dense linear systems. Classically, these tools were developed separately. In this work, we present a unified numerical scheme based on coupling Quadrature by Expansion, a recent quadrature method, to a customized Fast Multipole Method (FMM) for the Helmholtz equation in two dimensions. The method allows the evaluation of layer potentials in linear-time complexity, anywhere in space, with a uniform, user-chosen level of accuracy as a black-box computational method. Providing this capability requires geometric and algorithmic considerations beyond the needs of standard FMMs as well as careful consideration of the accuracy of multipole translations. We illustrate the speed and accuracy of our method with various numerical examples.

  1. Structural characterization and numerical simulations of flow properties of standard and reservoir carbonate rocks using micro-tomography

    NASA Astrophysics Data System (ADS)

    Islam, Amina; Chevalier, Sylvie; Sassi, Mohamed

    2018-04-01

    With advances in imaging techniques and computational power, Digital Rock Physics (DRP) is becoming an increasingly popular tool to characterize reservoir samples and determine their internal structure and flow properties. In this work, we present the details of imaging, segmentation, and numerical simulation of single-phase flow through a standard homogeneous Silurian dolomite core plug sample as well as a heterogeneous sample from a carbonate reservoir. We develop a procedure that integrates experimental results into the segmentation step to calibrate the porosity. We also look into using two different numerical tools for the simulation, namely Avizo Fire Xlab Hydro, which solves the Stokes equations via the finite volume method, and Palabos, which solves the same equations using the Lattice Boltzmann Method. Representative Elementary Volume (REV) and isotropy studies are conducted on the two samples, and we show how DRP can be a useful tool to characterize rock properties that would be time-consuming and costly to obtain experimentally.

  2. System Simulation by Recursive Feedback: Coupling a Set of Stand-Alone Subsystem Simulations

    NASA Technical Reports Server (NTRS)

    Nixon, D. D.

    2001-01-01

    Conventional construction of digital dynamic system simulations often involves collecting the differential equations that model each subsystem, arranging them into a standard form, and obtaining their numerical solution as a single coupled, total-system simultaneous set. Simulation by numerical coupling of independent stand-alone subsimulations is a fundamentally different approach that is attractive because, among other things, the architecture naturally facilitates high fidelity, broad scope, and discipline independence. Recursive feedback is defined and discussed as a candidate approach to multidiscipline dynamic system simulation by numerical coupling of self-contained, single-discipline subsystem simulations. A satellite motion example containing three subsystems (orbit dynamics, attitude dynamics, and aerodynamics) has been defined and constructed using this approach. Conventional solution methods are used in the subsystem simulations. Distributed and centralized implementations of coupling have been considered. Numerical results are evaluated by direct comparison with a standard total-system, simultaneous-solution approach.
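    A minimal sketch of the recursive-feedback coupling idea as we read it (not the paper's satellite implementation): two stand-alone "subsystem" simulations for x' = -y and y' = x are each integrated against the other's most recent output trajectory, and the sweep is repeated until the coupled solution converges to the simultaneous one.

```python
import numpy as np

def integrate_sub(drive, z0, t, sign):
    """Stand-alone 'subsystem' simulation: explicit Euler for
    z'(t) = sign * drive(t), where the other subsystem's output is
    supplied as a fixed input trajectory."""
    z = np.empty_like(t)
    z[0] = z0
    dt = t[1] - t[0]
    for k in range(len(t) - 1):
        z[k + 1] = z[k] + dt * sign * drive[k]
    return z

def recursive_feedback(T=1.0, n=2000, sweeps=20):
    """Couple x' = -y (x(0)=1) and y' = x (y(0)=0) by repeatedly
    re-running each subsystem against the other's latest trajectory."""
    t = np.linspace(0.0, T, n + 1)
    x = np.ones_like(t)      # initial guesses: constant trajectories
    y = np.zeros_like(t)
    for _ in range(sweeps):
        x = integrate_sub(y, 1.0, t, -1.0)   # x driven by current y
        y = integrate_sub(x, 0.0, t, +1.0)   # y driven by updated x
    return t, x, y
```

    After enough sweeps the iterate converges to the coupled solution (cos t, sin t) up to the Euler discretization error, mirroring the paper's comparison against a simultaneous total-system solve.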

  3. From stochastic processes to numerical methods: A new scheme for solving reaction subdiffusion fractional partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angstmann, C.N.; Donnelly, I.C.; Henry, B.I., E-mail: B.Henry@unsw.edu.au

    We have introduced a new explicit numerical method, based on a discrete stochastic process, for solving a class of fractional partial differential equations that model reaction subdiffusion. The scheme is derived from the master equations for the evolution of the probability density of a sum of discrete time random walks. We show that the diffusion limit of the master equations recovers the fractional partial differential equation of interest. This limiting procedure guarantees the consistency of the numerical scheme. The positivity of the solution and stability results are simply obtained, provided that the underlying process is well posed. We also show that the method can be applied to standard reaction-diffusion equations. This work highlights the broader applicability of using discrete stochastic processes to provide numerical schemes for partial differential equations, including fractional partial differential equations.
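    The subdiffusive scheme requires the full memory kernel of the underlying walk, but the Markovian limit the abstract says the approach also covers (standard diffusion) is easy to sketch: the master equation of an unbiased discrete-time random walk is itself an explicit, positivity-preserving scheme for the heat equation. The grid size and jump probability below are illustrative.

```python
import numpy as np

def dtrw_diffusion(p0, r, steps):
    """Master equation of an unbiased discrete-time random walk:
        p_i^{n+1} = (1 - r) p_i^n + (r/2)(p_{i-1}^n + p_{i+1}^n),
    an explicit scheme for u_t = D u_xx with D = r dx^2 / (2 dt).
    For 0 < r <= 1, positivity and total probability are preserved
    by construction (np.roll imposes periodic boundaries)."""
    p = p0.copy()
    for _ in range(steps):
        p = (1.0 - r) * p + 0.5 * r * (np.roll(p, 1) + np.roll(p, -1))
    return p
```

    Starting from a point mass, the variance grows exactly as steps * r (in grid units), the discrete counterpart of linear-in-time diffusive spreading; the subdiffusive scheme replaces the memoryless update by a weighted sum over the full history.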

  4. Method Development and Monitoring of Cyanotoxins in Water (ACS Central Presentation)

    EPA Science Inventory

    Increasing occurrence of cyanobacterial harmful algal blooms (HABs) in ambient waters has become a worldwide concern. Numerous cyanotoxins can be produced during HAB events which are toxic to animals and humans. Validated standardized methods that are rugged, selective and sensit...

  5. Peelle's pertinent puzzle using the Monte Carlo technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawano, Toshihiko; Talou, Patrick; Burr, Thomas

    2009-01-01

    We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take arbitrary forms, so as to assess the impact of the assumed distribution, and obtain the least-squares solution directly from numerical simulations. We found that the standard least squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer of PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct if the common error is additive and the error is proportional to the measured values. The least squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.
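    The puzzle itself is easy to reproduce with the classic numbers (two measurements of the same quantity, 1.5 and 1.0, with 10% independent and 20% fully correlated errors; these standard values are an assumption here, since the abstract does not restate them). Building the covariance from the measured values, the treatment discussed above, and applying generalized least squares yields the counter-intuitive 0.88:

```python
import numpy as np

# Peelle's classic setup: x1 = 1.5, x2 = 1.0, each with a 10% independent
# error plus a fully correlated 20% common error, with all uncertainties
# evaluated at the *measured* values
x = np.array([1.5, 1.0])
V = np.diag((0.10 * x) ** 2) + np.outer(0.20 * x, 0.20 * x)

# standard generalized least squares (weighted average with covariance V)
Vinv = np.linalg.inv(V)
one = np.ones(2)
mu = (one @ Vinv @ x) / (one @ Vinv @ one)
# mu falls below both measurements -- the "puzzle"
```

    The estimate ends up below both data points, which is the pathology the paper traces back to the assumed joint distribution of the errors rather than to least squares itself.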

  6. New insight in spiral drawing analysis methods - Application to action tremor quantification.

    PubMed

    Legrand, André Pierre; Rivals, Isabelle; Richard, Aliénor; Apartis, Emmanuelle; Roze, Emmanuel; Vidailhet, Marie; Meunier, Sabine; Hainque, Elodie

    2017-10-01

    Spiral drawing is one of the standard tests used to assess tremor severity in the clinical evaluation of medical treatments. Tremor severity is estimated through visual rating of the drawings by movement disorders experts. Different approaches based on mathematical signal analysis of recorded spiral drawings have been proposed to replace this rater-dependent estimate. The objective of the present study is to propose new numerical methods and to evaluate them in terms of agreement with visual rating and reproducibility. Series of spiral drawings of patients with essential tremor were visually rated by a board of experts. In addition to the usual velocity analysis, three new numerical methods were tested and compared, namely static and dynamic unraveling, and empirical mode decomposition. The reproducibility of both visual and numerical ratings was estimated, and their agreement was evaluated. The statistical analysis demonstrated excellent agreement between visual and numerical ratings, and more reproducible results with numerical methods than with visual ratings. The velocity method and the new numerical methods are in good agreement. Among the latter, static and dynamic unraveling both display a smaller dispersion and are easier to analyse automatically. The reliable scores obtained with the proposed numerical methods suggest that their implementation on a digitizing tablet, whether connected to a computer or stand-alone, provides an efficient automatic tool for tremor severity assessment. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
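    A toy version of the "static unraveling" idea as we read it (the paper's exact algorithm may differ): convert the drawing to polar form, detrend radius versus angle, and score tremor by the dispersion of the residual. The spiral and the injected tremor below are synthetic.

```python
import numpy as np

def tremor_score(theta, r):
    """Fit the smooth spiral trend r ~ a*theta + b and return the standard
    deviation of the residual as a crude tremor severity score."""
    a, b = np.polyfit(theta, r, 1)
    return np.std(r - (a * theta + b))

# synthetic Archimedean spiral with an injected oscillatory 'tremor'
theta = np.linspace(0.0, 20.0 * np.pi, 5000)
r = 0.5 * theta + 0.1 * np.sin(30.0 * theta)
score = tremor_score(theta, r)
```

    For a pure sinusoidal tremor of amplitude A riding on a linear trend, the score approaches A / sqrt(2), so the residual dispersion tracks tremor amplitude directly; a tremor-free spiral scores near zero.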

  7. A Christoffel function weighted least squares algorithm for collocation approximations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayan, Akil; Jakeman, John D.; Zhou, Tao

    Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.
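    On [-1, 1] with Legendre polynomials, the procedure described above can be sketched concretely: the equilibrium measure is the arcsine (Chebyshev) distribution, and the weights are evaluations of the Christoffel function of the orthonormal basis. This is an illustrative reading with an arbitrary weight normalization; see the paper for the precise formulation.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)

def christoffel_weighted_lsq(f, deg, n_samples):
    """Sample from the arcsine (equilibrium) measure on [-1, 1], weight each
    sample by the Christoffel function 1 / sum_k p_k(x)^2 of the orthonormal
    Legendre basis, and solve the weighted least-squares problem.
    Returns a callable evaluating the fitted polynomial."""
    x = np.cos(np.pi * rng.random(n_samples))        # arcsine-distributed samples
    # orthonormal-Legendre Vandermonde: p_k = sqrt(2k+1) * P_k
    scale = np.sqrt(2.0 * np.arange(deg + 1) + 1.0)
    V = legendre.legvander(x, deg) * scale
    w = (deg + 1) / np.sum(V ** 2, axis=1)           # Christoffel-function weights
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(V * sw[:, None], f(x) * sw, rcond=None)
    return lambda t: legendre.legval(t, coef * scale)
```

    When the target lies in the polynomial span, the weighted solve recovers it exactly; the paper's contribution is that this sampling/weighting pair keeps the least-squares problem well conditioned with near-optimally few samples.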

  8. A Christoffel function weighted least squares algorithm for collocation approximations

    DOE PAGES

    Narayan, Akil; Jakeman, John D.; Zhou, Tao

    2016-11-28

    Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.

  9. Tempest - Efficient Computation of Atmospheric Flows Using High-Order Local Discretization Methods

    NASA Astrophysics Data System (ADS)

    Ullrich, P. A.; Guerra, J. E.

    2014-12-01

    The Tempest Framework composes several compact numerical methods to easily facilitate intercomparison of atmospheric flow calculations on the sphere and in rectangular domains. This framework includes the implementations of Spectral Elements, Discontinuous Galerkin, Flux Reconstruction, and Hybrid Finite Element methods with the goal of achieving optimal accuracy in the solution of atmospheric problems. Several advantages of this approach are discussed such as: improved pressure gradient calculation, numerical stability by vertical/horizontal splitting, arbitrary order of accuracy, etc. The local numerical discretization allows for high performance parallel computation and efficient inclusion of parameterizations. These techniques are used in conjunction with a non-conformal, locally refined, cubed-sphere grid for global simulations and standard Cartesian grids for simulations at the mesoscale. A complete implementation of the methods described is demonstrated in a non-hydrostatic setting.

  10. Discrete conservation laws and the convergence of long time simulations of the mKdV equation

    NASA Astrophysics Data System (ADS)

    Gorria, C.; Alejo, M. A.; Vega, L.

    2013-02-01

    Pseudospectral collocation methods and finite difference methods have been used to approximate an important family of soliton-like solutions of the mKdV equation. These solutions present a structural instability which makes it difficult to approximate their evolution over long time intervals with sufficient accuracy. Standard numerical methods do not guarantee convergence to the proper solution of the initial value problem and often fail by approaching solutions associated with different initial conditions. In this framework, numerical schemes that preserve the discrete invariants related to some conservation laws of this equation produce better results than methods which only pursue a high consistency order. Pseudospectral spatial discretization appears to be the most robust of the numerical methods, but finite difference schemes are useful for analysing the role played by the conservation of the invariants in the convergence.

  11. Analysis of Indonesian educational system standard with KSIM cross-impact method

    NASA Astrophysics Data System (ADS)

    Arridjal, F.; Aldila, D.; Bustamam, A.

    2017-07-01

    The results of the Programme for International Student Assessment (PISA) in 2012 show that Indonesia ranked 64th out of 65 countries in mean mathematics score. In the 2013 Learning Curve mapping, Indonesia is included in the lowest-performing category of countries on the cognitive skills aspect, i.e. 37th out of 40 countries. Competency is built from 3 aspects, one of which is the cognitive aspect. The low mapping results on the cognitive aspect reflect the low competency of graduates, an output of the Indonesian National Education System (INES). INES adopts the concept of Eight Educational System Standards (EESS), one of which is the graduate competency standard, connected directly with Indonesia's students. This research aims to model INES using KSIM cross-impact. Linear regression models of the EESS are constructed using the national accreditation data of senior high schools in Indonesia. The results are then interpreted as impact values in the construction of the KSIM cross-impact model of INES. The construction is used to analyse the interaction of the EESS and to run numerical simulations of possible public policies in the education sector, i.e. stimulating the growth of the education staff, content, process and infrastructure standards. All public policy simulations have been carried out with two methods, i.e. a multiplier impact method and a constant intervention method. The numerical simulation results show that stimulating the growth of the content standard in the KSIM cross-impact construction of the EESS is the best public policy option for maximizing the growth of the graduate competency standard.
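    For reference, Kane's KSIM cross-impact update rule, as commonly stated (the authors' exact construction may differ), keeps every variable in (0, 1) by raising it to a state-dependent power:

```python
import numpy as np

def ksim_step(x, A, dt=0.1):
    """One KSIM step (Kane, 1972, as commonly stated): each variable
    x_i in (0, 1) is updated as x_i ** p_i with
        p_i = (1 + dt/2 * sum_j (|a_ij| - a_ij) x_j)
            / (1 + dt/2 * sum_j (|a_ij| + a_ij) x_j),
    so positive impacts (a_ij > 0) push x_i upward (p_i < 1), negative
    impacts push it downward, and the (0, 1) bounds hold automatically."""
    neg = 0.5 * dt * (np.abs(A) - A) @ x
    pos = 0.5 * dt * (np.abs(A) + A) @ x
    p = (1.0 + neg) / (1.0 + pos)
    return x ** p
```

    In the paper's setting each x_i would be a (normalized) educational standard level and A the estimated cross-impact matrix from the regression models; here the values are hypothetical.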

  12. HEASD PM RESEARCH METHODS: PARTICLE METHODS EVALUATION AND DEVELOPMENT

    EPA Science Inventory

    The FRM developed by NERL forms the backbone of the EPA's national monitoring strategy. It is the measurement that defines attainment of the new standard. However, the agency has numerous other needs in assessing the physical and chemical characteristics of ambient fine particl...

  13. Galerkin-collocation domain decomposition method for arbitrary binary black holes

    NASA Astrophysics Data System (ADS)

    Barreto, W.; Clemente, P. C. M.; de Oliveira, H. P.; Rodriguez-Mueller, B.

    2018-05-01

    We present a new computational framework for the Galerkin-collocation method for a double domain in the context of the ADM 3+1 approach in numerical relativity. This work enables us to perform high-resolution calculations for initial data sets of two arbitrary black holes. We use the Bowen-York method for binary systems and the puncture method to solve the Hamiltonian constraint. The nonlinear numerical code solves the set of equations for the spectral modes using the standard Newton-Raphson method, LU decomposition and Gaussian quadratures. We show convergence of our code for the conformal factor and the ADM mass, and we display features of the conformal factor for different masses, spins and linear momenta.

  14. Calculation of far-field scattering from nonspherical particles using a geometrical optics approach

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.

    1991-01-01

    A numerical method was developed using geometrical optics to predict far-field optical scattering from particles that are symmetric about the optic axis. The diffractive component of scattering is calculated and combined with the reflective and refractive components to give the total scattering pattern. The phase terms of the scattered light are calculated as well. Verification of the method was achieved by assuming a spherical particle and comparing the results to Mie scattering theory. Agreement with the Mie theory was excellent in the forward-scattering direction. However, small-amplitude oscillations near the rainbow regions were not observed using the numerical method. Numerical data from spheroidal particles and hemispherical particles are also presented. The use of hemispherical particles as a calibration standard for intensity-type optical particle-sizing instruments is discussed.

  15. Improved Predictions of Drug-Drug Interactions Mediated by Time-Dependent Inhibition of CYP3A.

    PubMed

    Yadav, Jaydeep; Korzekwa, Ken; Nagar, Swati

    2018-05-07

    Time-dependent inactivation (TDI) of cytochrome P450s (CYPs) is a leading cause of clinical drug-drug interactions (DDIs). Current methods tend to overpredict DDIs. In this study, a numerical approach was used to model complex CYP3A TDI in human-liver microsomes. The inhibitors evaluated included troleandomycin (TAO), erythromycin (ERY), verapamil (VER), and diltiazem (DTZ) along with the primary metabolites N-demethyl erythromycin (NDE), norverapamil (NV), and N-desmethyl diltiazem (NDD). The complexities incorporated into the models included multiple-binding kinetics, quasi-irreversible inactivation, sequential metabolism, inhibitor depletion, and membrane partitioning. The resulting inactivation parameters were incorporated into static in vitro-in vivo correlation (IVIVC) models to predict clinical DDIs. For 77 clinically observed DDIs, with a hepatic-CYP3A-synthesis-rate constant of 0.000146 min^-1, the average fold difference between the observed and predicted DDIs was 3.17 for the standard replot method and 1.45 for the numerical method. Similar results were obtained using a synthesis-rate constant of 0.00032 min^-1. These results suggest that numerical methods can successfully model complex in vitro TDI kinetics and that the resulting DDI predictions are more accurate than those obtained with the standard replot approach.
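The static DDI prediction step referred to above can be sketched with the widely used mechanistic static model for TDI, shown here in a deliberately simplified single-enzyme form with hypothetical parameter values (the paper's numerical models are considerably more complex):

```python
# Simplified mechanistic static model for a TDI-mediated DDI. kobs is the
# inactivation rate at inhibitor concentration I; the AUC ratio follows from
# the reduced steady-state enzyme level, with fm the fraction of victim-drug
# clearance through the inhibited enzyme. All parameter values are hypothetical
# illustrations except kdeg, taken at the rate-constant value quoted above.
def auc_ratio(kinact, KI, I, kdeg, fm):
    kobs = kinact * I / (KI + I)                   # inactivation rate (1/min)
    return 1.0 / (fm * kdeg / (kdeg + kobs) + (1.0 - fm))

r0 = auc_ratio(kinact=0.05, KI=1.0, I=0.0, kdeg=0.000146, fm=0.9)  # no inhibitor
r1 = auc_ratio(kinact=0.05, KI=1.0, I=0.5, kdeg=0.000146, fm=0.9)  # with inhibitor
```

With no inhibitor the ratio is exactly 1 (no interaction), and it grows with inhibitor exposure; overprediction by the standard replot method enters through the kinact/KI estimates fed into this kind of expression.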

  16. Analysis of Time Filters in Multistep Methods

    NASA Astrophysics Data System (ADS)

    Hurl, Nicholas

    Geophysical flow simulations have evolved sophisticated implicit-explicit time-stepping methods (based on fast-slow wave splittings) followed by time filters to control any unstable modes that result. Time filters are modular and parallel. Their effect on the stability of the overall process has been tested in numerous simulations, but never analyzed. Stability is proven herein, by energy methods, for the Crank-Nicolson Leapfrog (CNLF) method with the Robert-Asselin (RA) time filter and for the Crank-Nicolson Leapfrog method with the Robert-Asselin-Williams (RAW) time filter, for systems. We derive an equivalent multistep method for CNLF+RA and CNLF+RAW, and stability regions are obtained. The time-step restriction for energy stability of CNLF+RA is smaller than that of CNLF, and the CNLF+RAW restriction is smaller still. Numerical tests find that RA and RAW add numerical dissipation. This thesis also shows that all modes of the CNLF method are asymptotically stable under the standard time-step condition.
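A minimal sketch of leapfrog time stepping with the RA filter helps fix ideas (the filter parameter value below is a typical illustrative choice, not one taken from the thesis):

```python
# Leapfrog time stepping for u' = f(u) with the Robert-Asselin filter, which
# damps the spurious computational mode that plain leapfrog carries.
def leapfrog_ra(f, u0, dt, nsteps, nu=0.01):
    um, u = u0, u0 + dt * f(u0)                # bootstrap with one Euler step
    for _ in range(nsteps):
        up = um + 2.0 * dt * f(u)              # leapfrog step
        um = u + nu * (up - 2.0 * u + um)      # RA filter on the middle level
        u = up
    return u

# Oscillator test problem u' = i*u: the filtered amplitude stays bounded, with
# the slight extra dissipation that the numerical tests above report.
u_end = leapfrog_ra(lambda u: 1j * u, 1.0 + 0j, 0.01, 1000)
amp = abs(u_end)
```

The RAW variant additionally redistributes part of the filter correction to the newest time level; the analysis above shows this further tightens the stable time-step range.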

  17. Mixed-RKDG Finite Element Methods for the 2-D Hydrodynamic Model for Semiconductor Device Simulation

    DOE PAGES

    Chen, Zhangxin; Cockburn, Bernardo; Jerome, Joseph W.; ...

    1995-01-01

    In this paper we introduce a new method for numerically solving the equations of the hydrodynamic model for semiconductor devices in two space dimensions. The method combines a standard mixed finite element method, used to obtain directly an approximation to the electric field, with the so-called Runge-Kutta Discontinuous Galerkin (RKDG) method, originally devised for numerically solving multi-dimensional hyperbolic systems of conservation laws, which is applied here to the convective part of the equations. Numerical simulations showing the performance of the new method are displayed, and the results compared with those obtained by using Essentially Nonoscillatory (ENO) finite difference schemes. From the perspective of device modeling, these methods are robust, since they are capable of encompassing broad parameter ranges, including those for which shock formation is possible. The simulations presented here are for Gallium Arsenide at room temperature, but we have tested them much more generally with considerable success.

  18. Importance sampling with imperfect cloning for the computation of generalized Lyapunov exponents

    NASA Astrophysics Data System (ADS)

    Anteneodo, Celia; Camargo, Sabrina; Vallejos, Raúl O.

    2017-12-01

    We revisit the numerical calculation of generalized Lyapunov exponents, L(q), in deterministic dynamical systems. The standard method consists of adding noise to the dynamics in order to use importance sampling algorithms. Then L(q) is obtained by taking the limit noise-amplitude → 0 after the calculation. We focus on a particular method that involves periodic cloning and pruning of a set of trajectories. However, instead of considering a noisy dynamics, we implement an imperfect (noisy) cloning. This alternative method is compared with the standard one and, when possible, with analytical results. As a workbench we use the asymmetric tent map, the standard map, and a system of coupled symplectic maps. The general conclusion of this study is that the imperfect-cloning method performs as well as the standard one, with the advantage of preserving the deterministic dynamics.

  19. A no-gold-standard technique for objective assessment of quantitative nuclear-medicine imaging methods

    PubMed Central

    Jha, Abhinav K; Caffo, Brian; Frey, Eric C

    2016-01-01

    The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision.
In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest. Results showed that the proposed technique provided accurate ranking of the reconstruction methods for 97.5% of the 50 noise realizations. Further, the technique was robust to the choice of evaluated reconstruction methods. The simulation study pointed to possible violations of the assumptions made in the NGS technique under clinical scenarios. However, numerical experiments indicated that the NGS technique was robust in ranking methods even when there was some degree of such violation. PMID:26982626

  20. A no-gold-standard technique for objective assessment of quantitative nuclear-medicine imaging methods.

    PubMed

    Jha, Abhinav K; Caffo, Brian; Frey, Eric C

    2016-04-07

    The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision.
In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest. Results showed that the proposed technique provided accurate ranking of the reconstruction methods for 97.5% of the 50 noise realizations. Further, the technique was robust to the choice of evaluated reconstruction methods. The simulation study pointed to possible violations of the assumptions made in the NGS technique under clinical scenarios. However, numerical experiments indicated that the NGS technique was robust in ranking methods even when there was some degree of such violation.

  1. An improved numerical method to compute neutron/gamma deexcitation cascades starting from a high spin state

    DOE PAGES

    Regnier, D.; Litaize, O.; Serot, O.

    2015-12-23

    Numerous nuclear processes involve the deexcitation of a compound nucleus through the emission of several neutrons, gamma-rays and/or conversion electrons. The characteristics of such a deexcitation are commonly derived from a total statistical framework often called the “Hauser–Feshbach” method. In this work, we highlight a numerical limitation of this kind of method in the case of the deexcitation of a high spin initial state. To circumvent this issue, an improved technique called the Fluctuating Structure Properties (FSP) method is presented. Two FSP algorithms are derived and benchmarked on the calculation of the total radiative width for a thermal neutron capture on 238U. We compare the standard method with these FSP algorithms for the prediction of particle multiplicities in the deexcitation of a high spin level of 143Ba. The gamma multiplicity turns out to be very sensitive to the numerical method. The bias between the two techniques can reach 1.5 γ/cascade. Lastly, the uncertainty of these calculations coming from the lack of knowledge of nuclear structure is estimated via the FSP method.

  2. Efficient Implementation of the Invariant Imbedding T-Matrix Method and the Separation of Variables Method Applied to Large Nonspherical Inhomogeneous Particles

    NASA Technical Reports Server (NTRS)

    Bi, Lei; Yang, Ping; Kattawar, George W.; Mishchenko, Michael I.

    2012-01-01

    Three terms, ''Waterman's T-matrix method'', ''extended boundary condition method (EBCM)'', and ''null field method'', have been used interchangeably in the literature to indicate a method based on surface integral equations to calculate the T-matrix. Unlike these methods, the invariant imbedding method (IIM) calculates the T-matrix by the use of a volume integral equation. In addition, the standard separation of variables method (SOV) can be applied to compute the T-matrix of a sphere centered at the origin of the coordinate system and having a maximal radius such that the sphere remains inscribed within a nonspherical particle. This study explores the feasibility of a numerical combination of the IIM and the SOV, hereafter referred to as the IIM+SOV method, for computing the single-scattering properties of nonspherical dielectric particles, which are, in general, inhomogeneous. The IIM+SOV method is shown to be capable of solving light-scattering problems for large nonspherical particles where the standard EBCM fails to converge. The IIM+SOV method is flexible and applicable to inhomogeneous particles and aggregated nonspherical particles (overlapped circumscribed spheres) representing a challenge to the standard superposition T-matrix method. The IIM+SOV computational program, developed in this study, is validated against EBCM-simulated spheroid and cylinder cases with excellent numerical agreement (up to four decimal places). In addition, solutions for cylinders with large aspect ratios, inhomogeneous particles, and two-particle systems are compared with results from discrete dipole approximation (DDA) computations, and comparisons with the improved geometric-optics method (IGOM) are found to be quite encouraging.

  3. An efficient numerical technique for calculating thermal spreading resistance

    NASA Technical Reports Server (NTRS)

    Gale, E. H., Jr.

    1977-01-01

    An efficient numerical technique for solving the equations resulting from finite difference analyses of fields governed by Poisson's equation is presented. The method is direct (noniterative) and the computer work required varies with the square of the order of the coefficient matrix. The computational work required varies with the cube of this order for standard inversion techniques, e.g., Gaussian elimination, Jordan, Doolittle, etc.

  4. Computation of Nonlinear Backscattering Using a High-Order Numerical Method

    NASA Technical Reports Server (NTRS)

    Fibich, G.; Ilan, B.; Tsynkov, S.

    2001-01-01

    The nonlinear Schrodinger equation (NLS) is the standard model for propagation of intense laser beams in Kerr media. The NLS is derived from the nonlinear Helmholtz equation (NLH) by employing the paraxial approximation and neglecting the backscattered waves. In this study we use a fourth-order finite-difference method supplemented by special two-way artificial boundary conditions (ABCs) to solve the NLH as a boundary value problem. Our numerical methodology allows for a direct comparison of the NLH and NLS models and for an accurate quantitative assessment of the backscattered signal.

  5. A Comparison of Some Difference Schemes for a Parabolic Problem of Zero-Coupon Bond Pricing

    NASA Astrophysics Data System (ADS)

    Chernogorova, Tatiana; Vulkov, Lubin

    2009-11-01

    This paper describes a comparison of some numerical methods for solving a convection-diffusion equation subject to dynamical boundary conditions, which arises in zero-coupon bond pricing. The one-dimensional convection-diffusion equation is solved by using difference schemes with weights, including standard schemes such as Samarskii's monotone scheme, FTCS and the Crank-Nicolson method. The schemes are free of spurious oscillations and satisfy the positivity and maximum principles, as required for a financial and diffusive solution. Numerical results are compared with analytical solutions.
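The weighted ("theta") schemes being compared can be sketched on the pure diffusion part of such an equation: theta = 0 gives FTCS, theta = 1/2 gives Crank-Nicolson. This is a minimal sketch for the heat equation with Dirichlet conditions, not the paper's bond-pricing discretization:

```python
import math

# Thomas algorithm: solve a tridiagonal system with subdiagonal a, diagonal b,
# superdiagonal c and right-hand side d (a[0] and c[-1] are unused).
def thomas(a, b, c, d):
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# One weighted step for u_t = u_xx with homogeneous Dirichlet BCs:
# (I - theta*r*D) u^{n+1} = (I + (1-theta)*r*D) u^n, with r = dt/dx^2.
def theta_step(u, r, theta):
    n = len(u)                          # interior values only
    rhs = [(1 - 2 * (1 - theta) * r) * u[i]
           + (1 - theta) * r * ((u[i - 1] if i > 0 else 0.0)
                                + (u[i + 1] if i < n - 1 else 0.0))
           for i in range(n)]
    return thomas([-theta * r] * n, [1 + 2 * theta * r] * n, [-theta * r] * n, rhs)

# Crank-Nicolson (theta = 1/2) on a sine mode, which decays as exp(-pi^2 t).
m, dt, T = 49, 1e-3, 0.1                # 49 interior points, dx = 1/50
dx = 1.0 / (m + 1)
u = [math.sin(math.pi * (i + 1) * dx) for i in range(m)]
r = dt / dx ** 2
for _ in range(int(T / dt)):
    u = theta_step(u, r, 0.5)
err = max(abs(u[i] - math.exp(-math.pi ** 2 * T) * math.sin(math.pi * (i + 1) * dx))
          for i in range(m))
```

Here r = 2.5, well beyond the FTCS stability limit r <= 1/2, yet the implicit weighting keeps the scheme stable and second-order accurate; monotone variants such as Samarskii's scheme add upwinding for the convection term.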

  6. An Operator-Integration-Factor Splitting (OIFS) method for Incompressible Flows in Moving Domains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patel, Saumil S.; Fischer, Paul F.; Min, Misun

    In this paper, we present a characteristic-based numerical procedure for simulating incompressible flows in domains with moving boundaries. Our approach utilizes an operator-integration-factor splitting technique to help produce an efficient and stable numerical scheme. Using the spectral element method and an arbitrary Lagrangian-Eulerian formulation, we investigate flows where the convective acceleration effects are non-negligible. Several examples, ranging from laminar to turbulent flows, are considered. Comparisons with a standard, semi-implicit time-stepping procedure illustrate the improved performance of the scheme.

  7. A Novel Polygonal Finite Element Method: Virtual Node Method

    NASA Astrophysics Data System (ADS)

    Tang, X. H.; Zheng, C.; Zhang, J. H.

    2010-05-01

    The polygonal finite element method (PFEM), which can construct shape functions on polygonal elements, provides greater flexibility in mesh generation. However, the non-polynomial form of traditional PFEMs, such as the Wachspress method and the Mean Value method, leads to inexact numerical integration, since integration techniques for non-polynomial functions are immature. To overcome this shortcoming, a great number of integration points must be used to obtain sufficiently exact results, which increases the computational cost. In this paper, a novel polygonal finite element method, called the virtual node method (VNM), is proposed. The features of the present method are: (1) it is a PFEM with polynomial form, so Hammer and Gauss integration can be used naturally to obtain exact numerical integration; (2) the shape functions of the VNM satisfy all the requirements of the finite element method. To test the performance of the VNM, intensive numerical tests are carried out. It is found that, in the standard patch test, the VNM achieves significantly better results than the Wachspress method and the Mean Value method. Moreover, the VNM is observed to achieve better results than triangular 3-node elements in the accuracy test.

  8. Localization of a variational particle smoother

    NASA Astrophysics Data System (ADS)

    Morzfeld, M.; Hodyss, D.; Poterjoy, J.

    2017-12-01

    Given the success of 4D-variational methods (4D-Var) in numerical weather prediction, and recent efforts to merge ensemble Kalman filters with 4D-Var, we consider a method to merge particle methods and 4D-Var. This leads us to revisit variational particle smoothers (varPS). We study the collapse of varPS in high-dimensional problems and show how it can be prevented by weight localization. We test varPS on the Lorenz '96 model of dimensions n = 40, n = 400, and n = 2000. In our numerical experiments, weight localization prevents the collapse of the varPS, and we note that the varPS yields results comparable to ensemble formulations of 4D-variational methods, while it outperforms the EnKF with tuned localization and inflation, and the localized standard particle filter. Additional numerical experiments suggest that using localized weights in varPS may not yield significant advantages over unweighted or linearized solutions in near-Gaussian problems.

  9. Composite scheme using localized relaxation with non-standard finite difference method for hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Kumar, Vivek; Raghurama Rao, S. V.

    2008-04-01

    Non-standard finite difference methods (NSFDM) introduced by Mickens [ Non-standard Finite Difference Models of Differential Equations, World Scientific, Singapore, 1994] are interesting alternatives to the traditional finite difference and finite volume methods. When applied to linear hyperbolic conservation laws, these methods reproduce exact solutions. In this paper, the NSFDM is first extended to hyperbolic systems of conservation laws, by a novel utilization of the decoupled equations using characteristic variables. In the second part of this paper, the NSFDM is studied for its efficacy in application to nonlinear scalar hyperbolic conservation laws. The original NSFDMs introduced by Mickens (1994) were not in conservation form, which is an important feature in capturing discontinuities at the right locations. Mickens [Construction and analysis of a non-standard finite difference scheme for the Burgers-Fisher equations, Journal of Sound and Vibration 257 (4) (2002) 791-797] recently introduced a NSFDM in conservative form. This method captures the shock waves exactly, without any numerical dissipation. In this paper, this algorithm is tested for the case of expansion waves with sonic points and is found to generate unphysical expansion shocks. As a remedy to this defect, we use the strategy of composite schemes [R. Liska, B. Wendroff, Composite schemes for conservation laws, SIAM Journal of Numerical Analysis 35 (6) (1998) 2250-2271] in which the accurate NSFDM is used as the basic scheme and localized relaxation NSFDM is used as the supporting scheme which acts like a filter. Relaxation schemes introduced by Jin and Xin [The relaxation schemes for systems of conservation laws in arbitrary space dimensions, Communications in Pure and Applied Mathematics 48 (1995) 235-276] are based on relaxation systems which replace the nonlinear hyperbolic conservation laws by a semi-linear system with a stiff relaxation term. 
The relaxation parameter (λ) is chosen locally on the three-point stencil of the grid, which makes the proposed method more efficient. This composite scheme overcomes the problem of unphysical expansion shocks and captures the shock waves with an accuracy better than the upwind relaxation scheme, as demonstrated by the test cases, together with comparisons with popular numerical methods like the Roe scheme and ENO schemes.
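Why conservation form matters for shock location can be sketched with inviscid Burgers' equation: a conservative upwind flux moves a shock at the correct Rankine-Hugoniot speed s = (u_L + u_R)/2, while a nonconservative upwind update of the same PDE does not. This is a generic illustration, not Mickens' NSFD scheme itself:

```python
# Inviscid Burgers u_t + (u^2/2)_x = 0 with u_L = 1, u_R = 0: the exact shock
# speed is 1/2. For u >= 0 the upwind numerical flux is F_{i+1/2} = 0.5*u_i**2.
def shock_position(conservative, nx=200, dt=0.005, T=1.0):
    dx = 2.0 / nx
    u = [1.0 if (i + 0.5) * dx < 0.5 else 0.0 for i in range(nx)]
    for _ in range(int(T / dt)):
        if conservative:   # flux-differenced (conservation) form
            u = [u[i] - dt / dx * (0.5 * u[i] ** 2 - 0.5 * u[i - 1] ** 2)
                 if i > 0 else u[i] for i in range(nx)]
        else:              # nonconservative form u_t + u*u_x = 0, upwinded
            u = [u[i] - dt / dx * u[i] * (u[i] - u[i - 1])
                 if i > 0 else u[i] for i in range(nx)]
    # locate the shock as the first cell where u drops below 1/2
    return next((i + 0.5) * dx for i in range(nx) if u[i] < 0.5)

pos_c = shock_position(True)    # near 0.5 + 0.5*T = 1.0, the exact position
pos_nc = shock_position(False)  # the nonconservative shock barely moves
```

By the Lax-Wendroff theorem only the flux form is guaranteed to converge to a weak solution with correctly placed discontinuities, which is why Mickens' conservative NSFD variant, rather than the original non-conservative one, is used as the basic scheme in the composite.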

  10. A Kernel-free Boundary Integral Method for Elliptic Boundary Value Problems ⋆

    PubMed Central

    Ying, Wenjun; Henriquez, Craig S.

    2013-01-01

    This paper presents a class of kernel-free boundary integral (KFBI) methods for general elliptic boundary value problems (BVPs). The boundary integral equations reformulated from the BVPs are solved iteratively with the GMRES method. During the iteration, the boundary and volume integrals involving Green's functions are approximated by structured grid-based numerical solutions, which avoids the need to know the analytical expressions of Green's functions. The KFBI method assumes that the larger regular domain, which embeds the original complex domain, can be easily partitioned into a hierarchy of structured grids so that fast elliptic solvers such as the fast Fourier transform (FFT) based Poisson/Helmholtz solvers or those based on geometric multigrid iterations are applicable. The structured grid-based solutions are obtained with standard finite difference method (FDM) or finite element method (FEM), where the right hand side of the resulting linear system is appropriately modified at irregular grid nodes to recover the formal accuracy of the underlying numerical scheme. Numerical results demonstrating the efficiency and accuracy of the KFBI methods are presented. It is observed that the number of GMRES iterations used by the method for solving isotropic and moderately anisotropic BVPs is independent of the sizes of the grids that are employed to approximate the boundary and volume integrals. With the standard second-order FEMs and FDMs, the KFBI method shows a second-order convergence rate in accuracy for all of the tested Dirichlet/Neumann BVPs when the anisotropy of the diffusion tensor is not too strong. PMID:23519600

  11. An efficient numerical method for the solution of the problem of elasticity for 3D-homogeneous elastic medium with cracks and inclusions

    NASA Astrophysics Data System (ADS)

    Kanaun, S.; Markov, A.

    2017-06-01

    An efficient numerical method for the solution of static problems of elasticity for an infinite homogeneous medium containing inhomogeneities (cracks and inclusions) is developed. A finite number of heterogeneous inclusions and planar parallel cracks of arbitrary shapes is considered. The problem is reduced to a system of surface integral equations for the crack opening vectors and volume integral equations for the stress tensors inside the inclusions. For the numerical solution of these equations, a class of Gaussian approximating functions is used. The method based on these functions is mesh-free. For such functions, the elements of the matrix of the discretized system are combinations of explicit analytical functions and five standard 1D integrals that can be tabulated. Thus, numerical integration is excluded from the construction of the matrix of the discretized problem. For regular node grids, the matrix of the discretized system has the Toeplitz property, and the Fast Fourier Transform technique can be used to calculate matrix-vector products with such matrices.

  12. Numerical Computation of Subsonic Conical Diffuser Flows with Nonuniform Turbulent Inlet Conditions

    DTIC Science & Technology

    1977-09-01

    Gauss-Seidel Point Iteration Method ... 7.0 FACTORS AFFECTING THE RATE OF CONVERGENCE OF THE POINT ... can be solved in several ways. For simplicity, a standard Gauss-Seidel iteration method is used to obtain the solution. The method updates the ... FACTORS AFFECTING THE RATE OF CONVERGENCE OF THE POINT ITERATION METHOD The advantage of using the Gauss-Seidel point iteration method to

  13. Spectral collocation for multiparameter eigenvalue problems arising from separable boundary value problems

    NASA Astrophysics Data System (ADS)

    Plestenjak, Bor; Gheorghiu, Călin I.; Hochstenbach, Michiel E.

    2015-10-01

    In numerous science and engineering applications a partial differential equation has to be solved on some fairly regular domain that allows the use of the method of separation of variables. In several orthogonal coordinate systems separation of variables applied to the Helmholtz, Laplace, or Schrödinger equation leads to a multiparameter eigenvalue problem (MEP); important cases include Mathieu's system, Lamé's system, and a system of spheroidal wave functions. Although multiparameter approaches are exploited occasionally to solve such equations numerically, MEPs remain less well known, and the variety of available numerical methods is not wide. The classical approach of discretizing the equations using standard finite differences leads to algebraic MEPs with large matrices, which are difficult to solve efficiently. The aim of this paper is to change this perspective. We show that by combining spectral collocation methods and new efficient numerical methods for algebraic MEPs it is possible to solve such problems both very efficiently and accurately. We improve on several previous results available in the literature, and also present a MATLAB toolbox for solving a wide range of problems.

  14. Numerical solutions of the semiclassical Boltzmann ellipsoidal-statistical kinetic model equation

    PubMed Central

    Yang, Jaw-Yen; Yan, Chin-Yuan; Huang, Juan-Chen; Li, Zhihui

    2014-01-01

    Computations of rarefied gas dynamical flows governed by the semiclassical Boltzmann ellipsoidal-statistical (ES) kinetic model equation using an accurate numerical method are presented. The semiclassical ES model was derived through the maximum entropy principle and conserves not only the mass, momentum and energy, but also contains additional higher-order moments that differ from the standard quantum distributions. A different decoding procedure to obtain the necessary parameters for determining the ES distribution is also devised. The numerical method in phase space combines the discrete-ordinate method in momentum space and the high-resolution shock-capturing method in physical space. Numerical solutions of two-dimensional Riemann problems for two configurations covering various degrees of rarefaction are presented, and various contours of the quantities unique to this new model are illustrated. When the relaxation time becomes very small, the main flow features display behavior similar to that of ideal quantum gas dynamics, and the present solutions are found to be consistent with existing calculations for a classical gas. The effect of a parameter that permits an adjustable Prandtl number in the flow is also studied. PMID:25104904

  15. Generating partially correlated noise—A comparison of methods

    PubMed Central

    Hartmann, William M.; Cho, Yun Jin

    2011-01-01

    There are three standard methods for generating two channels of partially correlated noise: the two-generator method, the three-generator method, and the symmetric-generator method. These methods allow an experimenter to specify a target cross correlation between the two channels, but actual generated noises show statistical variability around the target value. Numerical experiments were done to compare the variability for those methods as a function of the number of degrees of freedom. The results of the experiments quantify the stimulus uncertainty in diverse binaural psychoacoustical experiments: incoherence detection, perceived auditory source width, envelopment, noise localization/lateralization, and the masking level difference. The numerical experiments found that when the elemental generators have unequal powers, the different methods all have similar variability. When the powers are constrained to be equal, the symmetric-generator method has much smaller variability than the other two. PMID:21786899
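The two-generator method mentioned above mixes two independent noise channels to hit a target cross correlation rho; the statistical variability the experiments quantify is visible in the sample correlation of any finite-length realization (a minimal sketch with Gaussian samples):

```python
import math, random

# Two-generator method: y1 = x1, y2 = rho*x1 + sqrt(1 - rho^2)*x2, where x1
# and x2 are independent unit-variance generators. Then E[y1*y2] = rho, but a
# finite-length noise pair only approximates the target correlation.
rng = random.Random(7)
rho, n = 0.6, 100000
x1 = [rng.gauss(0, 1) for _ in range(n)]
x2 = [rng.gauss(0, 1) for _ in range(n)]
y1 = x1
y2 = [rho * a + math.sqrt(1 - rho ** 2) * b for a, b in zip(x1, x2)]

def corr(u, v):
    m = len(u)
    mu, mv = sum(u) / m, sum(v) / m
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / m
    su = math.sqrt(sum((a - mu) ** 2 for a in u) / m)
    sv = math.sqrt(sum((b - mv) ** 2 for b in v) / m)
    return cov / (su * sv)

r = corr(y1, y2)   # close to, but not exactly, the target rho
```

The spread of r around rho across realizations is the stimulus uncertainty studied above; the symmetric-generator method reduces it when the generators have equal powers.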

  16. Towards standard testbeds for numerical relativity

    NASA Astrophysics Data System (ADS)

    Alcubierre, Miguel; Allen, Gabrielle; Bona, Carles; Fiske, David; Goodale, Tom; Guzmán, F. Siddhartha; Hawke, Ian; Hawley, Scott H.; Husa, Sascha; Koppitz, Michael; Lechner, Christiane; Pollney, Denis; Rideout, David; Salgado, Marcelo; Schnetter, Erik; Seidel, Edward; Shinkai, Hisa-aki; Shoemaker, Deirdre; Szilágyi, Béla; Takahashi, Ryoji; Winicour, Jeff

    2004-01-01

    In recent years, many different numerical evolution schemes for Einstein's equations have been proposed to address stability and accuracy problems that have plagued the numerical relativity community for decades. Some of these approaches have been tested on different spacetimes, and conclusions have been drawn based on these tests. However, differences in results originate from many sources, including not only formulations of the equations, but also gauges, boundary conditions, numerical methods and so on. We propose to build up a suite of standardized testbeds for comparing approaches to the numerical evolution of Einstein's equations that are designed both to probe their strengths and weaknesses and to separate out different effects, and their causes, seen in the results. We discuss general design principles of suitable testbeds, and we present an initial round of simple tests with periodic boundary conditions. This is a pivotal first step towards building a suite of testbeds to serve numerical relativists and researchers from related fields who wish to assess the capabilities of numerical relativity codes. We present some examples of how these tests can be quite effective in revealing various limitations of different approaches and illustrating their differences. The tests are presently limited to vacuum spacetimes, can be run on modest computational resources, and can be used with many different approaches in the relativity community.

  17. Domain Decomposition Algorithms for First-Order System Least Squares Methods

    NASA Technical Reports Server (NTRS)

    Pavarino, Luca F.

    1996-01-01

    Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.

  18. LASER BEAMS: On alternative methods for measuring the radius and propagation ratio of axially symmetric laser beams

    NASA Astrophysics Data System (ADS)

    Dementjev, Aleksandr S.; Jovaisa, A.; Silko, Galina; Ciegis, Raimondas

    2005-11-01

    Based on the developed efficient numerical methods for calculating the propagation of light beams, the alternative methods for measuring the beam radius and propagation ratio proposed in the international standard ISO 11146 are analysed. The specific calculations of the alternative beam propagation ratios M_i^2 performed for a number of test beams with a complicated spatial structure showed that the correlation coefficients c_i used in the international standard do not establish a universal one-to-one relation between the alternative propagation ratios M_i^2 and the invariant propagation ratios M_σ^2 found by the method of moments.
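
    The method of moments referenced at the end can be sketched directly: the beam width is computed from the second central moments of the measured intensity profile. A simplified illustration of the ISO 11146 definition, not the standard's verbatim procedure:

```python
import numpy as np

def moment_widths(intensity, x, y):
    """Beam widths from second central moments of the intensity
    profile (sketch of the ISO 11146 method-of-moments definition).
    Returns 2*sigma_x and 2*sigma_y; for a Gaussian beam this
    equals the 1/e^2 radius."""
    X, Y = np.meshgrid(x, y, indexing="xy")
    p = intensity / intensity.sum()          # normalized distribution
    cx, cy = (p * X).sum(), (p * Y).sum()    # centroid (first moments)
    sx2 = (p * (X - cx) ** 2).sum()          # second central moments
    sy2 = (p * (Y - cy) ** 2).sum()
    return 2.0 * np.sqrt(sx2), 2.0 * np.sqrt(sy2)

# Gaussian test beam: the moment width reproduces the 1/e^2 radius w0
x = y = np.linspace(-5.0, 5.0, 401)
X, Y = np.meshgrid(x, y, indexing="xy")
w0 = 1.3
I = np.exp(-2.0 * (X ** 2 + Y ** 2) / w0 ** 2)
wx, wy = moment_widths(I, x, y)
```

    For beams with complicated spatial structure the alternative (non-moment) radii of the standard generally disagree with this invariant definition, which is the point of the paper.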

  19. Petascale turbulence simulation using a highly parallel fast multipole method on GPUs

    NASA Astrophysics Data System (ADS)

    Yokota, Rio; Barba, L. A.; Narumi, Tetsu; Yasuoka, Kenji

    2013-03-01

    This paper reports large-scale direct numerical simulations of homogeneous-isotropic fluid turbulence, achieving sustained performance of 1.08 petaflop/s on GPU hardware using single precision. The simulations use a vortex particle method to solve the Navier-Stokes equations, with a highly parallel fast multipole method (FMM) as the numerical engine, and match the current record in mesh size for this application, a cube of 4096^3 computational points solved with a spectral method. The standard numerical approach used in this field is the pseudo-spectral method, relying on the FFT algorithm as the numerical engine. The particle-based simulations presented in this paper quantitatively match the kinetic energy spectrum obtained with a pseudo-spectral method, using a trusted code. In terms of parallel performance, weak scaling results show the FMM-based vortex method achieving 74% parallel efficiency on 4096 processes (one GPU per MPI process, 3 GPUs per node of the TSUBAME-2.0 system). The FFT-based spectral method is able to achieve just 14% parallel efficiency on the same number of MPI processes (using only CPU cores), due to the all-to-all communication pattern of the FFT algorithm. The calculation time for one time step was 108 s for the vortex method and 154 s for the spectral method, under these conditions. Computing with 69 billion particles, this work exceeds by an order of magnitude the largest vortex-method calculations to date.

  20. IMPROVED PERFORMANCES IN SUBSONIC FLOWS OF AN SPH SCHEME WITH GRADIENTS ESTIMATED USING AN INTEGRAL APPROACH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valdarnini, R., E-mail: valda@sissa.it

    In this paper, we present results from a series of hydrodynamical tests aimed at validating the performance of a smoothed particle hydrodynamics (SPH) formulation in which gradients are derived from an integral approach. We specifically investigate the code behavior with subsonic flows, where it is well known that zeroth-order inconsistencies present in standard SPH make it particularly problematic to correctly model the fluid dynamics. In particular, we consider the Gresho–Chan vortex problem, the growth of Kelvin–Helmholtz instabilities, the statistics of driven subsonic turbulence and the cold Keplerian disk problem. We compare simulation results for the different tests with those obtained, for the same initial conditions, using standard SPH. We also compare the results with the corresponding ones obtained previously with other numerical methods, such as codes based on a moving-mesh scheme or Godunov-type Lagrangian meshless methods. We quantify code performances by introducing error norms and spectral properties of the particle distribution, in a way similar to what was done in other works. We find that the new SPH formulation exhibits strongly reduced gradient errors and outperforms standard SPH in all of the tests considered. In fact, in terms of accuracy, we find good agreement between the simulation results of the new scheme and those produced using other recently proposed numerical schemes. These findings suggest that the proposed method can be successfully applied for many astrophysical problems in which the presence of subsonic flows previously limited the use of SPH, with the new scheme now being competitive in these regimes with other numerical methods.

  1. A 3D finite-difference BiCG iterative solver with the Fourier-Jacobi preconditioner for the anisotropic EIT/EEG forward problem.

    PubMed

    Turovets, Sergei; Volkov, Vasily; Zherdetsky, Aleksej; Prakonina, Alena; Malony, Allen D

    2014-01-01

    The Electrical Impedance Tomography (EIT) and electroencephalography (EEG) forward problems in anisotropic inhomogeneous media like the human head belong to the class of three-dimensional boundary value problems for elliptic equations with mixed derivatives. We introduce and explore the performance of several new promising numerical techniques, which seem to be more suitable for solving these problems. The proposed numerical schemes combine the fictitious domain approach together with the finite-difference method and the optimally preconditioned Conjugate Gradient- (CG-) type iterative method for treatment of the discrete model. The numerical scheme includes the standard operations of summation and multiplication of sparse matrices and vectors, as well as the FFT, making it easy to implement and well suited to efficient parallel implementation. Some typical use cases for the EIT/EEG problems are considered, demonstrating the high efficiency of the proposed numerical technique.
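
    The preconditioned CG-type iteration described above follows the standard preconditioned conjugate gradient template. A minimal numpy sketch with a plain Jacobi (diagonal) preconditioner standing in for the paper's Fourier-Jacobi operator, which would apply FFTs at the same step:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxiter=500):
    """Preconditioned conjugate gradients (sketch). M_inv applies the
    inverse of the preconditioner to a residual vector; the paper's
    Fourier-Jacobi preconditioner would use FFT-based solves here
    instead of the simple diagonal scaling used below."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD test matrix: 1D discrete Laplacian shifted by the identity
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + np.eye(n)
b = np.ones(n)
x = pcg(A, b, lambda r: r / np.diag(A))   # Jacobi preconditioner
```

    The FD discretization of the anisotropic elliptic operator in the paper yields a much larger sparse system, but the iteration skeleton is the same.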

  2. Data Assimilation by delay-coordinate nudging

    NASA Astrophysics Data System (ADS)

    Pazo, Diego; Lopez, Juan Manuel; Carrassi, Alberto

    2016-04-01

    A new nudging method for data assimilation, delay-coordinate nudging, is presented. Delay-coordinate nudging makes explicit use of present and past observations in the formulation of the forcing driving the model evolution at each time-step. Numerical experiments with a low-order chaotic system show that the new method systematically outperforms standard nudging in different model and observational scenarios, even when using an un-optimized formulation of the delay-nudging coefficients. A connection between the optimal delay and the dominant Lyapunov exponent of the dynamics is found based on heuristic arguments and is confirmed by the numerical results, providing a guideline for the practical implementation of the algorithm. Delay-coordinate nudging preserves the ease of implementation, the intuitive functioning and the reduced computational cost of standard nudging, making it a potential alternative especially in the field of seasonal-to-decadal predictions with large Earth system models that limit the use of more sophisticated data assimilation procedures.
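
    The standard nudging baseline mentioned above is simply a relaxation term added to the model tendency. A sketch on the Lorenz-63 system (a typical low-order chaotic testbed; the gain, initial states, and step size below are illustrative choices, not the paper's settings):

```python
import numpy as np

def lorenz(v, s=10.0, r=28.0, b=8.0 / 3.0):
    """Lorenz-63: a standard low-order chaotic test system."""
    x, y, z = v
    return np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

def nudged_run(truth0, model0, g, dt=0.01, nsteps=2000):
    """Standard nudging (sketch): relax the model toward the current
    observation with gain g. Delay-coordinate nudging, as proposed in
    the abstract, would build this forcing from present AND past
    observations instead of the present one alone."""
    truth = np.asarray(truth0, float)
    model = np.asarray(model0, float)
    for _ in range(nsteps):
        obs = truth                 # perfect, full-state observations here
        truth = truth + dt * lorenz(truth)
        model = model + dt * (lorenz(model) + g * (obs - model))
    return truth, model

truth, model = nudged_run([1.0, 1.0, 1.0], [8.0, -9.0, 30.0], g=50.0)
err = float(np.linalg.norm(truth - model))
```

    With a sufficiently large gain the nudged model synchronizes with the observed trajectory despite the wrong initial condition; the paper's contribution is that replacing the present-time forcing by a delay-coordinate forcing improves on this baseline.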

  3. Linear stability analysis of detonations via numerical computation and dynamic mode decomposition

    NASA Astrophysics Data System (ADS)

    Kabanov, Dmitry I.; Kasimov, Aslan R.

    2018-03-01

    We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.
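
    The dynamic-mode-decomposition step can be sketched in a few lines: given snapshot pairs Y ≈ A X, a rank-truncated SVD of X yields a small matrix whose eigenvalues approximate those of the underlying operator. Illustrated here on a known linear map rather than the paper's linearized reactive Euler solver; all names are ours:

```python
import numpy as np

def dmd_eigenvalues(X, Y, rank):
    """Exact DMD (sketch): project Y ≈ A X onto the leading left
    singular vectors of X and return the eigenvalues of the projected
    operator, which approximate the eigenvalues of A."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    Atilde = U.conj().T @ Y @ Vh.conj().T / s   # divide column j by s[j]
    return np.linalg.eigvals(Atilde)

# Known test operator with eigenvalues 0.75 ... 0.95
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
true_eigs = np.array([0.95, 0.9, 0.85, 0.8, 0.75])
A = Q @ np.diag(true_eigs) @ Q.T

snaps = [rng.standard_normal(5)]
for _ in range(19):
    snaps.append(A @ snaps[-1])
X = np.column_stack(snaps[:-1])   # snapshots 0..18
Y = np.column_stack(snaps[1:])    # snapshots 1..19
eigs = np.sort(dmd_eigenvalues(X, Y, rank=5).real)
```

    In the paper's setting the snapshots come from the shock-fitted numerical solution, and the recovered eigenvalues are the linear stability spectrum of the detonation.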

  4. A High Order Discontinuous Galerkin Method for 2D Incompressible Flows

    NASA Technical Reports Server (NTRS)

    Liu, Jia-Guo; Shu, Chi-Wang

    1999-01-01

    In this paper we introduce a high order discontinuous Galerkin method for two-dimensional incompressible flow in the vorticity-streamfunction formulation. The momentum equation is treated explicitly, utilizing the efficiency of the discontinuous Galerkin method. The streamfunction is obtained by a standard Poisson solver using continuous finite elements. There is a natural matching between these two finite element spaces, since the normal component of the velocity field is continuous across element boundaries. This allows for correct upwinding gluing in the discontinuous Galerkin framework, while still maintaining total energy conservation with no numerical dissipation and total enstrophy stability. The method is suitable for inviscid or high Reynolds number flows. Optimal error estimates are proven and verified by numerical experiments.

  5. High performance computation of radiative transfer equation using the finite element method

    NASA Astrophysics Data System (ADS)

    Badri, M. A.; Jolivet, P.; Rousseau, B.; Favennec, Y.

    2018-05-01

    This article deals with an efficient strategy for numerically simulating radiative transfer phenomena using distributed computing. The finite element method alongside the discrete ordinate method is used for spatio-angular discretization of the monochromatic steady-state radiative transfer equation in an anisotropically scattering medium. Two very different methods of parallelization, angular and spatial decomposition methods, are presented. To do so, the finite element method is used in a vectorial way. A detailed comparison of scalability, performance, and efficiency on thousands of processors is established for two- and three-dimensional heterogeneous test cases. Timings show that both algorithms scale well when using proper preconditioners. It is also observed that our angular decomposition scheme outperforms our domain decomposition method. Overall, we perform numerical simulations at scales that were previously unattainable by standard radiative transfer equation solvers.

  6. Numerical noise prediction in fluid machinery

    NASA Astrophysics Data System (ADS)

    Pantle, Iris; Magagnato, Franco; Gabi, Martin

    2005-09-01

    Numerical methods have become increasingly important in the design and optimization of fluid machinery. Where noise emission is concerned, however, one can hardly find standardized prediction methods combining flow and acoustic optimization. Several numerical field methods for sound calculation have been developed. Due to the complexity of the considered flows, approaches must be chosen that avoid exhaustive computing. In this contribution the noise of a simple propeller is investigated. The configurations of the calculations comply with an existing experimental setup chosen for evaluation. The in-house CFD solver SPARC used here contains an acoustic module based on the Ffowcs Williams-Hawkings acoustic analogy. From the flow results of the time-dependent Large Eddy Simulation, the time-dependent acoustic sources are extracted and passed to the acoustic module, where the relevant sound pressure levels are calculated. The difficulties that arise when proceeding from open to closed rotors and from gas to liquid are discussed.

  7. A review of contemporary methods for the presentation of scientific uncertainty.

    PubMed

    Makinson, K A; Hamby, D M; Edwards, J A

    2012-12-01

    Graphic methods for displaying uncertainty are often the most concise and informative way to communicate abstract concepts. Presentation methods currently in use for the display and interpretation of scientific uncertainty are reviewed. Numerous subjective and objective uncertainty display methods are presented, including qualitative assessments, node and arrow diagrams, standard statistical methods, box-and-whisker plots, robustness and opportunity functions, contribution indexes, probability density functions, cumulative distribution functions, and graphical likelihood functions.
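
    As a small illustration of the standard statistical displays listed, the five-number summary underlying a box-and-whisker plot can be computed directly (a generic sketch, not the review's own code):

```python
import numpy as np

def five_number_summary(data):
    """Five-number summary behind a box-and-whisker plot:
    minimum, lower quartile, median, upper quartile, maximum."""
    data = np.asarray(data, dtype=float)
    q1, median, q3 = np.percentile(data, [25, 50, 75])
    return (float(data.min()), float(q1), float(median),
            float(q3), float(data.max()))

summary = five_number_summary(range(1, 12))   # values 1, 2, ..., 11
```

    The whiskers and box of the plot are then drawn from these five order statistics, with outlier rules varying by convention.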

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadid, John Nicolas; Fish, Jacob; Waisman, Haim

    Two heuristic strategies intended to enhance the performance of the generalized global basis (GGB) method [H. Waisman, J. Fish, R.S. Tuminaro, J. Shadid, The Generalized Global Basis (GGB) method, International Journal for Numerical Methods in Engineering 61(8), 1243-1269] applied to nonlinear systems are presented. The standard GGB accelerates a multigrid scheme by an additional coarse grid correction that filters out slowly converging modes. This correction requires a potentially costly eigen calculation. This paper considers reusing previously computed eigenspace information. The GGB? scheme enriches the prolongation operator with new eigenvectors while the modified method (MGGB) selectively reuses the same prolongation. Both methods use the criterion of principal angles between the subspaces spanned by the previous and current prolongation operators. Numerical examples clearly indicate significant time savings, in particular for the MGGB scheme.

  9. Location Modification Factors for Potential Dose Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, Sandra F.; Barnett, J. Matthew

    2017-01-01

    A Department of Energy facility must comply with the National Emission Standard for Hazardous Air Pollutants for radioactive air emissions. The standard is an effective dose of less than 0.1 mSv yr^-1 to the maximum public receptor. Additionally, a lower dose level may be assigned to a specific emission point in a State-issued permit. A method to efficiently estimate the expected dose for future emissions is described. This method is most appropriately applied to a research facility with several emission points with generally low emission levels of numerous isotopes.

  10. Generating partially correlated noise--a comparison of methods.

    PubMed

    Hartmann, William M; Cho, Yun Jin

    2011-07-01

    There are three standard methods for generating two channels of partially correlated noise: the two-generator method, the three-generator method, and the symmetric-generator method. These methods allow an experimenter to specify a target cross correlation between the two channels, but actual generated noises show statistical variability around the target value. Numerical experiments were done to compare the variability for those methods as a function of the number of degrees of freedom. The results of the experiments quantify the stimulus uncertainty in diverse binaural psychoacoustical experiments: incoherence detection, perceived auditory source width, envelopment, noise localization/lateralization, and the masking level difference. The numerical experiments found that when the elemental generators have unequal powers, the different methods all have similar variability. When the powers are constrained to be equal, the symmetric-generator method has much smaller variability than the other two. © 2011 Acoustical Society of America

  11. Biodiesel

    USDA-ARS?s Scientific Manuscript database

    Biodiesel is a renewable alternative to petrodiesel that is prepared from plant oils or animal fats. Biodiesel is prepared via transesterification and the resulting fuel properties must be compliant with international fuel standards such as ASTM D6751 and EN 14214. Numerous catalysts, methods, and l...

  12. Standardized Pearson type 3 density function area tables

    NASA Technical Reports Server (NTRS)

    Cohen, A. C.; Helm, F. R.; Sugg, M.

    1971-01-01

    Tables constituting extension of similar tables published in 1936 are presented in report form. Single and triple parameter gamma functions are discussed. Report tables should interest persons concerned with development and use of numerical analysis and evaluation methods.

  13. RIO: a new computational framework for accurate initial data of binary black holes

    NASA Astrophysics Data System (ADS)

    Barreto, W.; Clemente, P. C. M.; de Oliveira, H. P.; Rodriguez-Mueller, B.

    2018-06-01

    We present a computational framework (Rio) in the ADM 3+1 approach for numerical relativity. This work enables us to carry out high-resolution calculations for initial data of two arbitrary black holes. We use the transverse conformal treatment and the Bowen-York and puncture methods. For the numerical solution of the Hamiltonian constraint we use domain decomposition and the spectral decomposition of Galerkin-Collocation. The nonlinear numerical code solves the set of equations for the spectral modes using the standard Newton-Raphson method, LU decomposition and Gaussian quadratures. We show the convergence of the Rio code. This code allows for easy deployment of large calculations. We show how the spin of one of the black holes is manifest in the conformal factor.
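
    The Newton-Raphson/LU step named above is the classic template for nonlinear spectral-mode equations. A minimal sketch on a toy nonlinear system (numpy's solve performs the LU factorization internally; the actual code applies this machinery to the Hamiltonian-constraint modes):

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, maxiter=50):
    """Newton-Raphson for F(x) = 0 (sketch). Each step solves the
    linearized system J(x) dx = -F(x); np.linalg.solve does this via
    an LU factorization, as mentioned in the abstract."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxiter):
        dx = np.linalg.solve(J(x), -F(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy nonlinear system: x^2 + y^2 = 4 and x*y = 1
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
root = newton(F, J, [2.0, 0.3])
```

    Starting close enough to a root, the iteration converges quadratically; in the Rio setting the unknowns are the Galerkin-Collocation spectral coefficients rather than two scalars.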

  14. RESPONSIVENESS OF THE ACTIVITIES OF DAILY LIVING SCALE OF THE KNEE OUTCOME SURVEY AND NUMERIC PAIN RATING SCALE IN PATIENTS WITH PATELLOFEMORAL PAIN

    PubMed Central

    Piva, Sara R.; Gil, Alexandra B.; Moore, Charity G.; Fitzgerald, G. Kelley

    2016-01-01

    Objective: To assess the internal and external responsiveness of the Activities of Daily Living Scale of the Knee Outcome Survey and the Numeric Pain Rating Scale in patients with patellofemoral pain. Design: One-group pre-post design. Subjects: A total of 60 individuals with patellofemoral pain (33 women; mean age 29.9 (standard deviation 9.6) years). Methods: The Activities of Daily Living Scale and the Numeric Pain Rating Scale were assessed before and after an 8-week physical therapy program. Patients completed a global rating of change scale at the end of therapy. The standardized effect size, Guyatt responsiveness index, and minimum clinical important difference were calculated. Results: The standardized effect size of the Activities of Daily Living Scale was 0.63, the Guyatt responsiveness index was 1.4, the area under the curve was 0.83 (95% confidence interval: 0.72, 0.94), and the minimum clinical important difference corresponded to an increase of 7.1 percentile points. The standardized effect size of the Numeric Pain Rating Scale was 0.72, the Guyatt responsiveness index was 2.2, the area under the curve was 0.80 (95% confidence interval: 0.70, 0.92), and the minimum clinical important difference corresponded to a decrease of 1.16 points. Conclusion: Information from this study may be helpful to therapists when evaluating the effectiveness of rehabilitation interventions on physical function and pain, and for powering future clinical trials in patients with patellofemoral pain. PMID:19229444
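
    In its common textbook form, the standardized effect size reported here is the mean pre-to-post change divided by the baseline standard deviation. A sketch using that common definition, which may differ in detail from the authors' exact computation; the scores below are hypothetical:

```python
import numpy as np

def standardized_effect_size(pre, post):
    """Standardized effect size (common definition, used as a sketch):
    mean change from pre to post, divided by the standard deviation
    of the baseline (pre) scores."""
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)
    return float((post - pre).mean() / pre.std(ddof=1))

# Hypothetical scale scores for four patients, before and after therapy
es = standardized_effect_size([50, 60, 70, 80], [60, 70, 80, 90])
```

    Values around 0.5 to 0.8, as reported in the abstract, are conventionally read as moderate responsiveness.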

  15. Bayesian-based estimation of acoustic surface impedance: Finite difference frequency domain approach.

    PubMed

    Bockman, Alexander; Fackler, Cameron; Xiang, Ning

    2015-04-01

    Acoustic performance for an interior requires an accurate description of the boundary materials' surface acoustic impedance. Analytical methods may be applied to a small class of test geometries, but inverse numerical methods provide greater flexibility. The parameter estimation problem requires minimizing the discrepancy between predicted and observed acoustic field pressure. The Bayesian-network sampling approach presented here mitigates other methods' susceptibility to noise inherent to the experiment, model, and numerics. A geometry-agnostic method is developed here and its parameter estimation performance is demonstrated for an air-backed micro-perforated panel in an impedance tube. Good agreement is found with predictions from the ISO standard two-microphone impedance-tube method and a theoretical model for the material. Data by-products exclusive to a Bayesian approach are analyzed to assess the sensitivity of the method to nuisance parameters.

  16. Some results on numerical methods for hyperbolic conservation laws

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang Huanan.

    1989-01-01

    This dissertation contains some results on the numerical solutions of hyperbolic conservation laws. (1) The author introduced an artificial compression method as a correction to the basic ENO schemes. The method successfully prevents contact discontinuities from being smeared. This is achieved by increasing the slopes of the ENO reconstructions in such a way that the essentially non-oscillatory property of the schemes is kept. He analyzes the non-oscillatory property of the new artificial compression method by applying it to the UNO scheme, which is a second order accurate ENO scheme, and proves that the resulting scheme is indeed non-oscillatory. Extensive 1-D numerical results and some preliminary 2-D ones are provided to show the strong performance of the method. (2) He combines the ENO schemes and the centered difference schemes into self-adjusting hybrid schemes which will be called the localized ENO schemes. At or near the jumps, he uses the ENO schemes with the field by field decompositions; otherwise he simply uses the centered difference schemes without the field by field decompositions. The method involves a new interpolation analysis. In the numerical experiments on several standard test problems, the quality of the numerical results of this method is close to that of the pure ENO results. The localized ENO schemes can be equipped with the above artificial compression method. In this way, he dramatically improves the resolution of the contact discontinuities at very little additional cost. (3) He introduces a space-time mesh refinement method for time dependent problems.
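
    The non-oscillatory property discussed above rests on sign-compatible slope limiting. A minimal minmod sketch (a building block of TVD/ENO-type reconstructions, not the dissertation's exact scheme) showing why reconstructions stay flat at a contact discontinuity; the artificial compression correction then steepens such slopes without changing their signs:

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: returns the smaller-magnitude argument when the
    signs agree, zero otherwise (so limited slopes never overshoot)."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)),
                    0.0)

def limited_slopes(u):
    """Cell slopes from the minmod of one-sided differences (sketch)."""
    du = np.diff(u)
    s = np.zeros_like(u)
    s[1:-1] = minmod(du[:-1], du[1:])
    return s

u = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])   # contact discontinuity
s = limited_slopes(u)                           # all slopes vanish at the jump
```

    At the jump one of the one-sided differences is zero, so the limited slope is zero and the reconstruction cannot oscillate; on smooth monotone data the limiter returns the usual nonzero slope.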

  17. A Gift for Every Teacher in Every Language: The Video Library of Classroom Practices

    ERIC Educational Resources Information Center

    Duncan, Greg

    2008-01-01

    Many new teachers come into the teaching profession without the benefit of methods courses and numerous veteran teachers find that their long-ago methods courses do not have high transference to today's language agenda or today's students. Understanding that teachers want to teach to standards and appeal to students' interests in learning…

  18. On the Latent Regression Model of Item Response Theory. Research Report. ETS RR-07-12

    ERIC Educational Resources Information Center

    Antal, Tamás

    2007-01-01

    Full account of the latent regression model for the National Assessment of Educational Progress is given. The treatment includes derivation of the EM algorithm, Newton-Raphson method, and the asymptotic standard errors. The paper also features the use of the adaptive Gauss-Hermite numerical integration method as a basic tool to evaluate…

  19. The p-Version of the Finite Element Method for Domains with Corners and for Infinite Domains

    DTIC Science & Technology

    1988-11-01

    Finite Element Method, Prentice-Hall, 1973. [24] Szabo, B. A.: PROBE: The Theoretical Manual (Release 1.0), Noetic Tech. Corp., St. Louis, MO, 1985... National Bureau of Standards. "To be an international center of study and research for foreign students in numerical mathematics who are supported by

  20. Analysis and computation of a least-squares method for consistent mesh tying

    DOE PAGES

    Day, David; Bochev, Pavel

    2007-07-10

    In the finite element method, a standard approach to mesh tying is to apply Lagrange multipliers. If the interface is curved, however, discretization generally leads to adjoining surfaces that do not coincide spatially. Straightforward Lagrange multiplier methods lead to discrete formulations failing a first-order patch test [T.A. Laursen, M.W. Heinstein, Consistent mesh-tying methods for topologically distinct discretized surfaces in non-linear solid mechanics, Internat. J. Numer. Methods Eng. 57 (2003) 1197–1242]. This paper presents a theoretical and computational study of a least-squares method for mesh tying [P. Bochev, D.M. Day, A least-squares method for consistent mesh tying, Internat. J. Numer. Anal. Modeling 4 (2007) 342–352], applied to the partial differential equation -∇²φ + αφ = f. We prove optimal convergence rates for domains represented as overlapping subdomains and show that the least-squares method passes a patch test of the order of the finite element space by construction. To apply the method to subdomain configurations with gaps and overlaps we use interface perturbations to eliminate the gaps. Finally, theoretical error estimates are illustrated by numerical experiments.
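
    The model equation -φ'' + αφ = f can be sketched in one dimension with second-order finite differences and a manufactured solution. This is a single-domain illustration only; the paper's contribution, the least-squares tying of mismatched subdomain meshes, is not reproduced here:

```python
import numpy as np

def solve_helmholtz_1d(alpha, f, n):
    """Second-order finite differences for -phi'' + alpha*phi = f on
    (0,1) with homogeneous Dirichlet data (sketch of the model PDE of
    the abstract, reduced to 1D on a single mesh)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)          # interior grid points
    A = (np.diag(np.full(n, 2.0 / h ** 2 + alpha))
         - np.diag(np.full(n - 1, 1.0 / h ** 2), 1)
         - np.diag(np.full(n - 1, 1.0 / h ** 2), -1))
    return x, np.linalg.solve(A, f(x))

# Manufactured solution phi = sin(pi x), so f = (pi^2 + alpha) sin(pi x)
alpha = 1.0
exact = lambda x: np.sin(np.pi * x)
rhs = lambda x: (np.pi ** 2 + alpha) * np.sin(np.pi * x)
x, phi = solve_helmholtz_1d(alpha, rhs, 200)
err = float(np.max(np.abs(phi - exact(x))))
```

    The discrete error decreases like h^2, which is the kind of rate the least-squares tying is shown to preserve across non-coincident interfaces.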

  1. Proposal of a micromagnetic standard problem for ferromagnetic resonance simulations

    NASA Astrophysics Data System (ADS)

    Baker, Alexander; Beg, Marijan; Ashton, Gregory; Albert, Maximilian; Chernyshenko, Dmitri; Wang, Weiwei; Zhang, Shilei; Bisotti, Marc-Antonio; Franchin, Matteo; Hu, Chun Lian; Stamps, Robert; Hesjedal, Thorsten; Fangohr, Hans

    2017-01-01

    Nowadays, micromagnetic simulations are a common tool for studying a wide range of magnetic phenomena, including ferromagnetic resonance. A technique for evaluating the reliability and validity of different micromagnetic simulation tools is the simulation of proposed standard problems. We propose a new standard problem by providing a detailed specification and analysis of a sufficiently simple problem. By analyzing the magnetization dynamics in a thin permalloy square sample, triggered by a well-defined excitation, we obtain the ferromagnetic resonance spectrum and identify the resonance modes via Fourier transform. Simulations are performed using both finite difference and finite element numerical methods, with the OOMMF and Nmag simulators, respectively. We report the effects of initial conditions and simulation parameters on the character of the observed resonance modes for this standard problem. We provide detailed instructions and code to assist in using the results for the evaluation of new simulator tools, and to help with the numerical calculation of ferromagnetic resonance spectra and modes in general.
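
    The Fourier-transform step used to identify resonance modes can be sketched on a synthetic ring-down trace. The frequency, damping time, and sampling below are illustrative numbers, not the standard problem's specification:

```python
import numpy as np

def resonance_frequencies(signal, dt, n_peaks=1):
    """Identify resonance frequencies from a ring-down signal via its
    power spectrum (sketch of the Fourier-transform step described in
    the abstract)."""
    spec = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(len(signal), dt)
    order = np.argsort(spec)[::-1]          # strongest bins first
    return freqs[order[:n_peaks]]

# Synthetic magnetization trace: a damped 8 GHz oscillation
dt = 1e-12                                  # 1 ps sampling interval
t = np.arange(4096) * dt
m = np.exp(-t / 2e-9) * np.cos(2 * np.pi * 8e9 * t)
f0 = float(resonance_frequencies(m, dt)[0])
```

    In the standard problem the signal is the spatially averaged magnetization from OOMMF or Nmag, and the spatial profile of each spectral peak gives the mode pattern.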

  2. A highly parallel multigrid-like method for the solution of the Euler equations

    NASA Technical Reports Server (NTRS)

    Tuminaro, Ray S.

    1989-01-01

    We consider a highly parallel multigrid-like method for the solution of the two-dimensional steady Euler equations. The new method, introduced as filtering multigrid, is similar to a standard multigrid scheme in that convergence on the finest grid is accelerated by iterations on coarser grids. In the filtering method, however, additional fine grid subproblems are processed concurrently with coarse grid computations to further accelerate convergence. These additional problems are obtained by splitting the residual into a smooth and an oscillatory component. The smooth component is then used to form a coarse grid problem (similar to standard multigrid), while the oscillatory component is used for a fine grid subproblem. The primary advantage of the filtering approach is that fewer iterations are required and that most of the additional work per iteration can be performed in parallel with the standard coarse grid computations. We generalize the filtering algorithm to a version suitable for nonlinear problems. We emphasize that this generalization is conceptually straightforward and relatively easy to implement. In particular, no explicit linearization (e.g., formation of Jacobians) needs to be performed (similar to the FAS multigrid approach). We illustrate the nonlinear version by applying it to the Euler equations and presenting numerical results. Finally, a performance evaluation is made based on execution time models and convergence information obtained from numerical experiments.
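
    The smooth/oscillatory residual split at the heart of the filtering approach can be illustrated with a simple spectral low-pass filter. The actual scheme uses filtering consistent with its grid-transfer operators; the FFT below is purely for illustration, and the cutoff and test signal are our choices:

```python
import numpy as np

def split_residual(r):
    """Sketch of the residual split behind filtering multigrid: a
    'smooth' low-frequency part (destined for the coarse grid) plus
    the leftover 'oscillatory' part (kept for a fine-grid subproblem).
    Here the split is done by zeroing the upper half of the spectrum."""
    R = np.fft.rfft(r)
    cut = len(R) // 2
    R_lo = R.copy()
    R_lo[cut:] = 0.0
    smooth = np.fft.irfft(R_lo, n=len(r))
    return smooth, r - smooth               # split is exact by construction

# Composite residual: one smooth wave plus one oscillatory wave
n = 128
x = np.arange(n) / n
r = np.sin(2 * np.pi * 2 * x) + 0.3 * np.sin(2 * np.pi * 40 * x)
smooth, osc = split_residual(r)
```

    The two parts sum back to the original residual exactly, so the coarse-grid problem and the fine-grid subproblem together account for the whole correction, which is what lets them be processed concurrently.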

  3. 43 CFR 3809.202 - Under what conditions will BLM defer to State regulation of operations?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... standards on a provision-by-provision basis to determine— (i) Whether non-numerical State standards are functionally equivalent to BLM counterparts; and (ii) Whether numerical State standards are the same as corresponding numerical BLM standards, except that State review and approval time frames do not have to be the...

  4. 43 CFR 3809.202 - Under what conditions will BLM defer to State regulation of operations?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... standards on a provision-by-provision basis to determine— (i) Whether non-numerical State standards are functionally equivalent to BLM counterparts; and (ii) Whether numerical State standards are the same as corresponding numerical BLM standards, except that State review and approval time frames do not have to be the...

  5. 43 CFR 3809.202 - Under what conditions will BLM defer to State regulation of operations?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... standards on a provision-by-provision basis to determine— (i) Whether non-numerical State standards are functionally equivalent to BLM counterparts; and (ii) Whether numerical State standards are the same as corresponding numerical BLM standards, except that State review and approval time frames do not have to be the...

  6. 43 CFR 3809.202 - Under what conditions will BLM defer to State regulation of operations?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... standards on a provision-by-provision basis to determine— (i) Whether non-numerical State standards are functionally equivalent to BLM counterparts; and (ii) Whether numerical State standards are the same as corresponding numerical BLM standards, except that State review and approval time frames do not have to be the...

  7. Microfluidics and numerical simulation as methods for standardization of zebrafish sperm cell activation

    PubMed Central

    Scherr, Thomas; Knapp, Gerald L.; Guitreau, Amy; Park, Daniel Sang-Won; Tiersch, Terrence; Nandakumar, Krishnaswamy

    2017-01-01

    Sperm cell activation plays a critical role in a range of biological and engineering processes, from fertilization to cryopreservation protocol evaluation. Across a range of species, ionic and osmotic effects have been discovered that lead to activation. Sperm cells of zebrafish (Danio rerio) initiate motility in a hypoosmotic environment. In this study, we employ a microfluidic mixer to rapidly dilute the extracellular medium and thereby initiate the onset of cell motility. The use of a microchannel offers a rapid and reproducible mixing profile throughout the device, greatly reducing trial-to-trial variability relative to current methods of analysis. Coupling these experiments with numerical simulations, we were able to investigate the dynamics of intracellular osmolality as each cell moves along its path through the micromixer. Our results suggest that intracellular osmolality, and hence intracellular ion concentration, decreases only slightly, contrary to the common belief that larger changes in these parameters are required for activation. This framework, combining microfluidics for controlled extracellular environments with the associated numerical modeling, has practical applicability in standardizing high-throughput aquatic sperm activation and, more fundamentally, in investigations of the intracellular environment leading to motility. PMID:26026298

  8. Forcing scheme analysis for the axisymmetric lattice Boltzmann method under incompressible limit.

    PubMed

    Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Chen, Jie; Yin, Linmao; Chew, Jia Wei

    2017-04-01

    Because the standard lattice Boltzmann (LB) method is formulated for the Cartesian Navier-Stokes (NS) equations, additional source terms are necessary in the axisymmetric LB method to represent the axisymmetric effects. The accuracy and applicability of axisymmetric LB models therefore depend on the forcing schemes adopted for discretization of the source terms. In this study, three forcing schemes, namely, the trapezium-rule-based scheme, the direct forcing scheme, and the semi-implicit centered scheme, are analyzed theoretically by investigating their derived macroscopic equations under diffusive scaling. In particular, the finite-difference interpretation of the standard LB method is extended to LB equations with source terms, and the accuracy of the different forcing schemes is then evaluated for the axisymmetric LB method. Theoretical analysis indicates that the discrete lattice effects arising from the direct forcing scheme are part of the truncation error terms and thus do not affect the overall accuracy of the standard LB method with a general force term (i.e., when only the source terms in the momentum equation are considered), but lead to incorrect macroscopic equations for the axisymmetric LB models. On the other hand, the trapezium-rule-based scheme and the semi-implicit centered scheme both have the advantage of avoiding the discrete lattice effects and recovering the correct macroscopic equations. Numerical tests carried out to validate the theoretical analysis show that both the numerical stability and the accuracy of axisymmetric LB simulations are affected by the direct forcing scheme, which indicates that forcing schemes free of the discrete lattice effects are necessary for the axisymmetric LB method.

  9. Pulse cleaning flow models and numerical computation of candle ceramic filters.

    PubMed

    Tian, Gui-shan; Ma, Zhen-ji; Zhang, Xin-yi; Xu, Ting-xiang

    2002-04-01

    Analytical and numerical models are developed for the reverse pulse cleaning system of candle ceramic filters. On the basis of experimental results and one-dimensional computations, a standard turbulence model is shown to be suitable for the design computation of the reverse pulse cleaning system. The computed results can be used to guide the design of the reverse pulse cleaning system, in particular the choice of an optimum Venturi geometry. General conclusions and design methods are drawn from the computed results.

  10. Improvement of Frequency Locking Algorithm for Atomic Frequency Standards

    NASA Astrophysics Data System (ADS)

    Park, Young-Ho; Kang, Hoonsoo; Heyong Lee, Soo; Eon Park, Sang; Lee, Jong Koo; Lee, Ho Seong; Kwon, Taeg Yong

    2010-09-01

    The authors describe a novel frequency-locking algorithm for atomic frequency standards. The new algorithm for locking the microwave frequency to the Ramsey resonance is compared with the old one that had been employed in cesium atomic beam frequency standards such as NIST-7 and KRISS-1. Numerical simulations testing the performance of the algorithm show that the new method has a noise-filtering performance superior to the old one by a factor of 1.2 for flicker signal noise and 1.4 for random-walk signal noise. The new algorithm can readily be used to enhance the frequency stability of a digital servo employing slow square-wave frequency modulation.

  11. Thermal Modeling of the Injection of Standard and Thermally Insulated Cored Wire

    NASA Astrophysics Data System (ADS)

    Castro-Cedeno, E.-I.; Jardy, A.; Carré, A.; Gerardin, S.; Bellot, J. P.

    2017-12-01

    Cored wire injection is a widespread method used to perform alloying additions during ferrous and non-ferrous liquid metal treatment. The wire consists of a metal casing that is tightly wrapped around a core of material; the casing delays the release of the material as the wire is immersed into the melt. This method of addition offers higher repeatability and yield of the cored material than bulk additions. Experimental and numerical work on alloy additions has been performed by several authors, mainly for spherical and cylindrical geometries. Surprisingly, this has not been the case for cored wire, for which reported experimental or numerical studies are scarce. This work presents a 1-D finite-volume numerical model for simulating the thermal phenomena that occur when the wire is injected into a liquid metal bath. It is currently being used as a design tool for the conception of new types of cored wire. A parametric study on the effect of injection velocity and steel casing thickness for an Al cored wire immersed into a steel melt at 1863 K (1590 °C) is presented. The standard single-casing wire is further compared against a wire with multiple casings. Numerical results show that over a certain range of injection velocities, the release of the core contents is delayed in the multiple-casing wire compared with the single-casing wire.

  12. Acceleration of Linear Finite-Difference Poisson-Boltzmann Methods on Graphics Processing Units.

    PubMed

    Qi, Ruxi; Botello-Smith, Wesley M; Luo, Ray

    2017-07-11

    Electrostatic interactions play crucial roles in biophysical processes such as protein folding and molecular recognition. Poisson-Boltzmann equation (PBE)-based models have become widely used for modeling these important processes. Though great efforts have been put into developing efficient PBE numerical models, challenges still remain due to the high dimensionality of typical biomolecular systems. In this study, we implemented and analyzed commonly used linear PBE solvers on ever-improving graphics processing units (GPUs) for biomolecular simulations, including both standard and preconditioned conjugate gradient (CG) solvers with several alternative preconditioners. Our implementation utilizes the standard Nvidia CUDA libraries cuSPARSE, cuBLAS, and CUSP. Extensive tests show that good numerical accuracy can be achieved even though single precision is often used for numerical applications on GPU platforms. The optimal GPU performance was observed with the Jacobi-preconditioned CG solver, with a significant speedup over the standard CG solver on CPU in our diversified test cases. Our analysis further shows that different matrix storage formats also considerably affect the efficiency of different linear PBE solvers on GPU, with the diagonal format best suited to our standard finite-difference linear systems. Further efficiency may be possible with matrix-free operations and an integrated grid-stencil setup specifically tailored to the banded matrices in PBE-specific linear systems.
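
    As a plain-CPU sketch of the Jacobi-preconditioned CG iteration that performed best in the study (the GPU implementation with cuSPARSE/cuBLAS is not reproduced here), applied to a small 1-D finite-difference system standing in for the PBE linear system:

    ```python
    import numpy as np

    def jacobi_pcg(A, b, tol=1e-10, max_iter=500):
        """Conjugate gradient with a Jacobi (diagonal) preconditioner.
        Dense numpy stand-in for the sparse GPU solvers discussed above."""
        M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner: diag(A)^-1
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv * r                     # preconditioned residual
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            z = M_inv * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # 1-D finite-difference Poisson matrix as a small test problem.
    n = 50
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    x = jacobi_pcg(A, b)
    ```

    For a real PBE system the matrix would be stored in a sparse (e.g., diagonal) format and the matrix-vector product would dominate the cost, which is what the storage-format comparison in the abstract addresses.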

  13. Numerical analysis of a main crack interactions with micro-defects/inhomogeneities using two-scale generalized/extended finite element method

    NASA Astrophysics Data System (ADS)

    Malekan, Mohammad; Barros, Felício B.

    2017-12-01

    The generalized or extended finite element method (G/XFEM) models a crack by enriching partition-of-unity functions with discontinuous functions that represent well the physical behavior of the problem. However, such enrichment functions are not available for all problem types; in those cases, numerically built (global-local) enrichment functions can provide a better approximation. This paper investigates the effects of micro-defects/inhomogeneities on the behavior of a main crack by modeling the micro-defects/inhomogeneities in the local problem using a two-scale G/XFEM. The global-local enrichment functions are influenced by the micro-defects/inhomogeneities in the local problem and thus change the approximate solution of the global problem containing the main crack. The approach is presented in detail by solving three linear elastic fracture mechanics problems: two plane-stress problems and a Reissner-Mindlin plate problem. The numerical results obtained with the two-scale G/XFEM are compared with reference solutions: analytical results, numerical solutions obtained with the standard G/XFEM and with ABAQUS, and results from the literature.

  14. An ultra-accurate numerical method in the design of liquid phononic crystals with hard inclusion

    NASA Astrophysics Data System (ADS)

    Li, Eric; He, Z. C.; Wang, G.; Liu, G. R.

    2017-12-01

    Phononic crystals (PCs) are periodic, man-made composite materials. In this paper, a mass-redistributed finite element method (MR-FEM) is formulated to study wave propagation within liquid PCs with hard inclusions. With a perfect balance between stiffness and mass in the MR-FEM model, the dispersion error of the longitudinal wave is minimized by redistribution of mass. Such tuning can be easily achieved by adjusting the parameter r that controls the location of the integration points of the mass matrix. More importantly, the property of mass conservation in the MR-FEM model indicates that whether the integration points lie inside or outside the element is immaterial. Four numerical examples are studied in this work, including liquid PCs with cross- and circle-shaped hard inclusions, different inclusion sizes, and defects. Compared with the standard finite element method, the numerical results verify the accuracy and effectiveness of the MR-FEM. The proposed MR-FEM is a unique and innovative numerical approach with strong potential for studying stress waves within multi-physics PCs.
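
    The idea that redistributing mass can cancel dispersion error has a classical 1-D analogue: blending the consistent and lumped mass matrices of linear elements. The sketch below (illustrative only, not the authors' MR-FEM formulation) compares the phase-speed error of the two standard mass matrices with an evenly blended one, using the well-known plane-wave symbols of the 1-D linear-element stencils.

    ```python
    import numpy as np

    def phase_speed_error(kh, theta):
        """Relative phase-speed error of 1-D linear FEM for the wave
        equation (unit wave speed) with the blended mass matrix
        M = theta * M_consistent + (1 - theta) * M_lumped.
        Dimensionless plane-wave symbols of the standard stencils:
          stiffness: 2 * (1 - cos kh), consistent mass: (2 + cos kh) / 3,
          lumped mass: 1."""
        K = 2.0 * (1.0 - np.cos(kh))
        M = theta * (2.0 + np.cos(kh)) / 3.0 + (1.0 - theta)
        omega_h = np.sqrt(K / M)          # numerical omega times h
        return omega_h / kh - 1.0         # exact phase speed is 1

    kh = 0.5
    err_lumped = phase_speed_error(kh, 0.0)      # lumped mass: lagging phase
    err_consistent = phase_speed_error(kh, 1.0)  # consistent mass: leading phase
    err_blended = phase_speed_error(kh, 0.5)     # balanced blend
    ```

    The lumped and consistent mass matrices err in opposite directions, and the even blend largely cancels the leading dispersion term, which is the same mechanism the tunable parameter r exploits in the MR-FEM.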

  15. Effect of fire-induced damage on the uniaxial strength characteristics of solid timber: A numerical study

    NASA Astrophysics Data System (ADS)

    Hopkin, D. J.; El-Rimawi, J.; Lennon, T.; Silberschmidt, V. V.

    2011-07-01

    The advent of the structural Eurocodes has allowed civil engineers to be more creative in the design of structures exposed to fire. Rather than rely upon regulatory guidance and prescriptive methods, engineers are now able to use such codes to design buildings on the basis of credible design fires rather than the accepted, but unrealistic, standard-fire time-temperature curves. Through this process, safer and more efficient structural designs are achievable. The key development enabling performance-based fire design is the emergence of validated numerical models capable of predicting the mechanical response of a whole building or of sub-assemblies at elevated temperature. In this way, efficiency savings have been achieved in the design of steel, concrete, and composite structures. At present, however, due to a combination of limited fundamental research and restrictions in the UK National Annex to the timber Eurocode, the design of fire-exposed timber structures using numerical modelling techniques is not generally undertaken. The 'fire design' of timber structures is covered in Eurocode 5 part 1.2 (EN 1995-1-2). This code contains an advanced calculation annex (Annex B) intended to facilitate the implementation of numerical models in the design of fire-exposed timber structures. The properties contained in the code can, at present, only be applied to standard-fire exposure conditions, because the available thermal properties are valid only for standard-fire exposure. In an attempt to overcome this barrier, the authors have proposed a 'modified conductivity model' (MCM) for determining the temperature of timber structural elements during the heating phase of non-standard fires, which is briefly outlined in this paper. In a further study, the MCM has been implemented in a coupled thermo-mechanical analysis of uniaxially loaded timber elements exposed to non-standard fires. The finite element package DIANA was adopted, with plane-strain elements assuming two-dimensional heat flow. The resulting predictions of failure time for given levels of load are discussed and compared with the simplified 'effective cross section' method presented in EN 1995-1-2.

  16. Accurate computation of gravitational field of a tesseroid

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2018-02-01

    We developed an accurate method to compute the gravitational field of a tesseroid. The method numerically integrates a surface integral representation of the gravitational potential of the tesseroid by conditionally splitting its line integration intervals and by using the double exponential quadrature rule. It then evaluates the gravitational acceleration vector and the gravity gradient tensor by numerically differentiating the numerically integrated potential. The numerical differentiation is conducted by appropriately switching between the central and the single-sided second-order difference formulas with a suitable choice of the test argument displacement. If necessary, the new method extends to the case of a general tesseroid with a variable density profile, variable surface height functions, and/or variable intervals in longitude or in latitude. The new method is capable of computing the gravitational field of the tesseroid independently of the location of the evaluation point, namely whether outside, near the surface of, on the surface of, or inside the tesseroid. The achievable precision is 14-15 digits for the potential, 9-11 digits for the acceleration vector, and 6-8 digits for the gradient tensor in the double-precision environment. The correct digits are roughly doubled when employing quadruple-precision computation. The new method provides a reliable procedure to compute the topographic gravitational field, especially near, on, and below the surface. It could also serve as a reliable reference to complement and elaborate the existing approaches using the Gauss-Legendre quadrature or other standard methods of numerical integration.
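
    The differentiation step can be illustrated with a point-mass potential standing in for the numerically integrated tesseroid potential; the displacement rule below is a common rule of thumb for second-order central differences, not necessarily the paper's exact choice.

    ```python
    import numpy as np

    G_M = 1.0  # gravitational parameter (arbitrary units, assumed here)

    def potential(x, y, z):
        """Point-mass potential GM/r, standing in for the numerically
        integrated tesseroid potential."""
        return G_M / np.sqrt(x * x + y * y + z * z)

    def dpot_dz(x, y, z, d=None):
        """z-derivative of the potential by a second-order central
        difference. The displacement d trades truncation error against
        round-off; the cube-root-of-machine-epsilon scaling below is a
        standard rule of thumb for central differences."""
        if d is None:
            d = np.finfo(float).eps ** (1.0 / 3.0) * max(abs(z), 1.0)
        return (potential(x, y, z + d) - potential(x, y, z - d)) / (2.0 * d)

    # Compare against the analytic derivative -G_M * z / r^3 at a test point.
    x0, y0, z0 = 0.3, 0.4, 1.2
    r3 = (x0**2 + y0**2 + z0**2) ** 1.5
    exact = -G_M * z0 / r3
    approx = dpot_dz(x0, y0, z0)
    ```

    With the balanced step size, roughly 10-11 correct digits survive in double precision, consistent with the 9-11 digit acceleration accuracy the abstract reports for the differentiated potential.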

  17. Thermal radiation view factor: Methods, accuracy and computer-aided procedures

    NASA Technical Reports Server (NTRS)

    Kadaba, P. V.

    1982-01-01

    Computer-aided thermal analysis programs that predict, prior to stationing orbiting equipment in various attitudes with respect to the Sun and the Earth, whether the equipment will remain within a predetermined acceptable temperature range are examined. The complexity of the surface geometries suggests the use of numerical schemes for the determination of these view factors. Basic definitions and standard methods, which form the basis for various digital computer methods, and various numerical methods are presented. The physical models and the mathematical methods on which a number of available programs are built are summarized. The strengths and weaknesses of the methods employed, the accuracy of the calculations, and the time required for computations are evaluated. The situations where accuracy is important for energy calculations are identified, and methods to save computational time are proposed. A guide to the best use of the available programs at several centers and future choices for efficient use of digital computers are included in the recommendations.

  18. Advancing MODFLOW Applying the Derived Vector Space Method

    NASA Astrophysics Data System (ADS)

    Herrera, G. S.; Herrera, I.; Lemus-García, M.; Hernandez-Garcia, G. D.

    2015-12-01

    The most effective domain decomposition methods (DDMs) are non-overlapping DDMs. Recently a new approach, the DVS framework, based on an innovative discretization method that uses a non-overlapping system of nodes (the derived nodes), was introduced and developed by I. Herrera et al. [1, 2]. Using the DVS approach, a group of four algorithms, referred to as the 'DVS-algorithms', which fulfill the DDM paradigm (i.e., the solution of global problems is obtained exclusively by the resolution of local problems), has been derived. Such procedures are applicable to any boundary-value problem, or system of such equations, for which a standard discretization method is available, and software with a high degree of parallelization can then be constructed. In a parallel talk at this AGU Fall Meeting, Ismael Herrera will introduce the general DVS methodology. The application of the DVS-algorithms has been demonstrated in the solution of several boundary-value problems of interest in Geophysics. Numerical examples for a single equation, covering symmetric, non-symmetric, and indefinite problems, were demonstrated before [1, 2]. For these problems the DVS-algorithms exhibited significantly improved numerical performance with respect to standard versions of DDM algorithms. In view of these results, our research group is applying the DVS method to a widely used simulator for the first time; here we present the progress of the application of this method to the parallelization of MODFLOW. Efficiency results for a group of tests will be presented. References [1] I. Herrera, L.M. de la Cruz and A. Rosas-Medina. Non overlapping discretization methods for partial differential equations, Numer Meth Part D E, (2013). [2] Herrera, I., & Contreras, Iván. "An Innovative Tool for Effectively Applying Highly Parallelized Software to Problems of Elasticity". Geofísica Internacional, 2015 (in press).

  19. Characterization of pore structure in cement-based materials using pressurization-depressurization cycling mercury intrusion porosimetry (PDC-MIP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Jian, E-mail: Jian.Zhou@tudelft.n; Ye Guang, E-mail: g.ye@tudelft.n; Magnel Laboratory for Concrete Research, Department of Structural Engineering, Ghent University, Technologiepark-Zwijnaarde 904 B-9052, Ghent

    2010-07-15

    Numerous mercury intrusion porosimetry (MIP) studies have been carried out to investigate the pore structure in cement-based materials. However, standard MIP often results in an underestimation of large pores and an overestimation of small pores because of its intrinsic limitations. In this paper, an innovative MIP method is developed in order to provide a more accurate estimation of the pore size distribution. The new MIP measurements are conducted following a unique mercury intrusion procedure, in which the applied pressure is increased from the minimum to the maximum by repeating pressurization-depressurization cycles instead of a continuous pressurization followed by a continuous depressurization. Accordingly, this method is called pressurization-depressurization cycling MIP (PDC-MIP). By following the PDC-MIP testing sequence, the volumes of the throat pores and the corresponding ink-bottle pores can be determined at every pore size. These values are used to calculate the pore size distribution with the newly developed analysis method. This paper presents an application of PDC-MIP to the investigation of the pore size distribution in cement-based materials. The experimental results of PDC-MIP are compared with those measured by standard MIP. PDC-MIP is further validated against other experimental methods and a numerical tool, including nitrogen sorption, backscattered electron (BSE) image analysis, Wood's metal intrusion porosimetry (WMIP), and numerical simulation with the cement hydration model HYMOSTRUC3D.

  20. Integral method for the calculation of Hawking radiation in dispersive media. I. Symmetric asymptotics.

    PubMed

    Robertson, Scott; Leonhardt, Ulf

    2014-11-01

    Hawking radiation has become experimentally testable thanks to the many analog systems which mimic the effects of the event horizon on wave propagation. These systems are typically dominated by dispersion and give rise to a numerically soluble and stable ordinary differential equation only if the rest-frame dispersion relation Ω^{2}(k) is a polynomial of relatively low degree. Here we present a new method for the calculation of wave scattering in a one-dimensional medium of arbitrary dispersion. It views the wave equation as an integral equation in Fourier space, which can be solved using standard and efficient numerical techniques.

  1. Computationally efficient method for Fourier transform of highly chirped pulses for laser and parametric amplifier modeling.

    PubMed

    Andrianov, Alexey; Szabo, Aron; Sergeev, Alexander; Kim, Arkady; Chvykov, Vladimir; Kalashnikov, Mikhail

    2016-11-14

    We developed an improved approach to calculate the Fourier transform of signals with arbitrarily large quadratic phase, which can be efficiently implemented in numerical simulations utilizing the fast Fourier transform. The proposed algorithm significantly reduces the computational cost of the Fourier transform of a highly chirped and stretched pulse by splitting it into two separate transforms of almost transform-limited pulses, thereby reducing the required grid size roughly by the pulse stretching factor. The application of our improved Fourier transform algorithm in the split-step method for numerical modeling of CPA and OPCPA shows excellent agreement with standard algorithms.
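
    The underlying observation, that removing the known quadratic phase leaves a nearly transform-limited pulse occupying far fewer spectral bins, can be demonstrated directly. This is a toy illustration of the grid-size argument, not the authors' split-transform algorithm; the pulse parameters are arbitrary.

    ```python
    import numpy as np

    # A strongly chirped Gaussian pulse: envelope exp(-t^2/2) times a known
    # quadratic phase exp(i*b*t^2). Stripping the quadratic phase before the
    # FFT leaves a nearly transform-limited pulse whose spectrum needs far
    # fewer points -- the effect the splitting above exploits.
    n = 4096
    t = np.linspace(-40.0, 40.0, n)
    b = 20.0                                   # chirp coefficient (large)
    pulse = np.exp(-0.5 * t**2) * np.exp(1j * b * t**2)

    def significant_bins(field, frac=1e-3):
        """Number of spectral bins above frac * peak: a crude bandwidth proxy."""
        s = np.abs(np.fft.fft(field))
        return int(np.sum(s > frac * s.max()))

    bw_chirped = significant_bins(pulse)
    bw_dechirped = significant_bins(pulse * np.exp(-1j * b * t**2))
    ```

    Dechirping shrinks the occupied bandwidth by roughly the stretching factor, so the remaining transform can run on a correspondingly smaller grid.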

  2. Toward unbiased estimations of the statefinder parameters

    NASA Astrophysics Data System (ADS)

    Aviles, Alejandro; Klapp, Jaime; Luongo, Orlando

    2017-09-01

    With the use of simulated supernova catalogs, we show that the statefinder parameters are poorly estimated, and with significant bias, by standard cosmography. To this end, we compute their standard deviations and several bias statistics on cosmologies near the concordance model, demonstrating that these are very large and thus that standard cosmography is unsuitable for future, wider compilations of data. To overcome this issue, we propose a new method that consists of introducing the Taylor series of the Hubble function into the luminosity distance, instead of considering the usual direct Taylor expansion of the luminosity distance itself. Moreover, in order to speed up the numerical computations, we estimate the coefficients of our expansions in a hierarchical manner, in which the order of the expansion depends on the redshift of each individual piece of data. In addition, we propose two hybrid methods that incorporate standard cosmography at low redshifts. The methods presented here perform better than the standard approach to cosmography in both the errors and the bias of the estimated statefinders. We further propose a one-parameter diagnostic to reject non-viable methods in cosmography.
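
    The proposed change, expanding H(z) and integrating it inside the luminosity distance rather than Taylor-expanding d_L itself, can be sketched against a fiducial flat ΛCDM model. The fiducial parameters and the finite-difference estimation of the series coefficients are our assumptions for illustration only.

    ```python
    import numpy as np

    # Fiducial flat LambdaCDM, used only to generate a "truth" (c = H0 = 1).
    Om = 0.3
    H = lambda z: np.sqrt(Om * (1.0 + z)**3 + 1.0 - Om)

    def d_L(Hfunc, z, n=2000):
        """Luminosity distance d_L = (1+z) * integral_0^z dz'/H(z')
        by the trapezoidal rule (flat universe, c = H0 = 1 units)."""
        zs = np.linspace(0.0, z, n)
        f = 1.0 / Hfunc(zs)
        return (1.0 + z) * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zs))

    # Second-order Taylor series of H(z) about z = 0; the coefficients
    # (estimated here by finite differences) encode q0 and j0 in standard
    # cosmographic notation.
    eps = 1e-4
    H0 = H(0.0)
    H1 = (H(eps) - H(-eps)) / (2.0 * eps)
    H2 = (H(eps) - 2.0 * H(0.0) + H(-eps)) / eps**2
    H_series = lambda z: H0 + H1 * z + 0.5 * H2 * z**2

    z_test = 0.1
    rel_err = abs(d_L(H_series, z_test) - d_L(H, z_test)) / d_L(H, z_test)
    ```

    Integrating the series for H(z) keeps the low-redshift distance accurate to a fraction of a percent at second order; the paper's point is that this expansion behaves better statistically than the direct series for d_L.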

  3. Deming on Education: A View from the Seminar.

    ERIC Educational Resources Information Center

    Holt, Maurice

    1993-01-01

    W. Edwards Deming rejects school-improvement proposals based on formulating higher standards and enforcing them with performance assessments. To Deming, the Bush Administration's America 2000 education strategy is "a horrible example of numerical goals, tests, rewards, but no method." Emphasizing rationalist performance measures…

  4. METHODS FOR MONITORING PUMP-AND-TREAT PERFORMANCE

    EPA Science Inventory

    Since the 1980s, numerous pump-and-treat systems have been constructed to: (1) hydraulically contain contaminated ground water, and/or (2) restore ground-water quality to meet a desired standard such as background quality or Maximum Contaminant Level (MCL) concentrations for drin...

  5. Standardized Radiation Shield Design Methods: 2005 HZETRN

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Tripathi, Ram K.; Badavi, Francis F.; Cucinotta, Francis A.

    2006-01-01

    Research conducted at the Langley Research Center through 1995, resulting in the HZETRN code, provides the current basis for shield design methods according to NASA STD-3000 (2005). With this new prominence, the database, basic numerical procedures, and algorithms are being re-examined, with new methods of verification and validation being implemented to capture a well-defined algorithm for the engineering design processes to be used in this early development phase of the Bush initiative. This process provides the methodology to transform the 1995 HZETRN research code into the 2005 HZETRN engineering code to be available for these early design processes. In this paper, we review the basic derivations, including new corrections to the codes to ensure improved numerical stability, and provide benchmarks for code verification.

  6. A numerical algorithm for optimal feedback gains in high dimensional linear quadratic regulator problems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1991-01-01

    A hybrid method for computing the feedback gains in linear quadratic regulator problems is proposed. The method, which combines the use of a Chandrasekhar-type system with an iteration of the Newton-Kleinman form with variable-acceleration-parameter Smith schemes, is formulated to compute the feedback gains directly and efficiently, rather than via solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large-dimensional systems such as those arising in approximating infinite-dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector (Potter, Laub-Schur) based techniques are discussed, and numerical evidence of the efficacy of these ideas is presented.
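
    The Newton-Kleinman iteration that the hybrid method builds on can be sketched for a tiny dense problem: each step solves a Lyapunov equation for the current closed-loop matrix and updates the gain. The Kronecker-based Lyapunov solve below is for illustration only and does not scale; the paper's contribution is precisely avoiding such dense Riccati-route computations for large systems.

    ```python
    import numpy as np

    def solve_lyapunov(F, Q):
        """Solve F^T X + X F = -Q via Kronecker vectorization.
        Dense and expensive -- fine only for a tiny illustration."""
        n = F.shape[0]
        I = np.eye(n)
        M = np.kron(I, F.T) + np.kron(F.T, I)
        return np.linalg.solve(M, -Q.reshape(-1)).reshape(n, n)

    def newton_kleinman(A, B, Q, R, K, iters=20):
        """Newton-Kleinman iteration for the LQR Riccati equation:
        given a stabilizing gain K, each step solves a Lyapunov equation
        for the closed-loop matrix A - B K and refreshes K."""
        for _ in range(iters):
            F = A - B @ K
            P = solve_lyapunov(F, Q + K.T @ R @ K)
            K = np.linalg.solve(R, B.T @ P)
        return K, P

    # Double-integrator test problem with a known Riccati solution.
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0], [1.0]])
    Q = np.eye(2)
    R = np.array([[1.0]])
    K0 = np.array([[1.0, 2.0]])   # any stabilizing initial gain works
    K, P = newton_kleinman(A, B, Q, R, K0)
    ```

    Starting from any stabilizing gain, the iteration converges quadratically to the stabilizing Riccati solution (here P = [[√3, 1], [1, √3]] for the double integrator), which is the fixed point the hybrid scheme also targets.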

  7. A numerical algorithm for optimal feedback gains in high dimensional LQR problems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1986-01-01

    A hybrid method for computing the feedback gains in linear quadratic regulator problems is proposed. The method, which combines the use of a Chandrasekhar-type system with an iteration of the Newton-Kleinman form with variable-acceleration-parameter Smith schemes, is formulated so as to compute the feedback gains directly and efficiently, rather than via solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large-dimensional systems such as those arising in approximating infinite-dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector (Potter, Laub-Schur) based techniques are discussed, and numerical evidence of the efficacy of our ideas is presented.

  8. The P1-RKDG method for two-dimensional Euler equations of gas dynamics

    NASA Technical Reports Server (NTRS)

    Cockburn, Bernardo; Shu, Chi-Wang

    1991-01-01

    A class of nonlinearly stable Runge-Kutta local projection discontinuous Galerkin (RKDG) finite element methods for conservation laws is investigated. The two-dimensional Euler equations of gas dynamics are solved using P1 elements. The generalization of the local projections, which for scalar nonlinear conservation laws was designed to satisfy a local maximum principle, to systems of conservation laws such as the Euler equations of gas dynamics using local characteristic decompositions is discussed. Numerical examples include the standard regular shock reflection problem, the forward-facing step problem, and the double Mach reflection problem. These preliminary numerical examples are chosen to show the capacity of the approach to obtain nonlinearly stable results comparable with modern nonoscillatory finite difference methods.

  9. Reduction of numerical diffusion in three-dimensional vortical flows using a coupled Eulerian/Lagrangian solution procedure

    NASA Technical Reports Server (NTRS)

    Felici, Helene M.; Drela, Mark

    1993-01-01

    A new approach based on the coupling of an Eulerian and a Lagrangian solver, aimed at reducing the numerical diffusion errors of standard Eulerian time-marching finite-volume solvers, is presented. The approach is applied to the computation of the secondary flow in two bent pipes and the flow around a 3D wing. Using convective point markers, the Lagrangian approach provides a correction to the basic Eulerian solution, while the Eulerian flow field, in turn, is used to integrate the Lagrangian state vector in time. A comparison of coarse- and fine-grid Eulerian solutions makes it possible to identify the numerical diffusion. It is shown that the Eulerian/Lagrangian approach is an effective method for reducing numerical diffusion errors.
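
    The numerical diffusion that such coupled approaches target is easy to exhibit in one dimension: a first-order upwind (Eulerian) update smears an advected profile, while Lagrangian markers carry their values along characteristics without loss. This is a toy comparison, not the authors' 3-D solver.

    ```python
    import numpy as np

    # Advect a Gaussian at unit speed on a periodic domain and compare a
    # first-order upwind (Eulerian) update with Lagrangian markers.
    nx, L, T = 200, 2.0, 1.0
    dx = L / nx
    dt = 0.5 * dx                 # CFL = 0.5
    x = np.arange(nx) * dx
    u0 = np.exp(-0.5 * ((x - 0.5) / 0.05) ** 2)

    u = u0.copy()
    for _ in range(int(round(T / dt))):
        u -= (dt / dx) * (u - np.roll(u, 1))   # upwind difference (speed > 0)

    markers_x = (x + T) % L       # exact characteristics: x(T) = x0 + a*T
    markers_u = u0.copy()         # values are constant along characteristics

    peak_eulerian = u.max()
    peak_lagrangian = markers_u.max()
    ```

    After one transit the upwind peak has decayed substantially while the marker values are unchanged, which is exactly the error budget the Lagrangian correction in the abstract recovers.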

  10. Evaluation of Advanced Stirling Convertor Net Heat Input Correlation Methods Using a Thermal Standard

    NASA Technical Reports Server (NTRS)

    Briggs, Maxwell H.; Schifer, Nicholas A.

    2012-01-01

    The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot end and cold end temperatures, and specified electrical power output for a given net heat input. In an effort to improve net heat input predictions, numerous tasks have been performed which provided a more accurate value for net heat input into the ASCs, including testing validation hardware, known as the Thermal Standard, to provide a direct comparison to numerical and empirical models used to predict convertor net heat input. This validation hardware provided a comparison for scrutinizing and improving empirical correlations and numerical models of ASC-E2 net heat input. This hardware simulated the characteristics of an ASC-E2 convertor in both an operating and non-operating mode. This paper describes the Thermal Standard testing and the conclusions of the validation effort applied to the empirical correlation methods used by the Radioisotope Power System (RPS) team at NASA Glenn.

  11. An Efficient Numerical Approach for Nonlinear Fokker-Planck equations

    NASA Astrophysics Data System (ADS)

    Otten, Dustin; Vedula, Prakash

    2009-03-01

    Fokker-Planck equations that are nonlinear with respect to their probability densities, which occur in many nonequilibrium systems relevant to mean-field interaction models, plasmas, and classical fermions and bosons, can be challenging to solve numerically. To address some underlying challenges in obtaining numerical solutions, we propose a quadrature-based moment method for efficient and accurate determination of transient (and stationary) solutions of nonlinear Fokker-Planck equations. In this approach the distribution function is represented as a collection of Dirac delta functions with corresponding quadrature weights and locations, which are in turn determined from constraints based on the evolution of generalized moments. Properties of the distribution function can be obtained by solution of transport equations for the quadrature weights and locations. We apply this computational approach to a wide range of problems, including the Desai-Zwanzig Model (for nonlinear muscular contraction) and multivariate nonlinear Fokker-Planck equations describing classical fermions and bosons, and demonstrate good agreement with results obtained from Monte Carlo and other standard numerical methods.
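
The moment-inversion step at the heart of such quadrature-based methods can be sketched for the simplest two-node case. The formulas below are the standard 1-D inversion from the first four moments, not the authors' code:

```python
# Two-node quadrature moment inversion (standard QMOM building block,
# illustrative only): recover Dirac-delta nodes (w1, x1), (w2, x2) whose
# first four moments match m0..m3. Assumes a nondegenerate distribution
# (the 2x2 system below is then solvable).
import math

def two_node_quadrature(m0, m1, m2, m3):
    # Coefficients of the monic orthogonal polynomial x^2 - c1*x - c0,
    # fixed by the moment conditions:
    #   c1*m1 + c0*m0 = m2,   c1*m2 + c0*m1 = m3
    det = m1 * m1 - m0 * m2
    c1 = (m2 * m1 - m0 * m3) / det
    c0 = (m1 * m3 - m2 * m2) / det
    # Abscissas are the roots of x^2 - c1*x - c0 = 0.
    disc = math.sqrt(c1 * c1 + 4.0 * c0)
    x1, x2 = (c1 + disc) / 2.0, (c1 - disc) / 2.0
    # Weights from m0 = w1 + w2 and m1 = w1*x1 + w2*x2.
    w1 = (m1 - m0 * x2) / (x1 - x2)
    w2 = m0 - w1
    return (w1, x1), (w2, x2)

# Moments of a standard Gaussian: m0=1, m1=0, m2=1, m3=0.
(w1, x1), (w2, x2) = two_node_quadrature(1.0, 0.0, 1.0, 0.0)
```

In the full method, transport equations evolve the moments in time and this inversion is repeated each step to recover the weights and locations.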

  12. Fast numerics for the spin orbit equation with realistic tidal dissipation and constant eccentricity

    NASA Astrophysics Data System (ADS)

    Bartuccelli, Michele; Deane, Jonathan; Gentile, Guido

    2017-08-01

    We present an algorithm for the rapid numerical integration of a time-periodic ODE with a small dissipation term that is C^1 in the velocity. Such an ODE arises as a model of spin-orbit coupling in a star/planet system, and the motivation for devising a fast algorithm for its solution comes from the desire to estimate the probability of capture into various solutions, via Monte Carlo simulation: the integration times are very long, since we are interested in phenomena occurring on timescales of the order of 10^6-10^7 years. The proposed algorithm is based on the high-order Euler method described in Bartuccelli et al. (Celest Mech Dyn Astron 121(3):233-260, 2015), and it requires computer algebra to set up the code for its implementation. The payoff is an overall increase in speed by a factor of about 7.5 compared to standard numerical methods. Means for accelerating the purely numerical computation are also discussed.

  13. High-performance modeling of plasma-based acceleration and laser-plasma interactions

    NASA Astrophysics Data System (ADS)

    Vay, Jean-Luc; Blaclard, Guillaume; Godfrey, Brendan; Kirchen, Manuel; Lee, Patrick; Lehe, Remi; Lobet, Mathieu; Vincenti, Henri

    2016-10-01

    Large-scale numerical simulations are essential to the design of plasma-based accelerators and of laser-plasma interactions for ultra-high intensity (UHI) physics. The electromagnetic Particle-In-Cell (PIC) approach is the method of choice for self-consistent simulations, as it is based on first principles, captures all kinetic effects, and scales favorably to many cores on supercomputers. The standard PIC algorithm relies on second-order finite-difference discretization of the Maxwell and Newton-Lorentz equations. We present here novel formulations, based on very high-order pseudo-spectral Maxwell solvers, which enable near-total elimination of the numerical Cherenkov instability and increased accuracy over the standard PIC method for standard laboratory-frame and Lorentz-boosted-frame simulations. We also present the latest implementations of the PIC modules Warp-PICSAR and FBPIC on the Intel Xeon Phi and GPU architectures. Examples of applications are given for the simulation of laser-plasma accelerators and of high-harmonic generation with plasma mirrors. Work supported by US-DOE Contracts DE-AC02-05CH11231 and by the European Commission through the Marie Skłodowska-Curie fellowship PICSSAR, Grant Number 624543. This work used resources of NERSC.

  14. Steering Quantum Dynamics of a Two-Qubit System via Optimal Bang-Bang Control

    NASA Astrophysics Data System (ADS)

    Hu, Juju; Ke, Qiang; Ji, Yinghua

    2018-02-01

    The optimization of control time for quantum systems has been an important topic in control science for decades, since shorter control times improve efficiency and suppress decoherence caused by the environment. After analyzing the advantages and disadvantages of existing Lyapunov control, we investigate fast state control in a closed two-qubit quantum system using a bang-bang optimal control technique, and give three optimized control-field design methods. Numerical simulation experiments demonstrate the effectiveness of the methods. Compared to the standard Lyapunov control or the standard bang-bang control method, the optimized control-field design methods effectively shorten the state control time and avoid the high-frequency oscillation that occurs in bang-bang control.

  15. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
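
The interval idea can be illustrated with a toy interval type. The paper itself uses INTLAB under MATLAB; this sketch ignores outward rounding and the dependency problem of repeated variables:

```python
# Illustrative sketch only (not INTLAB): a tiny interval type propagates
# worst-case bounds through a formula automatically, replacing hand-derived
# error-propagation formulas. Outward rounding and variable-dependency
# handling, which a real interval library provides, are omitted.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

    def width(self):
        return self.hi - self.lo

# Measurements x = 2.0 +/- 0.1 and y = 3.0 +/- 0.2:
x = Interval(1.9, 2.1)
y = Interval(2.8, 3.2)
z = x * y + x      # guaranteed bounds on x*y + x, no derivatives needed
```

For complicated formulas this is the labor saving the abstract refers to: the bounds come out of the arithmetic itself rather than from differentiating the formula term by term.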

  16. A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    2017-06-22

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain, and subsequently solves a preconditioned ℓ1-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. Numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.

  19. Using SWAT to enhance watershed-based plans to meet numeric water quality standards

    USDA-ARS?s Scientific Manuscript database

    The number of states that have adopted numeric nutrient water-quality standards has increased to 23, up from ten in 1998. One state with both stream and reservoir phosphorus (P) numeric water-quality standards is Oklahoma. There were two primary objectives of this research: (1) determine if Oklaho...

  20. Numerical solutions of ideal quantum gas dynamical flows governed by semiclassical ellipsoidal-statistical distribution.

    PubMed

    Yang, Jaw-Yen; Yan, Chih-Yuan; Diaz, Manuel; Huang, Juan-Chen; Li, Zhihui; Zhang, Hanxin

    2014-01-08

    The ideal quantum gas dynamics as manifested by the semiclassical ellipsoidal-statistical (ES) equilibrium distribution derived in Wu et al. (2012, Proc. R. Soc. A 468, 1799-1823; doi:10.1098/rspa.2011.0673) is numerically studied for particles of three statistics. This anisotropic ES equilibrium distribution was derived using the maximum entropy principle and conserves the mass, momentum and energy, but differs from the standard Fermi-Dirac or Bose-Einstein distribution. The present numerical method combines the discrete velocity (or momentum) ordinate method in momentum space and the high-resolution shock-capturing method in physical space. A decoding procedure to obtain the necessary parameters for determining the ES distribution is also devised. Computations of two-dimensional Riemann problems are presented, and various contours of the quantities unique to this ES model are illustrated. The main flow features, such as shock waves, expansion waves and slip lines and their complex nonlinear interactions, are depicted and found to be consistent with existing calculations for a classical gas.

  1. Three-dimensional time-dependent computer modeling of the electrothermal atomizers for analytical spectrometry

    NASA Astrophysics Data System (ADS)

    Tsivilskiy, I. V.; Nagulin, K. Yu.; Gilmutdinov, A. Kh.

    2016-02-01

    A full three-dimensional nonstationary numerical model of graphite electrothermal atomizers of various types is developed. The model is based on the solution of a heat equation within the solid walls of the atomizer, with radiative heat transfer, and on the numerical solution of the full set of Navier-Stokes equations with an energy equation for the gas. Governing equations for the behavior of the discrete phase, i.e., atomic particles suspended in the gas (including the gas-phase processes of evaporation and condensation), are derived from the formal equations of molecular kinetics by numerical solution of the Hertz-Langmuir equation. The model is tested on the following atomizers: a Varian standard electrothermal vaporizer (ETV), a Perkin Elmer standard transversely heated graphite tube with integrated platform (THGA), and the original double-stage tube-helix atomizer (DSTHA). The computer calculations are verified experimentally by shadow spectral visualization of the spatial distributions of atomic and molecular vapors in the analytical space of an atomizer.

  2. Feedback Integrators

    NASA Astrophysics Data System (ADS)

    Chang, Dong Eui; Jiménez, Fernando; Perlmutter, Matthew

    2016-12-01

    A new method is proposed to numerically integrate a dynamical system on a manifold such that the trajectory stably remains on the manifold and preserves the first integrals of the system. The idea is that given an initial point in the manifold we extend the dynamics from the manifold to its ambient Euclidean space and then modify the dynamics outside the intersection of the manifold and the level sets of the first integrals containing the initial point such that the intersection becomes a unique local attractor of the resultant dynamics. While the modified dynamics theoretically produces the same trajectory as the original dynamics, it yields a numerical trajectory that stably remains on the manifold and preserves the first integrals. The big merit of our method is that the modified dynamics can be integrated with any ordinary numerical integrator such as Euler or Runge-Kutta. We illustrate this method by applying it to three famous problems: the free rigid body, the Kepler problem and a perturbed Kepler problem with rotational symmetry. We also carry out simulation studies to demonstrate the excellence of our method and make comparisons with the standard projection method, a splitting method and Störmer-Verlet schemes.
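
The core trick, extending the vector field so that the constraint set becomes an attractor and then using any off-the-shelf integrator, can be sketched on the unit circle (the feedback gain k below is illustrative, not from the paper):

```python
# Sketch of the feedback-integrator idea on the simplest example: planar
# rotation, whose trajectories live on circles x^2 + y^2 = const. A feedback
# term (gain k, chosen here for illustration) makes the unit circle a local
# attractor of the modified field; on the circle itself the term vanishes,
# so the dynamics there are unchanged, yet plain Euler now stays near it.

def modified_field(x, y, k):
    c = x * x + y * y - 1.0          # deviation from the invariant circle
    return -y - k * c * x, x - k * c * y

def euler(x, y, h, steps, k):
    for _ in range(steps):
        fx, fy = modified_field(x, y, k)
        x, y = x + h * fx, y + h * fy
    return x, y

# Plain Euler (k=0) spirals off the circle; with feedback (k=1) it stays on.
x0, y0, h, n = 1.0, 0.0, 0.01, 10000
xd, yd = euler(x0, y0, h, n, k=0.0)
xf, yf = euler(x0, y0, h, n, k=1.0)
drift_plain = abs(xd * xd + yd * yd - 1.0)
drift_fb = abs(xf * xf + yf * yf - 1.0)
```

With these parameters the unmodified Euler trajectory grows its radius exponentially, while the feedback version holds the invariant to within a few times the step size, illustrating why an ordinary integrator suffices once the dynamics are modified.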

  3. Equalizing resolution in smoothed-particle hydrodynamics calculations using self-adaptive sinc kernels

    NASA Astrophysics Data System (ADS)

    García-Senz, Domingo; Cabezón, Rubén M.; Escartín, José A.; Ebinger, Kevin

    2014-10-01

    Context. The smoothed-particle hydrodynamics (SPH) technique is a numerical method for solving gas-dynamical problems. It has been applied to simulate the evolution of a wide variety of astrophysical systems. The method has a second-order accuracy, with a resolution that is usually much higher in the compressed regions than in the diluted zones of the fluid. Aims: We propose and check a method to balance and equalize the resolution of SPH between high- and low-density regions. This method relies on the versatility of a family of interpolators called sinc kernels, which allows increasing the interpolation quality by varying only a single parameter (the exponent of the sinc function). Methods: The proposed method was checked and validated through a number of numerical tests, from standard one-dimensional Riemann problems in shock tubes, to multidimensional simulations of explosions, hydrodynamic instabilities, and the collapse of a Sun-like polytrope. Results: The analysis of the hydrodynamical simulations suggests that the scheme devised to equalize the accuracy improves the treatment of the post-shock regions and, in general, of the rarefacted zones of fluids while causing no harm to the growth of hydrodynamic instabilities. The method is robust and easy to implement with a low computational overload. It conserves mass, energy, and momentum and reduces to the standard SPH scheme in regions of the fluid that have smooth density gradients.

  4. Time-of-flight PET time calibration using data consistency

    NASA Astrophysics Data System (ADS)

    Defrise, Michel; Rezaei, Ahmadreza; Nuyts, Johan

    2018-05-01

    This paper presents new data-driven methods for the time-of-flight (TOF) calibration of positron emission tomography (PET) scanners. These methods are derived from the consistency condition for TOF PET; they can be applied to data measured with an arbitrary tracer distribution and are numerically efficient because they do not require a preliminary image reconstruction from the non-TOF data. Two-dimensional simulations are presented for one of the methods, which only involves the first two moments of the data with respect to the TOF variable. The numerical results show that this method estimates the detector timing offsets with errors that are larger than those obtained via an initial non-TOF reconstruction, but that remain smaller than the TOF resolution and thereby have a limited impact on the quantitative accuracy of the activity image estimated with standard maximum-likelihood reconstruction algorithms.
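
A hypothetical toy version of the first-moment idea (noise-free synthetic data, not the paper's full consistency-condition estimator): each coincidence between detectors i and j carries a TOF shift offset_i - offset_j, so pairwise first moments determine the per-detector offsets up to a global constant.

```python
# Hypothetical sketch (names and data are illustrative): recover per-detector
# timing offsets from the first moments of pairwise TOF residuals. With
# noise-free data d[i][j] = offset_i - offset_j, row averages give each
# offset up to the common mean; fixing detector 0 pins the gauge.

true = [0.0, 0.3, -0.2, 0.5]          # unknown per-detector timing offsets
n = len(true)

# First moment of the TOF residual for each detector pair (noise-free here):
d = [[true[i] - true[j] for j in range(n)] for i in range(n)]

raw = [sum(d[i]) / n for i in range(n)]   # offset_i minus the mean offset
est = [r - raw[0] for r in raw]           # gauge fixed at detector 0
```

The real methods work with measured list-mode data and the full consistency condition, but the gauge freedom (offsets only determined up to a constant) is the same.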

  5. A Floating Node Method for the Modelling of Discontinuities Within a Finite Element

    NASA Technical Reports Server (NTRS)

    Pinho, Silvestre T.; Chen, B. Y.; DeCarvalho, Nelson V.; Baiz, P. M.; Tay, T. E.

    2013-01-01

    This paper focuses on the accurate numerical representation of complex networks of evolving discontinuities in solids, with particular emphasis on cracks. The limitation of the standard finite element method (FEM) in approximating discontinuous solutions has motivated the development of re-meshing, smeared crack models, the eXtended Finite Element Method (XFEM) and the Phantom Node Method (PNM). We propose a new method which has some similarities to the PNM, but crucially: (i) does not introduce an error on the crack geometry when mapping to natural coordinates; (ii) does not require numerical integration over only part of a domain; (iii) can incorporate weak discontinuities and cohesive cracks more readily; (iv) is ideally suited for the representation of multiple and complex networks of (weak, strong and cohesive) discontinuities; (v) leads to the same solution as a finite element mesh where the discontinuity is represented explicitly; and (vi) is conceptually simpler than the PNM.

  6. Statistical analysis of loopy belief propagation in random fields

    NASA Astrophysics Data System (ADS)

    Yasuda, Muneki; Kataoka, Shun; Tanaka, Kazuyuki

    2015-10-01

    Loopy belief propagation (LBP), which is equivalent to the Bethe approximation in statistical mechanics, is a message-passing-type inference method that is widely used to analyze systems based on Markov random fields (MRFs). In this paper, we propose a message-passing-type method to analytically evaluate the quenched average of LBP in random fields by using the replica cluster variation method. The proposed analytical method is applicable to general pairwise MRFs with random fields whose distributions differ from each other and can give the quenched averages of the Bethe free energies over random fields, which are consistent with numerical results. The order of its computational cost is equivalent to that of standard LBP. In the latter part of this paper, we describe the application of the proposed method to Bayesian image restoration, in which we observed that our theoretical results are in good agreement with the numerical results for natural images.
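
The sum-product message passing that LBP iterates can be sketched on a small pairwise MRF. On this 3-node chain (a tree) the messages give exact marginals, so the sketch is checkable against brute-force enumeration; the potentials are made up for illustration:

```python
# Sum-product (belief propagation) on a 3-node binary chain MRF:
#   p(x) ∝ psi0(x0) psi1(x1) psi2(x2) phi(x0,x1) phi(x1,x2)
# On a tree, BP is exact; LBP applies the same message updates iteratively
# on loopy graphs. Potentials below are arbitrary illustrative numbers.
import itertools

psi = [[1.0, 0.6], [0.8, 1.2], [1.0, 1.0]]   # unary (random-field) factors
phi = [[[1.5, 0.5], [0.5, 1.5]],             # edge 0-1: favours agreement
       [[1.5, 0.5], [0.5, 1.5]]]             # edge 1-2

def brute_marginal(i):
    z = [0.0, 0.0]
    for x in itertools.product([0, 1], repeat=3):
        w = (psi[0][x[0]] * psi[1][x[1]] * psi[2][x[2]]
             * phi[0][x[0]][x[1]] * phi[1][x[1]][x[2]])
        z[x[i]] += w
    s = sum(z)
    return [v / s for v in z]

def bp_marginal_node1():
    # Messages into node 1 from its two neighbours, then the local belief.
    m01 = [sum(psi[0][a] * phi[0][a][b] for a in (0, 1)) for b in (0, 1)]
    m21 = [sum(psi[2][c] * phi[1][b][c] for c in (0, 1)) for b in (0, 1)]
    bel = [psi[1][b] * m01[b] * m21[b] for b in (0, 1)]
    s = sum(bel)
    return [v / s for v in bel]
```

The paper's analysis concerns averaging such LBP fixed points over random unary fields; the message structure being averaged is exactly the one above.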

  7. Fully coupled methods for multiphase morphodynamics

    NASA Astrophysics Data System (ADS)

    Michoski, C.; Dawson, C.; Mirabito, C.; Kubatko, E. J.; Wirasaet, D.; Westerink, J. J.

    2013-09-01

    We present numerical methods for a system of equations consisting of the two dimensional Saint-Venant shallow water equations (SWEs) fully coupled to a completely generalized Exner formulation of hydrodynamically driven sediment discharge. This formulation is implemented by way of a discontinuous Galerkin (DG) finite element method, using a Roe Flux for the advective components and the unified form for the dissipative components. We implement a number of Runge-Kutta time integrators, including a family of strong stability preserving (SSP) schemes, and Runge-Kutta Chebyshev (RKC) methods. A brief discussion is provided regarding implementational details for generalizable computer algebra tokenization using arbitrary algebraic fluxes. We then run numerical experiments to show standard convergence rates, and discuss important mathematical and numerical nuances that arise due to prominent features in the coupled system, such as the emergence of nondifferentiable and sharp zero crossing functions, radii of convergence in manufactured solutions, and nonconservative product (NCP) formalisms. Finally we present a challenging application model concerning hydrothermal venting across metalliferous muds in the presence of chemical reactions occurring in low pH environments.

  8. Proper Generalized Decomposition (PGD) for the numerical simulation of polycrystalline aggregates under cyclic loading

    NASA Astrophysics Data System (ADS)

    Nasri, Mohamed Aziz; Robert, Camille; Ammar, Amine; El Arem, Saber; Morel, Franck

    2018-02-01

    The numerical modelling of the behaviour of materials at the microstructural scale has developed greatly over the last two decades. Unfortunately, conventional solution methods cannot simulate polycrystalline aggregates beyond a few tens of loading cycles, and the plastic behaviour prevents them from remaining quantitative. This work presents the development of a numerical solver for finite element modelling of polycrystalline aggregates subjected to cyclic mechanical loading. The method is based on two concepts: the first consists in maintaining a constant stiffness matrix; the second uses a time/space model reduction method. In order to analyse the applicability and performance of a space-time separated representation, simulations are carried out on a three-dimensional polycrystalline aggregate under cyclic loading, for different numbers of elements per grain and two different numbers of time increments per cycle. The results show a significant CPU time saving while maintaining good precision. Moreover, as the number of elements and of time increments per cycle increases, the model reduction method becomes increasingly faster than the standard solver.
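
The space-time separated representation underlying such model reduction can be illustrated on a toy field: a greedy rank-one approximation U(x,t) ≈ X(x)T(t) computed by alternating fixed-point sweeps. This shows only the separation idea, not the authors' finite element solver:

```python
# Toy illustration of a space-time separated representation (the idea behind
# PGD-type model reduction, not the paper's FE solver): greedily approximate
# a space-time field U[x][t] by a rank-one product X(x)*T(t), computed by
# alternating fixed-point sweeps (each sweep solves for one factor with the
# other frozen). The sample field below is separable plus a small perturbation.
import math

def rank_one(U, sweeps=50):
    nx, nt = len(U), len(U[0])
    T = [1.0] * nt
    X = [0.0] * nx
    for _ in range(sweeps):
        tt = sum(v * v for v in T)
        X = [sum(U[i][j] * T[j] for j in range(nt)) / tt for i in range(nx)]
        xx = sum(v * v for v in X)
        T = [sum(U[i][j] * X[i] for i in range(nx)) / xx for j in range(nt)]
    return X, T

def norm(U):
    return math.sqrt(sum(v * v for row in U for v in row))

xs = [0.1 * i for i in range(20)]
ts = [0.05 * j for j in range(30)]
U = [[math.sin(x) * math.exp(-t) + 0.01 * math.cos(3 * x + t)
      for t in ts] for x in xs]

X1, T1 = rank_one(U)
R = [[U[i][j] - X1[i] * T1[j] for j in range(len(ts))] for i in range(len(xs))]
```

One separated mode already captures the field to about the size of the perturbation; a PGD solver builds such modes directly from the weak form of the problem and enriches the sum mode by mode.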

  9. Essentially nonoscillatory postprocessing filtering methods

    NASA Technical Reports Server (NTRS)

    Lafon, F.; Osher, S.

    1992-01-01

    High-order accurate centered flux approximations used in the computation of numerical solutions to nonlinear partial differential equations produce large oscillations in regions of sharp transitions. Here, we present a new class of filtering methods, denoted Essentially Nonoscillatory Least Squares (ENOLS), which constructs an upgraded filtered solution that is close to the physically correct weak solution of the original evolution equation. Our method relies on the evaluation of a least-squares polynomial approximation to oscillatory data using a set of points determined via the ENO network. Numerical results are given in one and two space dimensions for both scalar and systems of hyperbolic conservation laws. Computational running time, efficiency, and robustness of the method are illustrated in various examples, such as Riemann initial data for both Burgers' equation and Euler's equations of gas dynamics. In all standard cases, the filtered solution appears to converge numerically to the correct solution of the original problem. Some interesting results are also obtained with our filters for nonstandard central difference schemes that exactly preserve entropy but have recently been shown, in general, not to be weakly convergent to a solution of the conservation law.

  10. Numerical analysis of whole-body cryotherapy chamber design improvement

    NASA Astrophysics Data System (ADS)

    Yerezhep, D.; Tukmakova, A. S.; Fomin, V. E.; Masalimov, A.; Asach, A. V.; Novotelnova, A. V.; Baranov, A. Yu

    2018-05-01

    Whole-body cryotherapy (WBC) is a state-of-the-art method that uses cold for the treatment and prevention of diseases. The treatment exposes the human body to cryogenic gas inside a special cryochamber. The temperature field in the chamber is of great importance, since local over-cooling of the integument may occur. A numerical simulation of WBC has been carried out, and a modification of the chamber design has been proposed in order to increase the uniformity of the internal temperature field. The results have been compared with those obtained for a standard chamber design: with a curved wall of suitable height, the temperature gradient formed in the chamber is almost halved. The proposed modification may increase both the safety and the comfort of cryotherapy.

  11. Modeling and simulation of different and representative engineering problems using Network Simulation Method

    PubMed Central

    Sánchez-Pérez, J F; Marín, F; Morales, J L; Cánovas, M; Alhama, F

    2018-01-01

    Mathematical models simulating several different and representative engineering problems (atomic dry friction, moving-front problems, and elastic and solid mechanics) are presented in the form of sets of coupled or uncoupled non-linear differential equations. For the various parameter values that influence the solution, the problems are solved numerically by the network method, which provides all the variables of the problem. Although the models are extremely sensitive to these parameters, no linearization assumptions are made. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data published in the scientific literature to show the reliability of the model. PMID:29518121

  12. A multilevel control system for the large space telescope. [numerical analysis/optimal control

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.; Sundareshan, S. K.; Vukcevic, M. B.

    1975-01-01

    A multilevel scheme was proposed for control of the Large Space Telescope (LST), modeled by a three-axis, sixth-order nonlinear equation. Local controllers were used at the subsystem level to stabilize the motions corresponding to the three axes, and global controllers were applied to reduce (and sometimes nullify) the interactions among the subsystems. A multilevel optimization method was developed whereby local quadratic optimizations were performed at the subsystem level, and global control was again used to reduce (or nullify) the effect of interactions. The multilevel stabilization and optimization methods are presented as general design tools and then used in the design of the LST control system. The methods are entirely computerized, so that they can accommodate higher-order LST models, with both conceptual and numerical advantages over standard straightforward design techniques.

  14. Novel numerical and graphical representation of DNA sequences and proteins.

    PubMed

    Randić, M; Novic, M; Vikić-Topić, D; Plavsić, D

    2006-12-01

    We have introduced novel numerical and graphical representations of DNA, which offer a simple and unique characterization of DNA sequences. The numerical representation of a DNA sequence is given as a sequence of real numbers derived from a unique graphical representation of the standard genetic code. There is no loss of information on the primary structure of a DNA sequence associated with this numerical representation. The novel representations are illustrated with the coding sequences of the first exon of beta-globin gene of half a dozen species in addition to human. The method can be extended to proteins as is exemplified by humanin, a 24-aa peptide that has recently been identified as a specific inhibitor of neuronal cell death induced by familial Alzheimer's disease mutant genes.

  15. An acoustic-convective splitting-based approach for the Kapila two-phase flow model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ten Eikelder, M.F.P. (Eindhoven University of Technology, Department of Mathematics and Computer Science; E-mail: m.f.p.teneikelder@tudelft.nl); Daude, F.

    In this paper we propose a new acoustic-convective splitting-based numerical scheme for the Kapila five-equation two-phase flow model. The splitting operator decouples the acoustic waves and convective waves. The resulting two submodels are alternately numerically solved to approximate the solution of the entire model. The Lagrangian form of the acoustic submodel is numerically solved using an HLLC-type Riemann solver whereas the convective part is approximated with an upwind scheme. The result is a simple method which allows for a general equation of state. Numerical computations are performed for standard two-phase shock tube problems. A comparison is made with a non-splitting approach. The results are in good agreement with reference results and exact solutions.

  16. A Parallel, Finite-Volume Algorithm for Large-Eddy Simulation of Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Bui, Trong T.

    1999-01-01

    A parallel, finite-volume algorithm has been developed for large-eddy simulation (LES) of compressible turbulent flows. This algorithm includes piecewise linear least-square reconstruction, trilinear finite-element interpolation, Roe flux-difference splitting, and second-order MacCormack time marching. Parallel implementation is done using the message-passing programming model. In this paper, the numerical algorithm is described. To validate the numerical method for turbulence simulation, LES of fully developed turbulent flow in a square duct is performed for a Reynolds number of 320 based on the average friction velocity and the hydraulic diameter of the duct. Direct numerical simulation (DNS) results are available for this test case, and the accuracy of this algorithm for turbulence simulations can be ascertained by comparing the LES solutions with the DNS results. The effects of grid resolution, upwind numerical dissipation, and subgrid-scale dissipation on the accuracy of the LES are examined. Comparison with DNS results shows that the standard Roe flux-difference splitting dissipation adversely affects the accuracy of the turbulence simulation. For accurate turbulence simulations, only 3-5 percent of the standard Roe flux-difference splitting dissipation is needed.
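
The dissipation scaling studied above can be sketched on a scalar model flux. The Roe interface flux is a central average minus an upwind dissipation term, and only that term is scaled (Burgers' flux stands in for the 3-D solver here; scale=1.0 is standard Roe, and the paper found roughly 0.03-0.05 preferable for turbulence resolution):

```python
# Scalar sketch of scaled Roe flux-difference splitting (illustrative only;
# the paper's solver is 3-D and multi-equation). The interface flux is a
# central average plus an upwind dissipation term, and 'scale' multiplies
# only the dissipation.

def roe_flux_burgers(uL, uR, scale=1.0):
    f = lambda u: 0.5 * u * u
    a = 0.5 * (uL + uR)                 # Roe-averaged wave speed for Burgers
    central = 0.5 * (f(uL) + f(uR))
    dissipation = 0.5 * abs(a) * (uR - uL)
    return central - scale * dissipation

full = roe_flux_burgers(0.0, 1.0, scale=1.0)   # standard Roe (fully upwind)
low = roe_flux_burgers(0.0, 1.0, scale=0.05)   # 5% dissipation, near-central
```

With scale=1.0 the flux reduces to the upwind value; with scale=0.05 it is nearly the central average, which is what preserves the small turbulent scales in the LES.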

  17. A Ghost Fluid/Level Set Method for boiling flows and liquid evaporation: Application to the Leidenfrost effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rueda Villegas, Lucia; Alis, Romain; Lepilliez, Mathieu

    2016-07-01

    The development of numerical methods for the direct numerical simulation of two-phase flows with phase change, in the framework of interface capturing or interface tracking methods, is the main topic of this study. We propose a novel numerical method, which allows dealing with both evaporation and boiling at the interface between a liquid and a gas. Indeed, in some specific situations involving very heterogeneous thermodynamic conditions at the interface, the distinction between boiling and evaporation is not always possible. For instance, it can occur for a Leidenfrost droplet; a water drop levitating above a hot plate whose temperature is much higher than the boiling temperature. In this case, boiling occurs in the film of saturated vapor which is entrapped between the bottom of the drop and the plate, whereas the top of the water droplet evaporates in contact with ambient air. The situation can also be ambiguous for a superheated droplet or at the contact line between a liquid and a hot wall whose temperature is higher than the saturation temperature of the liquid. In these situations, the interface temperature can locally reach the saturation temperature (boiling point), for instance near a contact line, and be cooler in other places. Thus, boiling and evaporation can occur simultaneously on different regions of the same liquid interface, or occur successively at different times in the history of an evaporating droplet. Standard numerical methods are not able to perform computations in these transient regimes; therefore, we propose in this paper a novel numerical method to achieve this challenging task. Finally, we present several accuracy validations against theoretical solutions and experimental results to strengthen the relevance of this new method.

  18. The vector radiative transfer numerical model of coupled ocean-atmosphere system using the matrix-operator method

    NASA Astrophysics Data System (ADS)

    Xianqiang, He; Delu, Pan; Yan, Bai; Qiankun, Zhu

    2005-10-01

    The numerical model of the vector radiative transfer of the coupled ocean-atmosphere system is developed based on the matrix-operator method and is named PCOART. In PCOART, using Fourier analysis, the vector radiative transfer equation (VRTE) is split into a set of independent equations with the zenith angle as the only angular coordinate. Using the Gaussian-quadrature method, the VRTE is then transformed into a matrix equation, which is solved using the adding-doubling method. According to the reflective and refractive properties of the ocean-atmosphere interface, the vector radiative transfer models of ocean and atmosphere are coupled in PCOART. Comparison with the exact Rayleigh scattering look-up table of MODIS (Moderate-resolution Imaging Spectroradiometer) shows that PCOART is an exact numerical calculation model and that its treatment of multiple scattering and polarization is correct. Validation against the standard problems of radiative transfer in water further shows that PCOART can be used to calculate underwater radiative transfer problems. Therefore, PCOART is a useful tool for exact calculation of the vector radiative transfer of the coupled ocean-atmosphere system, which can be used to study the polarization properties of radiance in the whole ocean-atmosphere system and the remote sensing of the atmosphere and ocean.
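    The core of the adding-doubling step can be sketched in scalar form. The sketch below is an illustration under simplifying assumptions, not PCOART's implementation: in the full matrix-operator method, R and T are matrices over discrete zenith angles and the division becomes a matrix inverse; all numeric values here are invented.

```python
# Scalar sketch of the "adding" step used by matrix-operator / adding-doubling
# radiative transfer codes. In the full method R and T are matrices and the
# division below becomes a matrix inverse; scalars illustrate the rule.

def add_layers(R1, T1, R2, T2):
    """Combine two homogeneous layers (reflection R, transmission T).

    The factor 1 / (1 - R1 * R2) sums the geometric series of
    inter-layer reflections.
    """
    inter = 1.0 / (1.0 - R1 * R2)      # multiple reflections between layers
    R12 = R1 + T1 * R2 * inter * T1    # reflect off layer 2, escape back up
    T12 = T1 * inter * T2              # transmit through both layers
    return R12, T12

# Doubling: build a thick layer by repeatedly combining a layer with itself.
R, T = 0.01, 0.99                      # thin, non-absorbing starting layer
for _ in range(5):                     # optical thickness grows as 2**5
    R, T = add_layers(R, T, R, T)
```

    For a non-absorbing layer the combination rule conserves energy (R + T stays 1), which is a convenient sanity check for any adding-doubling implementation.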

  19. Recent advances in numerical PDEs

    NASA Astrophysics Data System (ADS)

    Zuev, Julia Michelle

    In this thesis, we investigate four neighboring topics, all in the general area of numerical methods for solving Partial Differential Equations (PDEs). Topic 1. Radial Basis Functions (RBF) are widely used for multi-dimensional interpolation of scattered data. This methodology offers smooth and accurate interpolants, which can be further refined, if necessary, by clustering nodes in select areas. We show, however, that local refinements with RBF (in a constant shape parameter ε regime) may lead to the oscillatory errors associated with the Runge phenomenon (RP). RP is best known in the case of high-order polynomial interpolation, where its effects can be accurately predicted via the Lebesgue constant L (which is based solely on the node distribution). We study the RP and the applicability of the Lebesgue constant (as well as other error measures) in RBF interpolation. Mainly, we allow for a spatially variable shape parameter, and demonstrate how it can be used to suppress RP-like edge effects and to improve the overall stability and accuracy. Topic 2. Although not as versatile as RBFs, cubic splines are useful for interpolating grid-based data. In 2-D, we consider a patch representation via Hermite basis functions s_ij(u, v) = Σ_mn h_mn H_m(u) H_n(v), as opposed to the standard bicubic representation. Stitching requirements for the rectangular non-equispaced grid yield a 2-D tridiagonal linear system AX = B, where X represents the unknown first derivatives. We discover that the standard methods for solving this N×M system do not take advantage of the spline-specific format of the matrix B. We develop an alternative approach using this specialization of the RHS, which allows us to pre-compute coefficients only once, instead of N times. A MATLAB implementation of our fast 2-D cubic spline algorithm is provided. We confirm analytically and numerically that for large N (N > 200), our method is at least 3 times faster than the standard algorithm and is just as accurate. Topic 3. The well-known ADI-FDTD method for solving Maxwell's curl equations is second-order accurate in space/time, unconditionally stable, and computationally efficient. We research Richardson extrapolation-based techniques to improve time discretization accuracy for spatially oversampled ADI-FDTD. A careful analysis of temporal accuracy, computational efficiency, and the algorithm's overall stability is presented. Given the context of wave-type PDEs, we find that only a limited number of extrapolations to the ADI-FDTD method are beneficial if its unconditional stability is to be preserved. We propose a practical approach for choosing the size of a time step that can be used to improve the efficiency of the ADI-FDTD algorithm, while maintaining its accuracy and stability. Topic 4. Shock waves and their energy dissipation properties are critical to understanding the dynamics controlling MHD turbulence. Numerical advection algorithms used in MHD solvers (e.g. the ZEUS package) introduce undesirable numerical viscosity. To counteract its effects and to resolve shocks numerically, Richtmyer and von Neumann's artificial viscosity is commonly added to the model. We study shock power by analyzing the influence of both artificial and numerical viscosity on energy decay rates. Also, we analytically characterize the numerical diffusivity of various advection algorithms by quantifying their diffusion coefficients.
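    The tridiagonal systems of Topic 2 are conventionally solved with the O(N) Thomas algorithm. The sketch below is a generic solver, not the thesis's specialized right-hand-side algorithm (and in Python rather than the thesis's MATLAB); the (1, 4, 1) diagonals are a typical cubic-spline pattern chosen for illustration.

```python
# Generic Thomas-algorithm sketch for the tridiagonal systems that arise from
# cubic-spline stitching conditions. Variable names are illustrative.

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = RHS."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Spline-like system with (1, 4, 1) diagonals and a constant right-hand side.
n = 5
a = [0.0] + [1.0] * (n - 1)
b = [4.0] * n
c = [1.0] * (n - 1) + [0.0]
d = [6.0] * n
x = thomas(a, b, c, d)
```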

  20. Numerical Uncertainty Analysis for Computational Fluid Dynamics using Student T Distribution -- Application of CFD Uncertainty Analysis Compared to Exact Analytical Solution

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.; Ilie, marcel; Shallhorn, Paul A.

    2014-01-01

    Computational Fluid Dynamics (CFD) is the standard numerical tool used by fluid dynamicists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties, and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically using inputs and the Student-t distribution. The approach is compared to an exact analytical solution of fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms coupled with the Student-t distribution can encompass the exact solution.
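    The basic Student-t interval that such an uncertainty analysis builds on can be sketched as follows. The sample data and the hardcoded 95% critical value are illustrative assumptions (taken from a standard t-table for 4 degrees of freedom), not values from the paper.

```python
# Sketch of a Student-t uncertainty interval on a small sample of outputs.
# Data and the critical value are illustrative, not from the paper.
import math
import statistics

samples = [101.2, 99.8, 100.5, 98.9, 100.1]   # e.g. CFD outputs from perturbed inputs
n = len(samples)
mean = statistics.fmean(samples)
s = statistics.stdev(samples)                  # sample standard deviation (n - 1)

t_crit = 2.776                                 # two-sided 95%, n - 1 = 4 dof (t-table)
half_width = t_crit * s / math.sqrt(n)         # 95% confidence half-width on the mean

interval = (mean - half_width, mean + half_width)
```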

  1. Activities: Activities to Introduce Maxima-Minima Problems.

    ERIC Educational Resources Information Center

    Pleacher, David

    1991-01-01

    Presented are student activities that involve two standard problems from geometry and calculus--the volume of a box and the bank shot on a pool table. Problem solving is emphasized as a method of inquiry and application with descriptions of the results using graphical, numerical, and physical models. (JJK)

  2. Co-state initialization for the minimum-time low-thrust trajectory optimization

    NASA Astrophysics Data System (ADS)

    Taheri, Ehsan; Li, Nan I.; Kolmanovsky, Ilya

    2017-05-01

    This paper presents an approach for co-state initialization, which is a critical step in solving minimum-time low-thrust trajectory optimization problems using indirect optimal control numerical methods. Indirect methods used in determining optimal space trajectories typically result in two-point boundary-value problems that are solved by single- or multiple-shooting numerical methods. Accurate initialization of the co-state variables facilitates the numerical convergence of iterative boundary value problem solvers. In this paper, we propose a method which exploits the trajectory generated by the so-called pseudo-equinoctial and three-dimensional finite Fourier series shape-based methods to estimate the initial values of the co-states. The performance of the approach for two interplanetary rendezvous missions, from Earth to Mars and from Earth to asteroid Dionysus, is compared against three other approaches which, respectively, exploit random initialization of co-states, an adjoint-control transformation, and a standard genetic algorithm. The results indicate that with the proposed approach the percentage of converged cases is higher for trajectories with a higher number of revolutions, while the computation time is lower. These features are advantageous for broad trajectory searches in the preliminary phase of mission design.
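    The single-shooting idea behind such indirect methods can be illustrated on a toy problem: guess the unknown initial derivative (standing in for a co-state), integrate forward, and iterate on the terminal mismatch. The BVP below (y'' = -y, y(0) = 0, y(pi/2) = 1, exact answer y'(0) = 1) is an invented example, far simpler than a low-thrust trajectory problem.

```python
# Toy single-shooting sketch: RK4 integration plus bisection on the unknown
# initial slope. Illustrative only; not the paper's co-state method.
import math

def rk4_shoot(slope, steps=200):
    """Integrate y'' = -y from t = 0 with y(0) = 0, y'(0) = slope; return y(pi/2)."""
    h = (math.pi / 2) / steps
    y, v = 0.0, slope
    f = lambda y, v: (v, -y)
    for _ in range(steps):
        k1 = f(y, v)
        k2 = f(y + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = f(y + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = f(y + h * k3[0], v + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return y

lo, hi = 0.0, 2.0                     # bracket for the unknown y'(0)
for _ in range(60):                   # bisection on the terminal mismatch
    mid = 0.5 * (lo + hi)
    if rk4_shoot(mid) < 1.0:
        lo = mid
    else:
        hi = mid
slope = 0.5 * (lo + hi)               # converges to y'(0) = 1 (y = sin t)
```

    A good initial guess shrinks the bracket (or, for Newton-type solvers, the basin the iteration must start in), which is exactly what the paper's shape-based co-state estimates provide.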

  3. Numerical Exposure Assessment Method for Low Frequency Range and Application to Wireless Power Transfer.

    PubMed

    Park, SangWook; Kim, Minhyuk

    2016-01-01

    In this paper, a numerical exposure assessment method is presented for quasi-static analysis using the finite-difference time-domain (FDTD) algorithm. The proposed method combines the scattered-field FDTD method with a quasi-static approximation for analyzing low-frequency electromagnetic problems. It provides an effective tool to compute the electric fields induced in an anatomically realistic human voxel model exposed to an arbitrary non-uniform field source in the low frequency range. The method is verified, and excellent agreement with theoretical solutions is found for a dielectric sphere model exposed to a magnetic dipole source. The assessment method is then applied to a practical example: the electric fields, current densities, and specific absorption rates induced in a human head and body in close proximity to a 150-kHz wireless power transfer system for cell phone charging. The results are compared to the limits recommended by the International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the IEEE standard guidelines.
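    The leapfrog update structure that any FDTD scheme is built on can be shown in one dimension. This is the plain total-field Yee scheme in normalized units (unit Courant number), not the paper's scattered-field quasi-static variant; the grid size and Gaussian soft source are invented for illustration.

```python
# Minimal 1-D FDTD (Yee) leapfrog loop: H and E are updated alternately from
# each other's spatial differences. Normalized units, Courant number 1;
# illustrative only.
import math

nz, steps = 200, 150
ez = [0.0] * nz
hy = [0.0] * nz

for t in range(steps):
    for k in range(nz - 1):                     # H update (staggered half step)
        hy[k] += ez[k + 1] - ez[k]
    for k in range(1, nz):                      # E update
        ez[k] += hy[k] - hy[k - 1]
    ez[10] += math.exp(-((t - 30) ** 2) / 100)  # soft Gaussian source

peak = max(abs(e) for e in ez)                  # bounded: the scheme is stable
```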

  4. Numerical Exposure Assessment Method for Low Frequency Range and Application to Wireless Power Transfer

    PubMed Central

    Kim, Minhyuk

    2016-01-01

    In this paper, a numerical exposure assessment method is presented for quasi-static analysis using the finite-difference time-domain (FDTD) algorithm. The proposed method combines the scattered-field FDTD method with a quasi-static approximation for analyzing low-frequency electromagnetic problems. It provides an effective tool to compute the electric fields induced in an anatomically realistic human voxel model exposed to an arbitrary non-uniform field source in the low frequency range. The method is verified, and excellent agreement with theoretical solutions is found for a dielectric sphere model exposed to a magnetic dipole source. The assessment method is then applied to a practical example: the electric fields, current densities, and specific absorption rates induced in a human head and body in close proximity to a 150-kHz wireless power transfer system for cell phone charging. The results are compared to the limits recommended by the International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the IEEE standard guidelines. PMID:27898688

  5. Embedded WENO: A design strategy to improve existing WENO schemes

    NASA Astrophysics Data System (ADS)

    van Lith, Bart S.; ten Thije Boonkkamp, Jan H. M.; IJzerman, Wilbert L.

    2017-02-01

    Embedded WENO methods utilise all adjacent smooth substencils to construct a desirable interpolation. Conventional WENO schemes under-use this possibility close to large gradients or discontinuities. We develop a general approach for constructing embedded versions of existing WENO schemes. Embedded methods based on the WENO schemes of Jiang and Shu [1] and on the WENO-Z scheme of Borges et al. [2] are explicitly constructed. Several possible choices are presented that result in either better spectral properties or a higher order of convergence for sufficiently smooth solutions. Moreover, these improvements carry over to discontinuous solutions. The embedded methods are demonstrated to be genuine improvements over their standard counterparts through several numerical examples. All the embedded methods presented require no added computational effort compared to their standard counterparts.
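    For reference, the conventional scheme that the embedded variants build on is the fifth-order WENO reconstruction of Jiang and Shu. The sketch below is that standard method (substencil candidates, smoothness indicators, nonlinear weights), not the paper's embedded construction.

```python
# Classic Jiang-Shu fifth-order WENO reconstruction of the interface value
# v_{i+1/2} from five cell averages. This is the conventional base scheme,
# not the embedded variant proposed in the paper.

def weno5(v, eps=1e-6):
    """Reconstruct the left-biased value at x_{i+1/2} from v = [v_{i-2}..v_{i+2}]."""
    # Candidate third-order reconstructions on the three substencils.
    q0 = (2 * v[0] - 7 * v[1] + 11 * v[2]) / 6
    q1 = (-v[1] + 5 * v[2] + 2 * v[3]) / 6
    q2 = (2 * v[2] + 5 * v[3] - v[4]) / 6
    # Smoothness indicators (large where a substencil crosses a discontinuity).
    b0 = 13 / 12 * (v[0] - 2 * v[1] + v[2]) ** 2 + 0.25 * (v[0] - 4 * v[1] + 3 * v[2]) ** 2
    b1 = 13 / 12 * (v[1] - 2 * v[2] + v[3]) ** 2 + 0.25 * (v[1] - v[3]) ** 2
    b2 = 13 / 12 * (v[2] - 2 * v[3] + v[4]) ** 2 + 0.25 * (3 * v[2] - 4 * v[3] + v[4]) ** 2
    # Nonlinear weights biased toward the ideal weights (1/10, 6/10, 3/10).
    a0 = 0.1 / (eps + b0) ** 2
    a1 = 0.6 / (eps + b1) ** 2
    a2 = 0.3 / (eps + b2) ** 2
    s = a0 + a1 + a2
    return (a0 * q0 + a1 * q1 + a2 * q2) / s

smooth = weno5([1.0, 2.0, 3.0, 4.0, 5.0])   # linear data: every substencil agrees
jumpy = weno5([0.0, 0.0, 0.0, 1.0, 1.0])    # weights avoid the discontinuity
```

    On smooth (here linear) data all three substencils agree and the weights revert to the ideal values; near a jump the smoothness indicators suppress the offending substencils, which is precisely the mechanism the embedded schemes refine.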

  6. Reliability of engineering methods of assessing the critical buckling load of steel beams

    NASA Astrophysics Data System (ADS)

    Rzeszut, Katarzyna; Folta, Wiktor; Garstecki, Andrzej

    2018-01-01

    In this paper the reliability assessment of the buckling resistance of steel beams is presented. A number of parameters are considered, such as the boundary conditions, the section height-to-width ratio, the thickness and the span. The examples are solved using FEM procedures and formulas proposed in the literature and standards. In the numerical models the following parameters are investigated: support conditions, mesh size, load conditions and steel grade. The numerical results are compared with approximate solutions calculated according to the standard formulas. It was observed that for high-slenderness sections the deformation of the cross-section has to be described by the following modes: longitudinal and transverse displacement, warping, rotation and distortion of the cross-section shape. In this case we face an interactive buckling problem. Unfortunately, neither the EN Standards nor the subject literature give closed-form formulas for these problems. For this reason the reliability of the critical bending moment calculations is discussed.

  7. Uncertainty Analysis of Decomposing Polyurethane Foam

    NASA Technical Reports Server (NTRS)

    Hobbs, Michael L.; Romero, Vicente J.

    2000-01-01

    Sensitivity/uncertainty analyses are necessary to determine where to allocate resources for improved predictions in support of our nation's nuclear safety mission. Yet, sensitivity/uncertainty analyses are not commonly performed on complex combustion models because the calculations are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, a variety of sensitivity/uncertainty analyses were used to determine the uncertainty associated with thermal decomposition of polyurethane foam exposed to high radiative flux boundary conditions. The polyurethane used in this study is a rigid closed-cell foam used as an encapsulant. Related polyurethane binders such as Estane are used in many energetic materials of interest to the JANNAF community. The complex, finite element foam decomposition model used in this study has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state decomposition front velocity calculated as the derivative of the decomposition front location versus time. An analytical mean value sensitivity/uncertainty (MV) analysis was used to determine the standard deviation by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation was essentially determined from a second derivative that was extremely sensitive to numerical noise. To minimize the numerical noise, 50-micrometer element dimensions and approximately 1-msec time steps were required to obtain stable uncertainty results. As an alternative method to determine the uncertainty and sensitivity in the decomposition front velocity, surrogate response surfaces were generated for use with a constrained Latin Hypercube Sampling (LHS) technique. 
Two surrogate response surfaces were investigated: 1) a linear surrogate response surface (LIN) and 2) a quadratic response surface (QUAD). The LHS techniques do not require derivatives of the response variable and are consequently relatively insensitive to numerical noise. To compare the LIN and QUAD methods to the MV method, a direct LHS analysis (DLHS) was performed using the fully grid- and timestep-resolved finite element model. The surrogate response models (LIN and QUAD) are shown to give acceptable values of the mean and standard deviation when compared to the fully converged DLHS model.
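    The plain (unconstrained) Latin Hypercube Sampling idea underlying the analysis can be sketched in a few lines: n samples in [0, 1)^d with exactly one sample in each of the n equal-probability strata of every dimension. This illustrates the general technique, not the paper's constrained variant; sizes are arbitrary.

```python
# Minimal Latin Hypercube Sampling sketch using only the standard library.
# Plain LHS, not the paper's constrained variant; sizes are illustrative.
import random

def latin_hypercube(n, d, rng=random.Random(0)):
    samples = [[0.0] * d for _ in range(n)]
    for j in range(d):
        strata = list(range(n))
        rng.shuffle(strata)                    # random stratum order per dimension
        for i in range(n):
            # One point uniformly inside stratum [k/n, (k+1)/n).
            samples[i][j] = (strata[i] + rng.random()) / n
    return samples

pts = latin_hypercube(10, 3)
```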

  8. Alternating Direction Implicit (ADI) schemes for a PDE-based image osmosis model

    NASA Astrophysics Data System (ADS)

    Calatroni, L.; Estatico, C.; Garibaldi, N.; Parisotto, S.

    2017-10-01

    We consider Alternating Direction Implicit (ADI) splitting schemes to compute efficiently the numerical solution of the PDE osmosis model considered by Weickert et al. in [10] for several imaging applications. The discretised scheme is shown to preserve properties analogous to those of the continuous model. The dimensional splitting strategy translates numerically into the solution of simple tridiagonal systems, for which standard matrix factorisation techniques can be used to improve upon the performance of classical implicit methods, even for large time steps. Applications to the shadow removal problem are presented.

  9. Accelerating 4D flow MRI by exploiting vector field divergence regularization.

    PubMed

    Santelli, Claudio; Loecher, Michael; Busch, Julia; Wieben, Oliver; Schaeffter, Tobias; Kozerke, Sebastian

    2016-01-01

    The aim of this work is to improve velocity vector field reconstruction from undersampled four-dimensional (4D) flow MRI by penalizing divergence of the measured flow field. Iterative image reconstruction was implemented in which magnitude and phase are regularized separately in alternating iterations. The approach allows incorporating prior knowledge of the flow field being imaged. In the present work, velocity data were regularized to reduce divergence, using either divergence-free wavelets (DFW) or a finite difference (FD) method based on the ℓ1-norm of divergence and curl. The reconstruction methods were tested on a numerical phantom and on in vivo data. Results of the DFW and FD approaches were compared with data obtained with standard compressed sensing (CS) reconstruction. Relative to standard CS, directional errors of vector fields and divergence were reduced by 55-60% and 38-48% for three- and six-fold undersampled data with the DFW and FD methods. Velocity vector displays of the numerical phantom and in vivo data were found to be improved upon DFW or FD reconstruction. Regularization of vector field divergence in image reconstruction from undersampled 4D flow data is a valuable approach to improve the reconstruction accuracy of velocity vector fields. © 2014 Wiley Periodicals, Inc.
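    The quantity the regularizers drive toward zero is the discrete divergence of the velocity field. A minimal centered-difference sketch, on an invented rigid-rotation test field (u, v) = (-y, x) that is analytically divergence-free:

```python
# Centered-difference divergence of a 2-D velocity field. The rigid-rotation
# field (u, v) = (-y, x) is divergence-free; grid and field are illustrative,
# not from the paper.

n, h = 32, 1.0 / 32
x = [[i * h for i in range(n)] for j in range(n)]
y = [[j * h for i in range(n)] for j in range(n)]
u = [[-y[j][i] for i in range(n)] for j in range(n)]   # u = -y
v = [[x[j][i] for i in range(n)] for j in range(n)]    # v =  x

div_max = 0.0
for j in range(1, n - 1):
    for i in range(1, n - 1):
        dudx = (u[j][i + 1] - u[j][i - 1]) / (2 * h)
        dvdy = (v[j + 1][i] - v[j - 1][i]) / (2 * h)
        div_max = max(div_max, abs(dudx + dvdy))
```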

  10. Variational symplectic algorithm for guiding center dynamics in the inner magnetosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Jinxing; Pu Zuyin; Xie Lun

    Charged particle dynamics in the magnetosphere is multiscale in both time and space; therefore, numerical accuracy over a long integration time is required. A variational symplectic integrator (VSI) [H. Qin and X. Guan, Phys. Rev. Lett. 100, 035006 (2008) and H. Qin, X. Guan, and W. M. Tang, Phys. Plasmas 16, 042510 (2009)] for the guiding-center motion of charged particles in a general magnetic field is applied to study the dynamics of charged particles in the magnetosphere. Instead of discretizing the differential equations of the guiding-center motion, the action of the guiding-center motion is discretized and minimized to obtain the iteration rules for advancing the dynamics. The VSI conserves exactly a discrete Lagrangian symplectic structure and has better numerical properties over a long integration time, compared with standard integrators such as the standard and adaptive fourth-order Runge-Kutta (RK4) methods. Applying the VSI method to guiding-center dynamics in the inner magnetosphere, we can accurately calculate the particles' orbits for an arbitrarily long simulation time with good conservation properties. When a time-independent convection and corotation electric field is considered, the VSI method gives the accurate single-particle orbit, while the RK4 method gives an incorrect orbit due to its intrinsic error accumulation over a long integration time.

  11. Simulation of Liquid Droplet in Air and on a Solid Surface

    NASA Astrophysics Data System (ADS)

    Launglucknavalai, Kevin

    Although multiphase gas and liquid phenomena occur widely in engineering problems, many aspects of multiphase interaction, such as internal droplet dynamics, are still not quantified. This study aims to qualify the Lattice Boltzmann (LBM) Interparticle Potential multiphase computational method in order to build a foundation for future multiphase research. This study consists of two overall sections. The first section, in Chapter 2, focuses on understanding the LBM method and the Interparticle Potential model. It outlines the LBM method and how it relates to macroscopic fluid dynamics. The standard form of LBM is obtained, and the perturbation solution recovering the Navier-Stokes equations from the LBM equation is presented. Finally, the Interparticle Potential model is incorporated into the numerical LBM method. The second section, in Chapter 3, presents the verification and validation cases to confirm the behavior of the single-phase and multiphase LBM models. Experimental and analytical results for Poiseuille channel flow and flow over a cylinder are used to compare with the numerical results where possible. While presenting the numerical results, practical considerations such as converting LBM-scale variables to physical-scale variables are discussed. Multiphase results are verified using Laplace's law, and artificial behaviors of the model are explored. In this study, a better understanding of the LBM method and Interparticle Potential model is gained. This allows the numerical method to be used for comparison with experimental results in the future and provides a better understanding of multiphase physics overall.
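    The core loop of the standard (single-phase, BGK) lattice Boltzmann method described in Chapter 2 is a collision step followed by streaming. The sketch below is that generic D2Q9 scheme, without the Interparticle Potential forcing; initializing at equilibrium and checking that the uniform state is a fixed point is a minimal sanity test. Lattice size and relaxation time are arbitrary.

```python
# D2Q9 BGK collision-and-streaming loop (standard single-phase LBM sketch,
# no Interparticle Potential forcing). Uniform equilibrium is a fixed point.
import numpy as np

w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)              # D2Q9 weights
cx = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1])             # lattice velocities
cy = np.array([0, 0, 1, 0, -1, 1, 1, -1, -1])
tau, nx, ny = 0.8, 16, 16

def feq(rho, ux, uy):
    """Second-order equilibrium distribution."""
    cu = cx[:, None, None] * ux + cy[:, None, None] * uy  # c . u
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

rho = np.ones((nx, ny))
ux = np.zeros((nx, ny))
uy = np.zeros((nx, ny))
f = feq(rho, ux, uy)                                      # start at equilibrium

for _ in range(10):
    f += -(f - feq(rho, ux, uy)) / tau                    # BGK collision
    for k in range(9):                                    # periodic streaming
        f[k] = np.roll(np.roll(f[k], cx[k], axis=0), cy[k], axis=1)
    rho = f.sum(axis=0)                                   # macroscopic moments
    ux = (cx[:, None, None] * f).sum(axis=0) / rho
    uy = (cy[:, None, None] * f).sum(axis=0) / rho
```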

  12. Eighth-order explicit two-step hybrid methods with symmetric nodes and weights for solving orbital and oscillatory IVPs

    NASA Astrophysics Data System (ADS)

    Franco, J. M.; Rández, L.

    The construction of new two-step hybrid (TSH) methods of explicit type with symmetric nodes and weights for the numerical integration of orbital and oscillatory second-order initial value problems (IVPs) is analyzed. These methods attain algebraic order eight with a computational cost of six or eight function evaluations per step (one of the lowest costs known in the literature) and they are optimal among the TSH methods in the sense that they reach a certain order of accuracy with minimal cost per step. The new TSH schemes also have high dispersion and dissipation orders (greater than 8) so as to be adapted to the solution of IVPs with oscillatory solutions. The numerical experiments carried out with several orbital and oscillatory problems show that the new eighth-order explicit TSH methods are more efficient than other standard TSH or Numerov-type methods proposed in the scientific literature.

  13. Beyond single-stream with the Schrödinger method

    NASA Astrophysics Data System (ADS)

    Uhlemann, Cora; Kopp, Michael

    2016-10-01

    We investigate large scale structure formation of collisionless dark matter in the phase space description based on the Vlasov-Poisson equation. We present the Schrödinger method, originally proposed by Widrow and Kaiser (1993) as a numerical technique based on the Schrödinger-Poisson equation, as an analytical tool which is superior to the common standard pressureless fluid model. Whereas the dust model fails and develops singularities at shell crossing, the Schrödinger method encompasses multi-streaming and even virialization.

  14. Analysis of drift correction in different simulated weighing schemes

    NASA Astrophysics Data System (ADS)

    Beatrici, A.; Rebelo, A.; Quintão, D.; Cacais, F. L.; Loayza, V. M.

    2015-10-01

    In the calibration of high accuracy mass standards, weighing schemes are used to reduce or eliminate the effects of zero drift in mass comparators. There are different sources of drift and different methods for its treatment. Using numerical methods, drift functions were simulated and a random term was included in each function. A comparison between the results obtained from ABABAB and ABBA weighing series was carried out. The results show a better efficacy of the ABABAB method for drift with smooth variation and small randomness.
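    Why such schemes work can be seen with a purely linear drift: in an ABBA series the drift contributions cancel exactly in the mass-difference estimate. The numbers below are invented for illustration.

```python
# Why ABBA cancels a linear comparator drift: simulate readings with a
# constant drift per weighing slot and recover A - B exactly.
# Values are illustrative, not from the paper.

a, b = 1000.000012, 1000.000005   # "true" indications of standards A and B (g)
d = 3e-7                          # linear zero drift per weighing slot

r = [a + 0 * d, b + 1 * d, b + 2 * d, a + 3 * d]   # readings in A B B A order
est_abba = (r[0] - r[1] - r[2] + r[3]) / 2         # drift-free estimate of a - b
```

    Expanding the estimate, the drift terms sum to (-d - 2d + 3d)/2 = 0, so only a - b survives; a quadratic or random drift component would not cancel, which is what the paper's simulations probe.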

  15. Perception of Science Standards' Effectiveness and Their Implementation by Science Teachers

    NASA Astrophysics Data System (ADS)

    Klieger, Aviva; Yakobovitch, Anat

    2011-06-01

    The introduction of standards into the education system poses numerous challenges and difficulties. As with any change, plans should be made for teachers to understand and implement the standards. This study examined science teachers' perceptions of the effectiveness of the standards for teaching and learning, and the extent and ease/difficulty of implementing science standards in different grades. The research used a mixed methods approach, combining qualitative and quantitative research methods. The research tools were questionnaires that were administered to elementary school science teachers. The majority of the teachers perceived the standards in science as effective for teaching and learning and only a small minority viewed them as restricting their pedagogical autonomy. Differences were found in the extent of implementation of the different standards and between different grades. The teachers perceived a different degree of difficulty in the implementation of the different standards. The standards experienced as easiest to implement were in the field of biology and materials, whereas the standards in earth sciences and the universe and technology were most difficult to implement, and are also those evaluated by the teachers as being implemented to the least extent. Exposure of teachers' perceptions on the effectiveness of standards and the implementation of the standards may aid policymakers in future planning of teachers' professional development for the implementation of standards.

  16. Variational optical flow computation in real time.

    PubMed

    Bruhn, Andrés; Weickert, Joachim; Feddern, Christian; Kohlberger, Timo; Schnörr, Christoph

    2005-05-01

    This paper investigates the usefulness of bidirectional multigrid methods for variational optical flow computations. Although these numerical schemes are among the fastest methods for solving equation systems, they are rarely applied in the field of computer vision. We demonstrate how to employ those numerical methods for the treatment of variational optical flow formulations and show that the efficiency of this approach even allows for real-time performance on standard PCs. As a representative for variational optic flow methods, we consider the recently introduced combined local-global method. It can be considered as a noise-robust generalization of the Horn and Schunck technique. We present a decoupled, as well as a coupled, version of the classical Gauss-Seidel solver, and we develop several multigrid implementations based on a discretization coarse grid approximation. In contrast with standard bidirectional multigrid algorithms, we take advantage of intergrid transfer operators that allow for nondyadic grid hierarchies. As a consequence, no restrictions concerning the image size or the number of traversed levels have to be imposed. In the experimental section, we juxtapose the developed multigrid schemes and demonstrate their superior performance when compared to unidirectional multigrid methods and nonhierarchical solvers. For the well-known 316 x 252 Yosemite sequence, we succeeded in computing the complete set of dense flow fields in three quarters of a second on a 3.06-GHz Pentium 4 PC. This corresponds to a frame rate of 18 flow fields per second, which outperforms the widely used Gauss-Seidel method by almost three orders of magnitude.
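    The bidirectional (coarse-grid correction) idea behind such solvers can be sketched with a two-grid cycle on a model problem: smooth on the fine grid, solve the residual equation on a coarser grid, interpolate the correction back, and smooth again. The model problem (1-D Poisson), the weighted-Jacobi smoother, and all sizes below are illustrative choices, not the paper's optical-flow setup.

```python
# Two-grid sketch of the multigrid coarse-grid-correction idea on 1-D Poisson
# -u'' = f, u(0) = u(1) = 0, with f chosen so the exact solution is sin(pi x).
# Smoother, sweep counts and grid size are illustrative.
import math

def apply_A(u, h):
    """Matrix-free 1-D Laplacian (-u'' with zero Dirichlet boundaries)."""
    n = len(u)
    return [(2 * u[i] - (u[i - 1] if i > 0 else 0.0)
                      - (u[i + 1] if i < n - 1 else 0.0)) / h**2 for i in range(n)]

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Weighted-Jacobi smoothing sweeps (damps oscillatory error modes)."""
    for _ in range(sweeps):
        Au = apply_A(u, h)
        u = [u[i] + w * h * h / 2 * (f[i] - Au[i]) for i in range(len(u))]
    return u

def restrict(r):
    """Full-weighting restriction: n fine points -> (n - 1) // 2 coarse points."""
    return [0.25 * (r[2 * j] + 2 * r[2 * j + 1] + r[2 * j + 2])
            for j in range((len(r) - 1) // 2)]

def prolong(e, n):
    """Linear interpolation of a coarse correction back to n fine points."""
    u = [0.0] * n
    for j in range(len(e)):
        u[2 * j + 1] = e[j]
    for i in range(0, n, 2):
        u[i] = 0.5 * ((u[i - 1] if i > 0 else 0.0) + (u[i + 1] if i < n - 1 else 0.0))
    return u

def solve_exact(f, h):
    """Direct tridiagonal (Thomas) solve of the coarse residual equation."""
    n = len(f)
    d = [h * h * v for v in f]
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = -0.5, d[0] / 2.0
    for i in range(1, n):
        m = 2.0 + cp[i - 1]
        cp[i] = -1.0 / m
        dp[i] = (d[i] + dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def two_grid(u, f, h, nu=3):
    u = jacobi(u, f, h, nu)                                   # pre-smoothing
    r = [f[i] - a for i, a in enumerate(apply_A(u, h))]       # fine residual
    e = prolong(solve_exact(restrict(r), 2 * h), len(u))      # coarse correction
    u = [u[i] + e[i] for i in range(len(u))]
    return jacobi(u, f, h, nu)                                # post-smoothing

n = 63
h = 1.0 / (n + 1)
f = [math.pi**2 * math.sin(math.pi * (i + 1) * h) for i in range(n)]
u = [0.0] * n
r0 = max(abs(f[i] - a) for i, a in enumerate(apply_A(u, h)))
for _ in range(8):
    u = two_grid(u, f, h)
r_end = max(abs(f[i] - a) for i, a in enumerate(apply_A(u, h)))
```

    A full multigrid V-cycle simply applies this idea recursively instead of solving the coarse problem directly, and the paper's nondyadic transfer operators generalize `restrict`/`prolong` to grids that are not factor-of-two refinements.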

  17. Numerical simulation of jet aerodynamics using the three-dimensional Navier-Stokes code PAB3D

    NASA Technical Reports Server (NTRS)

    Pao, S. Paul; Abdol-Hamid, Khaled S.

    1996-01-01

    This report presents a unified method for subsonic and supersonic jet analysis using the three-dimensional Navier-Stokes code PAB3D. The Navier-Stokes code was used to obtain solutions for axisymmetric jets with on-design operating conditions at Mach numbers ranging from 0.6 to 3.0, supersonic jets containing weak shocks and Mach disks, and supersonic jets with nonaxisymmetric nozzle exit geometries. This report discusses computational methods, code implementation, computed results, and comparisons with available experimental data. Very good agreement is shown between the numerical solutions and available experimental data over a wide range of operating conditions. The Navier-Stokes method using the standard Jones-Launder two-equation kappa-epsilon turbulence model can accurately predict jet flow, and such predictions are made without any modification to the published constants for the turbulence model.

  18. Optimization of droplets for UV-NIL using coarse-grain simulation of resist flow

    NASA Astrophysics Data System (ADS)

    Sirotkin, Vadim; Svintsov, Alexander; Zaitsev, Sergey

    2009-03-01

    A mathematical model and numerical method are described which make it possible to simulate the ultraviolet ("step and flash") nanoimprint lithography (UV-NIL) process adequately, even on standard personal computers. The model is derived from the 3D Navier-Stokes equations under the assumption that the resist motion is largely directed along the substrate surface and characterized by ultra-low values of the Reynolds number. For the numerical approximation of the model, a special finite difference (coarse-grain) method is applied. A coarse-grain modeling tool for detailed analysis of resist spreading in UV-NIL at the structure-scale level is tested. The obtained results demonstrate the ability of the tool to calculate the optimal dispensing for a given stamp design and process parameters. This dispensing provides uniformly filled areas and a homogeneous residual layer thickness in UV-NIL.

  19. A Numerical Combination of Extended Boundary Condition Method and Invariant Imbedding Method Applied to Light Scattering by Large Spheroids and Cylinders

    NASA Technical Reports Server (NTRS)

    Bi, Lei; Yang, Ping; Kattawar, George W.; Mishchenko, Michael I.

    2013-01-01

    The extended boundary condition method (EBCM) and invariant imbedding method (IIM) are two fundamentally different T-matrix methods for the solution of light scattering by nonspherical particles. The standard EBCM is very efficient but encounters a loss of precision when the particle size is large, the maximum size being sensitive to the particle aspect ratio. The IIM can be applied to particles in a relatively large size parameter range but requires extensive computational time due to the number of spherical layers in the particle volume discretization. A numerical combination of the EBCM and the IIM (hereafter, the EBCM+IIM) is proposed to overcome the aforementioned disadvantages of each method. Even though the EBCM can fail to obtain the T-matrix of a considered particle, it is valuable for decreasing the computational domain (i.e., the number of spherical layers) of the IIM by providing the initial T-matrix associated with an iterative procedure in the IIM. The EBCM+IIM is demonstrated to be more efficient than the IIM in obtaining the optical properties of large size parameter particles beyond the convergence limit of the EBCM. The numerical performance of the EBCM+IIM is illustrated through representative calculations in spheroidal and cylindrical particle cases.

  20. Comparative study of three modified numerical spectrophotometric methods: An application on pharmaceutical ternary mixture of aspirin, atorvastatin and clopedogrel

    NASA Astrophysics Data System (ADS)

    Issa, Mahmoud Mohamed; Nejem, R.'afat Mahmoud; Shanab, Alaa Abu; Hegazy, Nahed Diab; Stefan-van Staden, Raluca-Ioana

    2014-07-01

    Three novel numerical methods were developed for the spectrophotometric multi-component analysis of capsules and synthetic mixtures of aspirin, atorvastatin and clopedogrel without any chemical separation. The subtraction method is based on the relationship between the difference in absorbance at four wavelengths and the corresponding concentration of analyte. In this method, the linear determination ranges were 0.8-40 μg mL-1 aspirin, 0.8-30 μg mL-1 atorvastatin and 0.5-30 μg mL-1 clopedogrel. In the quotient method, 0.8-40 μg mL-1 aspirin, 0.8-30 μg mL-1 atorvastatin and 1.0-30 μg mL-1 clopedogrel were determined from spectral data at the wavelength pairs that show the same ratio of absorbance for the other two species. The standard addition method was used for resolving a ternary mixture of 1.0-40 μg mL-1 aspirin, 0.8-30 μg mL-1 atorvastatin and 2.0-30 μg mL-1 clopedogrel. The proposed methods were validated. The reproducibility and repeatability were found satisfactory, as evidenced by low values of relative standard deviation (<2%). Recovery was found to be in the range 99.6-100.8%. By adopting these methods, the time taken for analysis was reduced, as they involve very few steps. The developed methods were applied for simultaneous analysis of aspirin, atorvastatin and clopedogrel in capsule dosage forms, and the results were in good concordance with an alternative liquid chromatography method.
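    The numerical evaluation behind a standard addition experiment is a least-squares line through (spiked concentration, absorbance) points, with the unknown concentration recovered from the extrapolated x-intercept (intercept/slope). The synthetic absorbances below are invented for illustration, not data from the paper.

```python
# Standard addition evaluation sketch: fit absorbance vs spiked concentration
# and extrapolate to recover the unknown sample concentration.
# Synthetic data: A = 0.024 * (10 + c_added), i.e. true unknown = 10 ug/mL.

c_added = [0.0, 5.0, 10.0, 15.0, 20.0]        # spiked concentration, ug/mL
absorb = [0.240, 0.360, 0.480, 0.600, 0.720]  # measured absorbance (invented)

n = len(c_added)
mx = sum(c_added) / n
my = sum(absorb) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(c_added, absorb))
         / sum((x - mx) ** 2 for x in c_added))           # least-squares slope
intercept = my - slope * mx
c_unknown = intercept / slope                  # extrapolated sample concentration
```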

  1. Time delayed Ensemble Nudging Method

    NASA Astrophysics Data System (ADS)

    An, Zhe; Abarbanel, Henry

    Optimal nudging methods based on time-delay embedding theory have shown potential for analysis and data assimilation in previous literature. To extend the application and promote practical implementation, a new nudging assimilation method based on the time-delay embedding space is presented, and its connection with other standard assimilation methods is studied. Results show that incorporating information from the time series of data can reduce the number of observations needed to preserve the quality of numerical prediction, making it a potential alternative in the field of data assimilation for large geophysical models.
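For context, standard (non-delayed) nudging adds a relaxation term pulling the model state toward observations. A minimal sketch on the Lorenz-63 system, assuming only the first component is observed and a hypothetical gain of 50:

```python
import numpy as np

def lorenz(x, s=10.0, r=28.0, b=8.0 / 3.0):
    # classic Lorenz-63 right-hand side
    return np.array([s * (x[1] - x[0]),
                     x[0] * (r - x[2]) - x[1],
                     x[0] * x[1] - b * x[2]])

dt, nsteps = 0.002, 5000

# "Truth" trajectory supplying observations of the first component only.
truth = np.empty((nsteps + 1, 3))
truth[0] = (1.0, 1.0, 1.0)
for k in range(nsteps):
    truth[k + 1] = truth[k] + dt * lorenz(truth[k])

def run(x0, gain=0.0):
    """Euler integration with a nudging term on the observed variable."""
    x = np.array(x0, dtype=float)
    for k in range(nsteps):
        dx = lorenz(x)
        dx[0] += gain * (truth[k][0] - x[0])  # relax toward the observation
        x = x + dt * dx
    return x

x0_bad = truth[0] + np.array([1.0, -1.0, 1.0])  # wrong initial condition
free = run(x0_bad, gain=0.0)
nudged = run(x0_bad, gain=50.0)
err_free = np.linalg.norm(free - truth[-1])
err_nudged = np.linalg.norm(nudged - truth[-1])
```

The free run diverges chaotically from the truth while the nudged run synchronizes with it; the time-delayed variant of the abstract aims to achieve this with fewer observed variables.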

  2. Streaming potential modeling in fractured rock: Insights into the identification of hydraulically active fractures

    NASA Astrophysics Data System (ADS)

    Roubinet, D.; Linde, N.; Jougnot, D.; Irving, J.

    2016-05-01

    Numerous field experiments suggest that the self-potential (SP) geophysical method may allow for the detection of hydraulically active fractures and provide information about fracture properties. However, a lack of suitable numerical tools for modeling streaming potentials in fractured media prevents quantitative interpretation and limits our understanding of how the SP method can be used in this regard. To address this issue, we present a highly efficient two-dimensional discrete-dual-porosity approach for solving the fluid flow and associated self-potential problems in fractured rock. Our approach is specifically designed for complex fracture networks that cannot be investigated using standard numerical methods. We then simulate SP signals associated with pumping conditions for a number of examples to show that (i) accounting for matrix fluid flow is essential for accurate SP modeling and (ii) the sensitivity of SP to hydraulically active fractures is intimately linked with fracture-matrix fluid interactions. This implies that fractures associated with strong SP amplitudes are likely to be hydraulically conductive, attracting fluid flow from the surrounding matrix.

  3. Numerical solutions of ideal quantum gas dynamical flows governed by semiclassical ellipsoidal-statistical distribution

    PubMed Central

    Yang, Jaw-Yen; Yan, Chih-Yuan; Diaz, Manuel; Huang, Juan-Chen; Li, Zhihui; Zhang, Hanxin

    2014-01-01

    The ideal quantum gas dynamics as manifested by the semiclassical ellipsoidal-statistical (ES) equilibrium distribution derived in Wu et al. (Wu et al. 2012 Proc. R. Soc. A 468, 1799–1823 (doi:10.1098/rspa.2011.0673)) is numerically studied for particles of three statistics. This anisotropic ES equilibrium distribution was derived using the maximum entropy principle and conserves the mass, momentum and energy, but differs from the standard Fermi–Dirac or Bose–Einstein distribution. The present numerical method combines the discrete velocity (or momentum) ordinate method in momentum space and the high-resolution shock-capturing method in physical space. A decoding procedure to obtain the necessary parameters for determining the ES distribution is also devised. Computations of two-dimensional Riemann problems are presented, and various contours of the quantities unique to this ES model are illustrated. The main flow features, such as shock waves, expansion waves and slip lines and their complex nonlinear interactions, are depicted and found to be consistent with existing calculations for a classical gas. PMID:24399919
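The three particle statistics mentioned above share one isotropic equilibrium form, which the paper's anisotropic ES distribution generalizes. A minimal sketch of the standard occupation numbers (not the ES distribution itself), with illustrative values of chemical potential and temperature:

```python
import numpy as np

def equilibrium(eps, mu, T, theta):
    """Occupation number for theta = +1 (Fermi-Dirac), -1 (Bose-Einstein),
    or 0 (Maxwell-Boltzmann); eps is particle energy, mu chemical potential,
    T temperature (all in consistent units)."""
    return 1.0 / (np.exp((eps - mu) / T) + theta)

eps = np.linspace(0.1, 5.0, 200)
fd = equilibrium(eps, mu=1.0, T=0.5, theta=+1.0)  # fermions: occupation <= 1
be = equilibrium(eps, mu=0.0, T=0.5, theta=-1.0)  # bosons: enhanced occupation
mb = equilibrium(eps, mu=0.0, T=0.5, theta=0.0)   # classical limit
```

Pauli exclusion keeps the Fermi-Dirac occupation below one, while Bose-Einstein occupation always exceeds the classical Maxwell-Boltzmann value at the same energy.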

  4. Noiseless Vlasov-Poisson simulations with linearly transformed particles

    DOE PAGES

    Pinto, Martin C.; Sonnendrucker, Eric; Friedman, Alex; ...

    2014-06-25

    We introduce a deterministic discrete-particle simulation approach, the Linearly-Transformed Particle-In-Cell (LTPIC) method, that employs linear deformations of the particles to reduce the noise traditionally associated with particle schemes. Formally, transforming the particles is justified by local first order expansions of the characteristic flow in phase space. In practice the method amounts to using deformation matrices within the particle shape functions; these matrices are updated via local evaluations of the forward numerical flow. Because it is necessary to periodically remap the particles on a regular grid to avoid excessively deforming their shapes, the method can be seen as a development of Denavit's Forward Semi-Lagrangian (FSL) scheme (Denavit, 1972 [8]). However, it has recently been established (Campos Pinto, 2012 [20]) that the underlying Linearly-Transformed Particle scheme converges for abstract transport problems, with no need to remap the particles; deforming the particles can thus be seen as a way to significantly lower the remapping frequency needed in the FSL schemes, and hence the associated numerical diffusion. To couple the method with electrostatic field solvers, two specific charge deposition schemes are examined, and their performance compared with that of the standard deposition method. Numerical 1d1v simulations involving benchmark test cases and halo formation in an initially mismatched thermal sheet beam demonstrate some advantages of our LTPIC scheme over the classical PIC and FSL methods. Lastly, the benchmark test cases also indicate that, for numerical choices involving similar computational effort, the LTPIC method is capable of accuracy comparable to or exceeding that of state-of-the-art, high-resolution Vlasov schemes.
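The "standard deposition method" used as the baseline comparison is ordinary cloud-in-cell (CIC) deposition. A minimal 1-D periodic sketch of that baseline (the LTPIC schemes of the paper modify this step with per-particle deformation matrices, which are not reproduced here):

```python
import numpy as np

def deposit_cic(positions, weights, ngrid, length):
    """Standard cloud-in-cell charge deposition on a periodic 1-D grid:
    each particle's weight is split linearly between its two nearest nodes."""
    dx = length / ngrid
    rho = np.zeros(ngrid)
    for xp, w in zip(positions, weights):
        s = xp / dx
        i = int(np.floor(s)) % ngrid
        frac = s - np.floor(s)
        rho[i] += w * (1.0 - frac)
        rho[(i + 1) % ngrid] += w * frac
    return rho / dx  # charge density, not raw charge

rng = np.random.default_rng(1)
L, N, G = 1.0, 1000, 32
pos = rng.uniform(0, L, N)
w = np.full(N, 1.0 / N)          # unit total charge
rho = deposit_cic(pos, w, G, L)
```

Whatever the shape function, a deposition scheme must conserve total charge; integrating the density over the grid recovers the summed particle weights.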

  5. A Superior Kirchhoff Method for Aeroacoustic Noise Prediction: The Ffowcs Williams-Hawkings Equation

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.

    1997-01-01

    The prediction of aeroacoustic noise is important; all new aircraft must meet noise certification requirements. Local noise standards can be even more stringent. The NASA noise reduction goal is to reduce perceived noise levels by a factor of two in 10 years. The objective of this viewgraph presentation is to demonstrate the superiority of the FW-H approach over the Kirchhoff method for aeroacoustics, both analytically and numerically.

  6. Computational domain discretization in numerical analysis of flow within granular materials

    NASA Astrophysics Data System (ADS)

    Sosnowski, Marcin

    2018-06-01

    The discretization of the computational domain is a crucial step in Computational Fluid Dynamics (CFD) because it influences not only the numerical stability of the analysed model but also the agreement between the obtained results and real data. Modelling flow in packed beds of granular materials is a very challenging task in terms of discretization due to the existence of narrow spaces between spherical granules contacting tangentially at a single point. The standard approach to this issue results in a low-quality mesh and, in consequence, unreliable results. Therefore the common method is to reduce the diameter of the modelled granules in order to eliminate the single-point contact between the individual granules. The drawback of such a method is the distortion of the flow and of the contact heat resistance, among others. Therefore an innovative method is proposed in the paper: the single-point contact is extended to a cylinder-shaped contact volume. Such an approach eliminates the low-quality mesh elements and simultaneously introduces only slight distortion to the flow as well as to the contact heat transfer. The performed analysis of numerous test cases proves the great potential of the proposed method of meshing packed beds of granular materials.

  7. Seismic loading due to mining: Wave amplification and vibration of structures

    NASA Astrophysics Data System (ADS)

    Lokmane, N.; Semblat, J.-F.; Bonnet, G.; Driad, L.; Duval, A.-M.

    2003-04-01

    A vibration induced by ground motion, whatever its source, can in certain cases damage surface structures. The scientific works allowing the analysis of this phenomenon are numerous and well established; however, they generally concern dynamic motion from real earthquakes. The goal of this work is to analyse the impact of mining-induced shaking on structures located at the surface. The methods for assessing the consequences of large-amplitude earthquakes are well established, whereas the methodology for estimating the consequences of moderate but frequent dynamic loadings is not well defined. Mining operations such as those of the "Houillères de Bassin du Centre et du Midi" (HBCM) produce vibrations that are regularly felt at the surface. Coal extraction generates shaking similar to that caused by earthquakes (standard waves and propagation laws) but of rather low magnitude. On the other hand, its recurrent character makes the vibrations more harmful. A three-dimensional model of a standard structure of the site was built. The first results show that the fundamental frequencies of this structure are compatible with the amplification measurements carried out on site. The motion amplification in the surface soil layers is then analyzed. The modeling is performed on the surface soil layers of Gardanne (Provence), where measurements of microtremors were made. The analysis of the H/V spectral ratio (horizontal over vertical component) makes it possible to characterize the fundamental frequencies of the surface soil layers. This experiment also allows characterization of the local evolution of the amplification induced by the topmost soil layers. The numerical method we consider to model seismic wave propagation and amplification at the site is the Boundary Element Method (BEM). The main advantage of the boundary element method is that it avoids the artificial truncation of the mesh (as in the Finite Element Method) in the case of an infinite medium. For dynamic problems, such truncations lead to spurious wave reflections that introduce numerical error into the solution. The experimental and numerical (BEM) results on surface motion amplification are then compared in terms of both amplitude and frequency range.
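The H/V spectral ratio mentioned above is straightforward to compute from three-component records: the quadratic mean of the horizontal amplitude spectra divided by the vertical one, with its peak marking the site's fundamental frequency. A sketch on synthetic data with a hypothetical amplified 2 Hz horizontal resonance:

```python
import numpy as np

def hv_ratio(north, east, vertical, fs):
    """H/V spectral ratio: quadratic mean of the horizontal amplitude
    spectra over the vertical amplitude spectrum."""
    N = np.abs(np.fft.rfft(north))
    E = np.abs(np.fft.rfft(east))
    V = np.abs(np.fft.rfft(vertical))
    freqs = np.fft.rfftfreq(len(north), d=1.0 / fs)
    hv = np.sqrt((N**2 + E**2) / 2.0) / np.maximum(V, 1e-12)
    return freqs, hv

# Synthetic records: horizontals carry a strongly amplified 2 Hz resonance.
fs, T = 100.0, 20.0
t = np.arange(0, T, 1.0 / fs)
rng = np.random.default_rng(2)
noise = lambda: rng.normal(scale=0.1, size=t.size)
vert = np.sin(2 * np.pi * 1.0 * t) + noise()
north = np.sin(2 * np.pi * 1.0 * t) + 5.0 * np.sin(2 * np.pi * 2.0 * t) + noise()
east = np.sin(2 * np.pi * 1.0 * t) + 5.0 * np.sin(2 * np.pi * 2.0 * t) + noise()

freqs, hv = hv_ratio(north, east, vert, fs)
f0 = freqs[np.argmax(hv)]  # estimated fundamental frequency
```

In practice the spectra would be smoothed and averaged over many windows; the single-window version here is only the core of the technique.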

  8. BOKASUN: A fast and precise numerical program to calculate the Master Integrals of the two-loop sunrise diagrams

    NASA Astrophysics Data System (ADS)

    Caffo, Michele; Czyż, Henryk; Gunia, Michał; Remiddi, Ettore

    2009-03-01

    We present the program BOKASUN for fast and precise evaluation of the Master Integrals of the two-loop self-mass sunrise diagram for arbitrary values of the internal masses and the external four-momentum. We use a combination of two methods: a Bernoulli accelerated series expansion and a Runge-Kutta numerical solution of a system of linear differential equations. Program summaryProgram title: BOKASUN Catalogue identifier: AECG_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AECG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 9404 No. of bytes in distributed program, including test data, etc.: 104 123 Distribution format: tar.gz Programming language: FORTRAN77 Computer: Any computer with a Fortran compiler accepting FORTRAN77 standard. Tested on various PC's with LINUX Operating system: LINUX RAM: 120 kbytes Classification: 4.4 Nature of problem: Any integral arising in the evaluation of the two-loop sunrise Feynman diagram can be expressed in terms of a given set of Master Integrals, which should be calculated numerically. The program provides a fast and precise evaluation method of the Master Integrals for arbitrary (but not vanishing) masses and arbitrary value of the external momentum. Solution method: The integrals depend on three internal masses and the external momentum squared p. The method is a combination of an accelerated expansion in 1/p in its (pretty large!) region of fast convergence and of a Runge-Kutta numerical solution of a system of linear differential equations. 
Running time: To obtain 4 Master Integrals on a PC with a 2 GHz processor, it takes 3 μs for the series expansion with pre-calculated coefficients, 80 μs for the series expansion without pre-calculated coefficients, and from a few seconds up to a few minutes for the Runge-Kutta method (depending on the required accuracy and the values of the physical parameters).
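The Runge-Kutta component of the solution method integrates a system of linear differential equations. A generic sketch of classic fourth-order Runge-Kutta on a linear system y' = Ay (illustrative only; BOKASUN's actual system and its Fortran implementation are far more involved), validated on a harmonic oscillator with known solution:

```python
import numpy as np

def rk4_linear(A, y0, t_end, nsteps):
    """Classic fourth-order Runge-Kutta for the linear system y' = A y."""
    h = t_end / nsteps
    y = np.array(y0, dtype=float)
    for _ in range(nsteps):
        k1 = A @ y
        k2 = A @ (y + 0.5 * h * k1)
        k3 = A @ (y + 0.5 * h * k2)
        k4 = A @ (y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

# Harmonic oscillator y'' = -y as a first-order system; over one full
# period the exact solution returns to the initial state (1, 0).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
y = rk4_linear(A, [1.0, 0.0], t_end=2 * np.pi, nsteps=1000)
```

The O(h^4) global error explains the trade-off in the abstract: the series expansion is thousands of times faster where it converges, but Runge-Kutta covers arbitrary parameter values.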

  9. Direct Numerical Simulation of Incompressible Pipe Flow Using a B-Spline Spectral Method

    NASA Technical Reports Server (NTRS)

    Loulou, Patrick; Moser, Robert D.; Mansour, Nagi N.; Cantwell, Brian J.

    1997-01-01

    A numerical method based on B-spline polynomials was developed to study incompressible flows in cylindrical geometries. A B-spline method has the advantages of possessing spectral accuracy and the flexibility of standard finite element methods. Using this method it was possible to ensure regularity of the solution near the origin, i.e. smoothness and boundedness. Because B-splines have compact support, it is also possible to remove B-splines near the center to alleviate the constraint placed on the time step by an overly fine grid. Using the natural periodicity in the azimuthal direction and approximating the streamwise direction as periodic (so-called time-evolving flow) greatly reduced the cost and complexity of the computations. A direct numerical simulation of pipe flow was carried out using the method described above at a Reynolds number of 5600 based on diameter and bulk velocity. General knowledge of pipe flow and the availability of experimental measurements make pipe flow the ideal test case with which to validate the numerical method. Results indicated that the high flatness levels of the radial component of velocity in the near-wall region are physical; regions of high radial velocity were detected and appear to be related to high-speed streaks in the boundary layer. Budgets of the Reynolds stress transport equations showed close similarity with those of channel flow. However, contrary to channel flow, the log layer of pipe flow is not homogeneous at the present Reynolds number. A topological method based on a classification of the invariants of the velocity gradient tensor was used. Plotting iso-surfaces of the discriminant of the invariants proved to be a good method for identifying vortical eddies in the flow field.
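The compact support and partition-of-unity properties that make B-splines attractive here follow directly from the Cox-de Boor recursion. A minimal sketch evaluating a cubic B-spline basis on a clamped knot vector (generic, not the paper's cylindrical-geometry construction):

```python
import numpy as np

def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value of the i-th B-spline of degree k at t."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    out = 0.0
    d1 = knots[i + k] - knots[i]
    if d1 > 0:
        out += (t - knots[i]) / d1 * bspline_basis(i, k - 1, t, knots)
    d2 = knots[i + k + 1] - knots[i + 1]
    if d2 > 0:
        out += (knots[i + k + 1] - t) / d2 * bspline_basis(i + 1, k - 1, t, knots)
    return out

# Cubic B-splines on a clamped (open) knot vector over [0, 1].
degree = 3
interior = np.linspace(0.0, 1.0, 6)
knots = np.concatenate([[0.0] * degree, interior, [1.0] * degree])
nbasis = len(knots) - degree - 1
t = 0.37
vals = [bspline_basis(i, degree, t, knots) for i in range(nbasis)]
```

At any point, at most degree + 1 basis functions are nonzero and they sum to one; the compact support is what allows basis functions near the pipe axis to be removed, as the abstract describes.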

  10. Adaptive [theta]-methods for pricing American options

    NASA Astrophysics Data System (ADS)

    Khaliq, Abdul Q. M.; Voss, David A.; Kazmi, Kamran

    2008-12-01

    We develop adaptive [theta]-methods for solving the Black-Scholes PDE for American options. By adding a small, continuous term, the Black-Scholes PDE becomes an advection-diffusion-reaction equation on a fixed spatial domain. Standard implementation of [theta]-methods would require a Newton-type iterative procedure at each time step thereby increasing the computational complexity of the methods. Our linearly implicit approach avoids such complications. We establish a general framework under which [theta]-methods satisfy a discrete version of the positivity constraint characteristic of American options, and numerically demonstrate the sensitivity of the constraint. The positivity results are established for the single-asset and independent two-asset models. In addition, we have incorporated and analyzed an adaptive time-step control strategy to increase the computational efficiency. Numerical experiments are presented for one- and two-asset American options, using adaptive exponential splitting for two-asset problems. The approach is compared with an iterative solution of the two-asset problem in terms of computational efficiency.
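A theta-method treats a fraction theta of the spatial operator implicitly and the rest explicitly. A minimal sketch for the pure-diffusion model problem u_t = u_xx with a dense solve (a stand-in for the transformed Black-Scholes equation; the paper's linearly implicit treatment of the reaction term and the adaptive step control are not reproduced):

```python
import numpy as np

def theta_step(u, lam, theta):
    """One theta-method step for u_t = u_xx, homogeneous Dirichlet BCs,
    lam = dt/dx**2. theta = 0: explicit Euler, 0.5: Crank-Nicolson,
    1: implicit Euler."""
    n = len(u)
    T = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    A = np.eye(n) - theta * lam * T                 # implicit part
    b = (np.eye(n) + (1.0 - theta) * lam * T) @ u   # explicit part
    return np.linalg.solve(A, b)

nx, dt = 49, 1e-3
x = np.linspace(0.0, 1.0, nx + 2)[1:-1]   # interior grid points
u = np.maximum(np.sin(np.pi * x), 0.0)    # payoff-like nonnegative data
lam = dt / (x[1] - x[0]) ** 2             # lam = 2.5: explicit Euler unstable
for _ in range(100):
    u = theta_step(u, lam, theta=1.0)     # implicit Euler preserves positivity
```

With lam = 2.5, theta = 1 gives an M-matrix system whose solution stays nonnegative and satisfies a discrete maximum principle, the kind of positivity constraint the paper analyzes for American options.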

  11. Testing the conditional mass function of dark matter haloes against numerical N-body simulations

    NASA Astrophysics Data System (ADS)

    Tramonte, D.; Rubiño-Martín, J. A.; Betancort-Rijo, J.; Dalla Vecchia, C.

    2017-05-01

    We compare the predicted conditional mass function (CMF) of dark matter haloes from two theoretical prescriptions against numerical N-body simulations, both in overdense and underdense regions and at different Eulerian scales ranging from 5 to 30 h-1 Mpc. In particular, we consider in detail a locally implemented rescaling of the unconditional mass function (UMF) already discussed in the literature, and also a generalization of the standard rescaling method described in the extended Press-Schechter formalism. First, we test the consistency of these two rescalings by verifying the normalization of the CMF at different scales, and showing that none of the proposed cases provides a normalized CMF. In order to satisfy the normalization condition, we include a modification in the rescaling procedure. After this modification, the resulting CMF generally provides a better description of numerical results. We finally present an analytical fit to the ratio between the CMF and the UMF (also known as the matter-to-halo bias function) in underdense regions, which could be of special interest to speed up the computation of the halo abundance when studying void statistics. In this case, the CMF prescription based on the locally implemented rescaling provides a slightly better description of the numerical results when compared to the standard rescaling.

  13. Speeding up N-body simulations of modified gravity: chameleon screening models

    NASA Astrophysics Data System (ADS)

    Bose, Sownak; Li, Baojiu; Barreira, Alexandre; He, Jian-hua; Hellwing, Wojciech A.; Koyama, Kazuya; Llinares, Claudio; Zhao, Gong-Bo

    2017-02-01

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.
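The slow baseline the paper improves on is Newton-Gauss-Seidel relaxation. A minimal sketch of plain Gauss-Seidel sweeps for the linear model problem -u'' = f (the f(R) equation is nonlinear and three-dimensional, so this only illustrates the relaxation pattern and its slow convergence, not the paper's solver):

```python
import numpy as np

def gauss_seidel_poisson(f, h, niter):
    """Gauss-Seidel sweeps for -u'' = f on (0,1) with u(0) = u(1) = 0,
    using the standard second-order finite-difference stencil."""
    u = np.zeros_like(f)
    for _ in range(niter):
        for i in range(len(u)):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < len(u) - 1 else 0.0
            u[i] = 0.5 * (left + right + h * h * f[i])  # solve stencil at i
    return u

n = 63
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x)   # exact solution: u = sin(pi x)
u = gauss_seidel_poisson(f, h, niter=20000)
```

Tens of thousands of sweeps are needed even for this tiny 1-D problem because the convergence factor approaches one as the grid is refined, which is exactly why avoiding iterative relaxation gives the paper's method its large speed-up.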

  14. A discontinuous Galerkin method for the shallow water equations in spherical triangular coordinates

    NASA Astrophysics Data System (ADS)

    Läuter, Matthias; Giraldo, Francis X.; Handorf, Dörthe; Dethloff, Klaus

    2008-12-01

    A global model of the atmosphere is presented, governed by the shallow water equations and discretized by a Runge-Kutta discontinuous Galerkin method on an unstructured triangular grid. The shallow water equations on the sphere, a two-dimensional surface in R3, are locally represented in terms of spherical triangular coordinates, the appropriate local coordinate mappings on triangles. On every triangular grid element, this leads to a two-dimensional representation of tangential momentum and therefore only two discrete momentum equations. The discontinuous Galerkin method consists of an integral formulation which requires both area (elements) and line (element faces) integrals. Here, we use a Rusanov numerical flux to resolve the discontinuous fluxes at the element faces. A strong stability-preserving third-order Runge-Kutta method is applied for the time discretization. The polynomial space of order k on each curved triangle of the grid is characterized by a Lagrange basis and requires high-order quadrature rules for the integration over elements and element faces. For the presented method, no mass matrix inversion is necessary, except in a preprocessing step. The validation of the atmospheric model has been done considering standard tests from Williamson et al. [D.L. Williamson, J.B. Drake, J.J. Hack, R. Jakob, P.N. Swarztrauber, A standard test set for numerical approximations to the shallow water equations in spherical geometry, J. Comput. Phys. 102 (1992) 211-224], unsteady analytical solutions of the nonlinear shallow water equations and a barotropic instability caused by an initial perturbation of a jet stream. A convergence rate of O(Δx) was observed in the model experiments. Furthermore, a numerical experiment is presented, for which the third-order time-integration method limits the model error. Thus, the time step Δt is restricted by both the CFL-condition and accuracy demands. 
Conservation of mass was shown up to machine precision and energy conservation converges for both increasing grid resolution and increasing polynomial order k.
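The strong-stability-preserving third-order Runge-Kutta (SSP-RK3) time discretization mentioned above is a short, standard recipe: three convex combinations of forward-Euler stages. A sketch applying it to periodic linear advection with a first-order upwind operator standing in for the DG spatial discretization:

```python
import numpy as np

def ssp_rk3_step(u, dt, L):
    """SSP-RK3 (Shu-Osher form): convex combinations of Euler stages,
    so any stability property of forward Euler (e.g. TVD) is preserved."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))

# Model problem: u_t + u_x = 0 on a periodic grid (upwind stand-in for DG).
n, c = 200, 1.0
dx = 1.0 / n
x = np.arange(n) * dx
L = lambda u: -c * (u - np.roll(u, 1)) / dx   # first-order upwind operator
u = np.sin(2 * np.pi * x)
dt = 0.4 * dx                                  # CFL-limited time step
for _ in range(int(round(1.0 / dt))):          # advect once around the domain
    u = ssp_rk3_step(u, dt, L)
```

Because each stage is a convex combination of Euler updates under the CFL restriction, the solution creates no new extrema, which is the "strong stability" the DG model relies on near discontinuities.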

  15. The Standard for Clinicians’ Interview in Psychiatry (SCIP): A Clinician-administered Tool with Categorical, Dimensional, and Numeric Output—Conceptual Development, Design, and Description of the SCIP

    PubMed Central

    Aboraya, Ahmed; Nasrallah, Henry; Muvvala, Srinivas; El-Missiry, Ahmed; Mansour, Hader; Hill, Cheryl; Elswick, Daniel; Price, Elizabeth C.

    2016-01-01

    Existing standardized diagnostic interviews (SDIs) were designed for researchers and produce mainly categorical diagnoses. There is an urgent need for a clinician-administered tool that produces dimensional measures, in addition to categorical diagnoses. The Standard for Clinicians’ Interview in Psychiatry (SCIP) is a method of assessment of psychopathology for adults. It is designed to be administered by clinicians and includes the SCIP manual and the SCIP interview. Clinicians use the SCIP questions and rate the responses according to the SCIP manual rules. Clinicians use the patient’s responses to questions, observe the patient’s behaviors and make the final rating of the various signs and symptoms assessed. The SCIP method of psychiatric assessment has three components: 1) the SCIP interview (dimensional) component, 2) the etiological component, and 3) the disorder classification component. The SCIP produces three main categories of clinical data: 1) a diagnostic classification of psychiatric disorders, 2) dimensional scores, and 3) numeric data. The SCIP provides diagnoses consistent with criteria from editions of the Diagnostic and Statistical Manual (DSM) and International Classification of Disease (ICD). The SCIP produces 18 dimensional measures for key psychiatric signs or symptoms: anxiety, posttraumatic stress, obsessions, compulsions, depression, mania, suicidality, suicidal behavior, delusions, hallucinations, agitation, disorganized behavior, negativity, catatonia, alcohol addiction, drug addiction, attention, and hyperactivity. The SCIP produces numeric severity data for use in either clinical care or research. The SCIP was shown to be a valid and reliable assessment tool, and the validity and reliability results were published in 2014 and 2015. The SCIP is compatible with personalized psychiatry research and is in line with the Research Domain Criteria framework. PMID:27800284

  16. Self-Consistent Chaotic Transport in a High-Dimensional Mean-Field Hamiltonian Map Model

    DOE PAGES

    Martínez-del-Río, D.; del-Castillo-Negrete, D.; Olvera, A.; ...

    2015-10-30

    We studied the self-consistent chaotic transport in a Hamiltonian mean-field model. This model provides a simplified description of transport in marginally stable systems including vorticity mixing in strong shear flows and electron dynamics in plasmas. Self-consistency is incorporated through a mean field that couples all the degrees of freedom. The model is formulated as a large set of N coupled standard-like area-preserving twist maps in which the amplitude and phase of the perturbation, rather than being constant as in the standard map, are dynamical variables. Of particular interest is the study of the impact of periodic orbits on the chaotic transport and coherent structures. Furthermore, numerical simulations show that self-consistency leads to the formation of a coherent macro-particle trapped around the elliptic fixed point of the system, which appears together with an asymptotic periodic behavior of the mean field. To model this asymptotic state, we introduced a non-autonomous map that allows a detailed study of the onset of global transport. A turnstile-type transport mechanism that allows transport across instantaneous KAM invariant circles in non-autonomous systems is discussed. As a first step toward understanding transport, we study a special type of orbits referred to as sequential periodic orbits. Using symmetry properties we show that, through replication, high-dimensional sequential periodic orbits can be generated starting from low-dimensional periodic orbits. We show that sequential periodic orbits in the self-consistent map can be continued from trivial (uncoupled) periodic orbits of standard-like maps using numerical and asymptotic methods. Normal forms are used to describe these orbits and to find the values of the map parameters that guarantee their existence. Numerical simulations are used to verify the predictions of the asymptotic methods.
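The constant-amplitude limit of the standard-like maps in this model is the Chirikov standard map, an area-preserving twist map whose iteration is two lines. A minimal sketch (the self-consistent model couples N such maps through a dynamical amplitude and phase, which is not reproduced here):

```python
import numpy as np

def standard_map(theta, p, K, nsteps):
    """Chirikov standard map: p' = p + K sin(theta), theta' = theta + p'.
    K is the perturbation amplitude (constant here, dynamical in the
    self-consistent model)."""
    for _ in range(nsteps):
        p = p + K * np.sin(theta)
        theta = (theta + p) % (2.0 * np.pi)
    return theta, p

# Below the critical value K ~ 0.9716, rotational KAM invariant circles
# survive and bound the momentum excursion of every orbit.
theta, p = standard_map(theta=1.0, p=0.2, K=0.5, nsteps=10000)
```

For K = 0.5 the orbit's momentum stays trapped between surviving KAM circles, illustrating the transport barriers whose non-autonomous analogues the paper's turnstile mechanism crosses.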

  19. Low Dissipative High Order Shock-Capturing Methods Using Characteristic-Based Filters

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sandham, N. D.; Djomehri, M. J.

    1998-01-01

    An approach which closely maintains the non-dissipative nature of classical fourth- or higher-order spatial differencing away from shock waves and steep gradient regions, while being capable of accurately capturing discontinuities, steep gradients and fine-scale turbulent structures in a stable and efficient manner, is described. The approach is a generalization of the method of Gustafsson and Olsson and the artificial compression method (ACM) of Harten. Spatially non-dissipative fourth- or higher-order compact and non-compact spatial differencings are used as the base schemes. Instead of applying a scalar filter as in Gustafsson and Olsson, an ACM-like term is used to signal the appropriate amount of second- or third-order TVD or ENO types of characteristic-based numerical dissipation. This term acts as a characteristic filter to minimize numerical dissipation for the overall scheme. For time-accurate computations, time discretizations with low dissipation are used. Numerical experiments on 2-D vortical flows, vortex-shock interactions and compressible spatially and temporally evolving mixing layers showed that the proposed schemes have the desired property with only a 10% increase in operations count over standard second-order TVD schemes. Aside from the ability to accurately capture shock-turbulence interaction flows, this approach is also capable of accurately preserving vortex convection. Higher accuracy is achieved with fewer grid points when compared to that of standard second-order TVD or ENO schemes. To demonstrate the applicability of these schemes in sustaining turbulence where shock waves are absent, a simulation of 3-D compressible turbulent channel flow in a small domain is conducted.

  1. System Simulation by Recursive Feedback: Coupling A Set of Stand-Alone Subsystem Simulations

    NASA Technical Reports Server (NTRS)

    Nixon, Douglas D.; Hanson, John M. (Technical Monitor)

    2002-01-01

    Recursive feedback is defined and discussed as a framework for development of specific algorithms and procedures that propagate the time-domain solution for a dynamical system simulation consisting of multiple numerically coupled self-contained stand-alone subsystem simulations. A satellite motion example containing three subsystems (orbit dynamics, attitude dynamics, and aerodynamics) has been defined and constructed using this approach. Conventional solution methods are used in the subsystem simulations. Centralized and distributed versions of the coupling structure have been addressed. Numerical results are evaluated by direct comparison with a standard total-system simultaneous-solution approach.
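As a hedged sketch of the recursive-feedback idea (a toy example, not the record's algorithm), two stand-alone subsystem steppers can be iterated within each time step, each fed the other's latest output, until the coupled solution stops changing. The two linear subsystems and all names below are hypothetical:

```python
# Toy recursive-feedback coupling: subsystem 1 integrates dx/dt = -x + y,
# subsystem 2 integrates dy/dt = -2*y + x, each as a self-contained stepper.
def step_x(x, y_in, dt):
    return x + dt * (-x + y_in)

def step_y(y, x_in, dt):
    return y + dt * (-2.0 * y + x_in)

def coupled_step(x, y, dt, tol=1e-12, max_iter=50):
    """Re-run both subsystems over [t, t+dt], feeding back the other's
    latest estimate, until the coupled solution converges (fixed point)."""
    x_new, y_new = x, y
    for _ in range(max_iter):
        x_next = step_x(x, y_new, dt)
        y_next = step_y(y, x_new, dt)
        if abs(x_next - x_new) + abs(y_next - y_new) < tol:
            return x_next, y_next
        x_new, y_new = x_next, y_next
    return x_new, y_new

x, y = 1.0, 0.0
for _ in range(100):
    x, y = coupled_step(x, y, dt=0.01)
print(x, y)
```

The inner loop is the "recursive" part; with small dt it is a contraction and converges in a few iterations.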

  2. Double row equivalent for rotator cuff repair: A biomechanical analysis of a new technique.

    PubMed

    Robinson, Sean; Krigbaum, Henry; Kramer, Jon; Purviance, Connor; Parrish, Robin; Donahue, Joseph

    2018-06-01

    There are numerous configurations of double-row fixation for rotator cuff tears; however, there is as yet no consensus on the best method. In this study, we evaluated three different double-row configurations, including a new method. Our primary question is whether the new anchor and technique compare in biomechanical strength to standard double-row techniques. Eighteen prepared fresh-frozen bovine infraspinatus tendons were randomized to one of three groups: the New Double Row Equivalent, the Arthrex Speedbridge, and a transosseous equivalent using standard Stabilynx anchors. Biomechanical testing was performed on humeri sawbones, and ultimate load, strain, yield strength, contact area, contact pressure, and survival plots were evaluated. The new double row equivalent method demonstrated increased survival as well as ultimate strength at 415 N compared with the remaining test groups, as well as contact area and pressure equivalent to standard double-row techniques. This new anchor system and technique demonstrated higher survival rates and loads to failure than standard double-row techniques. These data provide a new method of rotator cuff fixation which should be further evaluated in the clinical setting. Basic science biomechanical study.

  3. Hierarchical matrices implemented into the boundary integral approaches for gravity field modelling

    NASA Astrophysics Data System (ADS)

    Čunderlík, Róbert; Vipiana, Francesca

    2017-04-01

    Boundary integral approaches applied for gravity field modelling have been recently developed to solve the geodetic boundary value problems numerically, or to process satellite observations, e.g. from the GOCE satellite mission. In order to obtain numerical solutions of "cm-level" accuracy, such approaches require a very refined level of discretization or resolution. This leads to enormous memory requirements that need to be reduced. An implementation of Hierarchical Matrices (H-matrices) can significantly reduce the numerical complexity of these approaches. The main idea of the H-matrices is to approximate the entire system matrix by splitting it into a family of submatrices. Large submatrices are stored in factorized representation, while small submatrices are stored in standard representation. This allows reducing memory requirements significantly while improving efficiency. The poster presents our preliminary results of implementations of the H-matrices into the existing boundary integral approaches based on the boundary element method or the method of fundamental solutions.
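As an illustrative aside (a generic sketch, not the authors' code), the memory saving behind H-matrices comes from the numerical low rank of interaction blocks between well-separated point clusters; the kernel, point sets, and tolerance below are hypothetical:

```python
import numpy as np

# A smooth off-diagonal interaction block between well-separated clusters
# is numerically low-rank, so a truncated factorization U @ V.T can be
# stored instead of the full block, cutting memory from n*n to ~2*n*k.
n = 200
src = np.linspace(0.0, 1.0, n)                   # "source" points
tgt = np.linspace(3.0, 4.0, n)                   # well-separated "target" points
A = 1.0 / np.abs(tgt[:, None] - src[None, :])    # smooth kernel block

U, s, Vt = np.linalg.svd(A)
k = int(np.sum(s > 1e-10 * s[0]))                # numerical rank at tolerance
A_lr = (U[:, :k] * s[:k]) @ Vt[:k, :]            # factorized representation

rel_err = np.linalg.norm(A - A_lr) / np.linalg.norm(A)
print(k, rel_err)                                # k << n, tiny error
```

Production H-matrix codes build such factorizations without ever forming the full block (e.g. by adaptive cross approximation); the SVD here just demonstrates the rank structure.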

  4. Numerically stable finite difference simulation for ultrasonic NDE in anisotropic composites

    NASA Astrophysics Data System (ADS)

    Leckey, Cara A. C.; Quintanilla, Francisco Hernando; Cole, Christina M.

    2018-04-01

    Simulation tools can enable optimized inspection of advanced materials and complex geometry structures. Recent work at NASA Langley is focused on the development of custom simulation tools for modeling ultrasonic wave behavior in composite materials. Prior work focused on the use of a standard staggered grid finite difference type of mathematical approach, by implementing a three-dimensional (3D) anisotropic Elastodynamic Finite Integration Technique (EFIT) code. However, observations showed that the anisotropic EFIT method displays numerically unstable behavior at the locations of stress-free boundaries for some cases of anisotropic materials. This paper gives examples of the numerical instabilities observed for EFIT and discusses the source of instability. As an alternative to EFIT, the 3D Lebedev Finite Difference (LFD) method has been implemented. The paper briefly describes the LFD approach and shows examples of stable behavior in the presence of stress-free boundaries for a monoclinic anisotropy case. The LFD results are also compared to experimental results and dispersion curves.

  5. A DG approach to the numerical solution of the Stein-Stein stochastic volatility option pricing model

    NASA Astrophysics Data System (ADS)

    Hozman, J.; Tichý, T.

    2017-12-01

    Stochastic volatility models capture the real-world features of options better than the classical Black-Scholes treatment. Here we focus on pricing of European-style options under the Stein-Stein stochastic volatility model, where the option value depends on the time, on the price of the underlying asset and on the volatility as a function of a mean-reverting Ornstein-Uhlenbeck process. A standard mathematical approach to this model leads to a non-stationary second-order degenerate partial differential equation in two spatial variables, completed by a system of boundary and terminal conditions. In order to improve the numerical valuation process for such a pricing equation, we propose a numerical technique based on the discontinuous Galerkin method and the Crank-Nicolson scheme. Finally, reference numerical experiments on real market data illustrate comprehensive empirical findings on options with stochastic volatility.
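The Crank-Nicolson component of such schemes is standard; as a hedged illustration (a toy 1-D heat equation, not the Stein-Stein pricing PDE or the record's discontinuous Galerkin discretization), one time step averages the implicit and explicit operators:

```python
import numpy as np

# Crank-Nicolson for u_t = u_xx on [0, 1], u = 0 at both ends:
# (I - dt/2 L) u^{n+1} = (I + dt/2 L) u^n, second order in dt and dx.
nx, nt, dt = 50, 100, 1e-3
x = np.linspace(0.0, 1.0, nx + 1)
dx = x[1] - x[0]
u = np.sin(np.pi * x)                          # initial condition

off = np.ones(nx - 2)
L = (np.diag(-2.0 * np.ones(nx - 1)) + np.diag(off, 1) + np.diag(off, -1)) / dx**2
I = np.eye(nx - 1)
A = I - 0.5 * dt * L                           # implicit half of the step
B = I + 0.5 * dt * L                           # explicit half of the step

for _ in range(nt):
    u[1:-1] = np.linalg.solve(A, B @ u[1:-1])  # interior nodes only

exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * nt * dt)
err = np.max(np.abs(u - exact))
print(err)
```

The grid sizes and final time are arbitrary; a pricing application would replace L with the degenerate two-dimensional operator described in the abstract.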

  6. Spectral Elements Analysis for Viscoelastic Fluids at High Weissenberg Number Using Logarithmic conformation Tensor Model

    NASA Astrophysics Data System (ADS)

    Jafari, Azadeh; Deville, Michel O.; Fiétier, Nicolas

    2008-09-01

    This study discusses the capability of the constitutive laws for the matrix logarithm of the conformation tensor (LCT model) within the framework of the spectral element method. The high Weissenberg number problem (HWNP) usually produces a lack of convergence of the numerical algorithms. Even though the question whether the HWNP is a purely numerical problem or rather a breakdown of the constitutive law of the model has remained somewhat of a mystery, it has been recognized that the selection of an appropriate constitutive equation constitutes a very crucial step, although implementing a suitable numerical technique is still important for successful discrete modeling of non-Newtonian flows. The LCT formulation of the viscoelastic equations originally suggested by Fattal and Kupferman is applied to the two-dimensional (2D) FENE-CR model. Planar Poiseuille flow is considered as a benchmark problem to test this representation at high Weissenberg number. The numerical results are compared with the numerical solution of the standard constitutive equation.

  7. Dynamics of inductors for heating of the metal under deformation

    NASA Astrophysics Data System (ADS)

    Zimin, L. S.; Yeghiazaryan, A. S.; Protsenko, A. N.

    2018-01-01

    Current issues in creating powerful systems for hot sheet rolling with induction heating for applications in mechanical engineering and metallurgy are discussed. Electrodynamic and vibroacoustic problems arising from the induction heating of objects with complex shapes, particularly the heating of slabs prior to rolling, are analysed. A numerical mathematical model using the method of related contours and the principle of virtual displacements is recommended for electrodynamic calculations. For the numerical solution of the vibrational problem, it is reasonable to use the finite element method (FEM). In general, the distribution of forces was calculated using the Biot-Savart-Laplace law, which provides the determination of the current density in the skin layer of the slab. The form of the optimal inductor design, based on maximum hardness, was synthesized by studying the vibrodynamic model of the "inductor-metal" system, which provided an allowable sound level meeting all established sanitary standards.

  8. Some problems in applications of the linear variational method

    NASA Astrophysics Data System (ADS)

    Pupyshev, Vladimir I.; Montgomery, H. E.

    2015-09-01

    The linear variational method is a standard computational method in quantum mechanics and quantum chemistry. As taught in most classes, the general guidance is to include as many basis functions as practical in the variational wave function. However, if it is desired to study the patterns of energy change accompanying the change of system parameters such as the shape and strength of the potential energy, the problem becomes more complicated. We use one-dimensional systems with a particle in a rectangular or in a harmonic potential confined in an infinite rectangular box to illustrate situations where a variational calculation can give incorrect results. These situations arise when the lowest eigenvalue is strongly dependent on the parameters that describe the shape and strength of the potential. The numerical examples described in this work are provided as cautionary notes for practitioners of numerical variational calculations.
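As an illustrative aside (a toy setup with hypothetical parameters, not the article's examples), the linear variational (Rayleigh-Ritz) method reduces to a matrix eigenvalue problem; here a particle in an infinite box [0, 1] with a rectangular barrier inside, expanded in the box eigenfunctions sqrt(2) sin(n*pi*x) with hbar = m = 1:

```python
import numpy as np

def ground_energy(n_basis, v0=50.0, a=0.4, b=0.6, n_grid=2000):
    """Lowest Rayleigh-Ritz eigenvalue for a rectangular barrier in a box."""
    x = np.linspace(0.0, 1.0, n_grid)
    V = np.where((x >= a) & (x <= b), v0, 0.0)
    # Basis functions sampled on the grid: phi[n-1, j] = sqrt(2) sin(n pi x_j)
    phi = np.sqrt(2.0) * np.sin(np.outer(np.arange(1, n_basis + 1), np.pi * x))
    # Kinetic part is diagonal in this basis; potential part by quadrature
    H = np.diag(0.5 * (np.pi * np.arange(1, n_basis + 1)) ** 2)
    H += (phi * V) @ phi.T * (x[1] - x[0])
    return np.linalg.eigvalsh(H)[0]

e5, e20 = ground_energy(5), ground_energy(20)
print(e5, e20)   # variational: the larger basis gives a lower (better) estimate
```

By the variational principle (and Cauchy interlacing, since the small Hamiltonian is a submatrix of the large one), enlarging the basis can only lower the estimate; the article's caution concerns how these estimates respond to changes in the potential parameters, which this sketch does not explore.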

  9. Conjugate gradient type methods for linear systems with complex symmetric coefficient matrices

    NASA Technical Reports Server (NTRS)

    Freund, Roland

    1989-01-01

    We consider conjugate gradient type methods for the solution of large sparse linear systems Ax = b with complex symmetric coefficient matrices A = A^T. Such linear systems arise in important applications, such as the numerical solution of the complex Helmholtz equation. Furthermore, most complex non-Hermitian linear systems which occur in practice are actually complex symmetric. We investigate conjugate gradient type iterations which are based on a variant of the nonsymmetric Lanczos algorithm for complex symmetric matrices. We propose a new approach with iterates defined by a quasi-minimal residual property. The resulting algorithm presents several advantages over the standard biconjugate gradient method. We also include some remarks on the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
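The "equivalent real system" approach mentioned in the abstract can be sketched generically (a dense toy example with an arbitrary random matrix, not the paper's Krylov iteration): write (A_r + i A_i)(x_r + i x_i) = b_r + i b_i as a real 2n-by-2n block system.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = A + A.T                        # complex symmetric: A = A^T (not Hermitian)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

Ar, Ai = A.real, A.imag
K = np.block([[Ar, -Ai],
              [Ai,  Ar]])          # real 2n x 2n equivalent system
rhs = np.concatenate([b.real, b.imag])
z = np.linalg.solve(K, rhs)
x = z[:n] + 1j * z[n:]

print(np.linalg.norm(A @ x - b))   # residual at machine-precision level
```

The paper's point is that this doubling discards the complex symmetric structure that the Lanczos-based iterations exploit; the sketch only shows what the real formulation looks like.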

  10. Incompressible Navier-Stokes Computations with Heat Transfer

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan; Rogers, Stuart; Kutler, Paul (Technical Monitor)

    1994-01-01

    The existing pseudocompressibility method for the system of incompressible Navier-Stokes equations is extended to heat transfer problems by including the energy equation. The solution method is based on the pseudocompressibility approach and uses an implicit-upwind differencing scheme together with the Gauss-Seidel line relaxation method. Current computations use the one-equation Baldwin-Barth turbulence model, which is derived from a simplified form of the standard k-epsilon model equations. Both forced and natural convection problems are examined. Numerical results from turbulent reattaching flow behind a backward-facing step will be compared against experimental measurements for the forced convection case. The validity of the Boussinesq approximation to simplify the buoyancy force term will be investigated. The natural convective flow structure generated by heat transfer in a vertical rectangular cavity will be studied. The numerical results will be compared with experimental measurements by Morrison and Tran.

  11. Traceable Coulomb blockade thermometry

    NASA Astrophysics Data System (ADS)

    Hahtela, O.; Mykkänen, E.; Kemppinen, A.; Meschke, M.; Prunnila, M.; Gunnarsson, D.; Roschier, L.; Penttilä, J.; Pekola, J.

    2017-02-01

    We present a measurement and analysis scheme for determining traceable thermodynamic temperature at cryogenic temperatures using Coulomb blockade thermometry. The uncertainty of the electrical measurement is improved by utilizing two sampling digital voltmeters instead of the traditional lock-in technique. The remaining uncertainty is dominated by that of the numerical analysis of the measurement data. Two analysis methods are demonstrated: numerical fitting of the full conductance curve and measuring the height of the conductance dip. The complete uncertainty analysis shows that, using either analysis method, the relative combined standard uncertainty (k = 1) in determining the thermodynamic temperature in the temperature range from 20 mK to 200 mK is below 0.5%. In this temperature range, both analysis methods produced temperature estimates that deviated by 0.39% to 0.67% from the reference temperatures provided by a superconducting reference point device calibrated against the Provisional Low Temperature Scale of 2000.

  12. Adaptive eigenspace method for inverse scattering problems in the frequency domain

    NASA Astrophysics Data System (ADS)

    Grote, Marcus J.; Kray, Marie; Nahum, Uri

    2017-02-01

    A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.

  13. Dementia and Mortality In Persons with Down's Syndrome

    ERIC Educational Resources Information Center

    Coppus, A.; Evenhuis, H.; Verberne, G.-J.; Visser, F.; van Gool, P.; Eikelenboom, P.; van Duijin, C.

    2006-01-01

    Background: Numerous studies have documented that persons with Down's syndrome (DS) are at an increased risk of Alzheimer's disease (AD). However, at present it is still not clear whether or not all persons with DS will develop dementia as they reach old age. Methods: We studied 506 people with DS, aged 45 years and above. A standardized assessment…

  14. Classroom versus Computer-Based CPR Training: A Comparison of the Effectiveness of Two Instructional Methods

    ERIC Educational Resources Information Center

    Rehberg, Robb S.; Gazzillo Diaz, Linda; Middlemas, David A.

    2009-01-01

    Objective: The objective of this study was to determine whether computer-based CPR training is comparable to traditional classroom training. Design and Setting: This study was quantitative in design. Data was gathered from a standardized examination and skill performance evaluation which yielded numerical scores. Subjects: The subjects were 64…

  15. Using Least Squares for Error Propagation

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2015-01-01

    The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
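As an illustrative sketch of the procedure described (synthetic straight-line data, not from the article), the parameter standard errors come from the square roots of the diagonal of the covariance matrix returned by the fit:

```python
import numpy as np

# Synthetic data, roughly y = 2x with small scatter
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.1, 1.9, 4.1, 5.8, 8.2, 9.9])

# cov=True returns the parameter covariance matrix alongside the coefficients
coef, cov = np.polyfit(x, y, 1, cov=True)
slope, intercept = coef                       # highest degree first
se_slope, se_intercept = np.sqrt(np.diag(cov))
print(slope, se_slope, intercept, se_intercept)
```

Propagated errors for a derived quantity follow by including its definition among the fit parameters, or from the full covariance matrix via the usual first-order propagation formula.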

  16. Network-induced chaos in integrate-and-fire neuronal ensembles.

    PubMed

    Zhou, Douglas; Rangan, Aaditya V; Sun, Yi; Cai, David

    2009-09-01

    It has been shown that a single standard linear integrate-and-fire (IF) neuron under a general time-dependent stimulus cannot possess chaotic dynamics despite the firing-reset discontinuity. Here we address the issue of whether conductance-based, pulsed-coupled network interactions can induce chaos in an IF neuronal ensemble. Using numerical methods, we demonstrate that all-to-all, homogeneously pulse-coupled IF neuronal networks can indeed give rise to chaotic dynamics under an external periodic current drive. We also provide a precise characterization of the largest Lyapunov exponent for these high dimensional nonsmooth dynamical systems. In addition, we present a stable and accurate numerical algorithm for evaluating the largest Lyapunov exponent, which can overcome difficulties encountered by traditional methods for these nonsmooth dynamical systems with degeneracy induced by, e.g., refractoriness of neurons.
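As an illustrative aside, the textbook (Benettin-style) recipe for the largest Lyapunov exponent of a smooth map averages log |f'(x)| along an orbit; the sketch below uses the logistic map as a stand-in, and does not handle the nonsmooth, high-dimensional IF network dynamics that are the record's actual subject:

```python
import math

def largest_lyapunov(r, x0=0.1, n_steps=100_000, n_discard=1000):
    """Largest Lyapunov exponent of the logistic map x -> r x (1 - x)."""
    x = x0
    for _ in range(n_discard):        # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_steps):
        x = r * x * (1.0 - x)
        total += math.log(abs(r * (1.0 - 2.0 * x)))   # log |f'(x)|
    return total / n_steps

lam = largest_lyapunov(4.0)
print(lam)   # analytic value for r = 4 is ln 2 ≈ 0.6931
```

For flows or nonsmooth systems the tangent dynamics must be integrated and renormalized explicitly, which is where the record's refractoriness-induced degeneracies cause trouble for standard methods.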

  17. Numerical model of the polymer electro-optic waveguide

    NASA Astrophysics Data System (ADS)

    Fan, Guofang; Li, Yuan; Han, Bing; Wang, Qi; Liu, Xinhou; Zhen, Zhen

    2012-09-01

    A numerical design model is presented for the polymer waveguide in an electro-optic modulator. The effective index method is used to analyze the heights of the core waveguide and rib waveguide, and an improved Marcatili method is presented to design the rib waveguide width in order to maintain strong single-mode operation and a good match with standard fiber. The thickness of the upper cladding layer is also determined by calculating the effective index of the multilayer planar waveguide structure and setting the optical loss due to metallic absorption to an acceptable value (<0.1 dB/cm). Taking the EO polymer waveguide structure UV15:CLD/APC:UFC170 as an example, an optimized design is reported.

  18. A generic standard additions based method to determine endogenous analyte concentrations by immunoassays to overcome complex biological matrix interference.

    PubMed

    Pang, Susan; Cowen, Simon

    2017-12-13

    We describe a novel generic method to derive the unknown endogenous concentrations of analyte within complex biological matrices (e.g. serum or plasma) based upon the relationship between the immunoassay signal response of a biological test sample spiked with known analyte concentrations and the log-transformed estimated total concentration. If the estimated total analyte concentration is correct, a portion of the sigmoid on a log-log plot is very close to linear, allowing the unknown endogenous concentration to be estimated using a numerical method. This approach obviates conventional relative quantification using an internal standard curve and the need for calibrant diluent, and takes into account the individual matrix interference in the immunoassay by spiking the test sample itself. The technique is based on the method of standard additions used for chemical analytes. Unknown endogenous analyte concentrations within even 2-fold diluted human plasma may be determined reliably using as few as four reaction wells.
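As an illustrative aside, the classical standard-additions technique on which this method builds can be sketched with synthetic linear data (the concentrations and gain below are hypothetical; the record's immunoassay variant works on the log-transformed sigmoid instead):

```python
# Classical standard additions: spike the sample itself with known analyte
# amounts, fit signal vs added concentration, and recover the endogenous
# concentration from the extrapolated x-intercept (intercept / slope).
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

added = [0.0, 1.0, 2.0, 4.0]               # spiked concentrations (four wells)
true_c0, k = 1.5, 10.0                     # hypothetical endogenous conc., assay gain
signal = [k * (true_c0 + a) for a in added]

slope, intercept = fit_line(added, signal)
c0 = intercept / slope                     # endogenous concentration estimate
print(c0)   # → 1.5
```

Because the additions are made into the sample itself, the matrix interference affects every point equally and cancels in the extrapolation, which is the property the record carries over to immunoassays.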

  19. Navigation strategy and filter design for solar electric missions

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Hagar, H., Jr.

    1972-01-01

    Methods which have been proposed to improve the navigation accuracy for low-thrust space vehicles include modifications to the standard sequential- and batch-type orbit determination procedures and the use of inertial measuring units (IMU) which measure directly the acceleration applied to the vehicle. The navigation accuracy obtained using one of the more promising modifications to the orbit determination procedures is compared with a combined IMU-Standard Orbit Determination approach. The unknown accelerations are approximated as both first-order and second-order Gauss-Markov processes. The comparison is based on numerical results obtained in a study of the navigation requirements of a numerically simulated 152-day low-thrust mission to the asteroid Eros. The results obtained in the simulation indicate that the DMC algorithm will yield a significant improvement over the navigation accuracies achieved with previous estimation algorithms. In addition, the DMC algorithms will yield better navigation accuracies than the IMU-Standard Orbit Determination algorithm, except for extremely precise IMU measurements, i.e., gyro-platform alignment of 0.01 deg and an accelerometer signal-to-noise ratio of 0.07. Unless these accuracies are achieved, the IMU navigation accuracies are generally unacceptable.

  20. Particle-based and meshless methods with Aboria

    NASA Astrophysics Data System (ADS)

    Robinson, Martin; Bruna, Maria

    Aboria is a powerful and flexible C++ library for the implementation of particle-based numerical methods. The particles in such methods can represent actual particles (e.g. Molecular Dynamics) or abstract particles used to discretise a continuous function over a domain (e.g. Radial Basis Functions). Aboria provides a particle container, compatible with the Standard Template Library, spatial search data structures, and a Domain Specific Language to specify non-linear operators on the particle set. This paper gives an overview of Aboria's design, an example of use, and a performance benchmark.

  1. Measurement Uncertainty of Dew-Point Temperature in a Two-Pressure Humidity Generator

    NASA Astrophysics Data System (ADS)

    Martins, L. Lages; Ribeiro, A. Silva; Alves e Sousa, J.; Forbes, Alistair B.

    2012-09-01

    This article describes the measurement uncertainty evaluation of the dew-point temperature when using a two-pressure humidity generator as a reference standard. The estimation of the dew-point temperature involves the solution of a non-linear equation for which iterative solution techniques, such as the Newton-Raphson method, are required. Previous studies have already been carried out using the GUM method and the Monte Carlo method but have not discussed the impact of the approximate numerical method used to provide the temperature estimation. One of the aims of this article is to take this approximation into account. Following the guidelines presented in the GUM Supplement 1, two alternative approaches can be developed: the forward measurement uncertainty propagation by the Monte Carlo method when using the Newton-Raphson numerical procedure; and the inverse measurement uncertainty propagation by Bayesian inference, based on prior available information regarding the usual dispersion of values obtained by the calibration process. The measurement uncertainties obtained using these two methods can be compared with previous results. Other relevant issues concerning this research are the broad application to measurements that require hygrometric conditions obtained from two-pressure humidity generators and, also, the ability to provide a solution that can be applied to similar iterative models. The research also studied the factors influencing both the use of the Monte Carlo method (such as the seed value and the convergence parameter) and the inverse uncertainty propagation using Bayesian inference (such as the pre-assigned tolerance, prior estimate, and standard deviation) in terms of their accuracy and adequacy.
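As an illustrative aside (a toy implicit model, not the dew-point equation), the forward Monte Carlo propagation through an iterative Newton-Raphson solve described here can be sketched generically; the model x*exp(x) = y, the input value, and its uncertainty are all hypothetical:

```python
import math, random

def solve_newton(y, x0=1.0, tol=1e-12, max_iter=50):
    """Newton-Raphson solution of the toy implicit model x*exp(x) = y."""
    x = x0
    for _ in range(max_iter):
        f = x * math.exp(x) - y
        fp = (x + 1.0) * math.exp(x)       # derivative of x*exp(x)
        x_new = x - f / fp
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# GUM Supplement 1 style: draw the input from its assigned distribution,
# push each draw through the iterative solver, summarize the output sample.
random.seed(1)
y0, u_y = 5.0, 0.1                         # measured input and its std. uncertainty
draws = [solve_newton(random.gauss(y0, u_y)) for _ in range(20_000)]
mean = sum(draws) / len(draws)
u_x = (sum((d - mean) ** 2 for d in draws) / (len(draws) - 1)) ** 0.5
print(mean, u_x)
```

The article's point is that the Newton-Raphson approximation error itself should be included in the budget; in this sketch tol is tightened until that contribution is negligible.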

  2. Corrected simulations for one-dimensional diffusion processes with naturally occurring boundaries.

    PubMed

    Shafiey, Hassan; Gan, Xinjun; Waxman, David

    2017-11-01

    To simulate a diffusion process, a usual approach is to discretize the time in the associated stochastic differential equation. This is the approach used in the Euler method. In the present work we consider a one-dimensional diffusion process where the terms occurring, within the stochastic differential equation, prevent the process entering a region. The outcome is a naturally occurring boundary (which may be absorbing or reflecting). A complication occurs in a simulation of this situation. The term involving a random variable, within the discretized stochastic differential equation, may take a trajectory across the boundary into a "forbidden region." The naive way of dealing with this problem, which we refer to as the "standard" approach, is simply to reset the trajectory to the boundary, based on the argument that crossing the boundary actually signifies achieving the boundary. In this work we show, within the framework of the Euler method, that such resetting introduces a spurious force into the original diffusion process. This force may have a significant influence on trajectories that come close to a boundary. We propose a corrected numerical scheme, for simulating one-dimensional diffusion processes with naturally occurring boundaries. This involves correcting the standard approach, so that an exact property of the diffusion process is precisely respected. As a consequence, the proposed scheme does not introduce a spurious force into the dynamics. We present numerical test cases, based on exactly soluble one-dimensional problems with one or two boundaries, which suggest that, for a given value of the discrete time step, the proposed scheme leads to substantially more accurate results than the standard approach. Alternatively, the standard approach needs considerably more computation time to obtain a comparable level of accuracy to the proposed scheme, because the standard approach requires a significantly smaller time step.
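As an illustrative aside (a toy drift and diffusion, not the paper's test cases), the "standard" resetting approach that the paper analyzes looks like this in an Euler-Maruyama step; the corrected scheme the paper proposes replaces the bare reset so that an exact property of the diffusion is respected:

```python
import random

def euler_step_with_reset(x, a, b, dt, rng):
    """Euler(-Maruyama) step for dX = a(X) dt + b(X) dW on X >= 0, with the
    naive 'standard' reset: any step landing in the forbidden region X < 0
    is placed back on the boundary. The paper shows this injects a spurious
    force near X = 0."""
    dw = rng.gauss(0.0, dt ** 0.5)
    x_new = x + a(x) * dt + b(x) * dw
    return max(x_new, 0.0)

rng = random.Random(42)
a = lambda x: -0.5 * x      # toy drift toward the origin
b = lambda x: 1.0           # toy constant noise amplitude
x = 1.0
for _ in range(10_000):
    x = euler_step_with_reset(x, a, b, dt=1e-3, rng=rng)
print(x)                    # trajectory confined to [0, inf)
```

Trajectories that rarely approach the boundary are barely affected, which is why the bias is easy to miss; it matters precisely for orbits that linger near X = 0.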

  4. Gradient augmented level set method for phase change simulations

    NASA Astrophysics Data System (ADS)

    Anumolu, Lakshman; Trujillo, Mario F.

    2018-01-01

    A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented Level Set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ(t), at the subgrid level, ii) discontinuous treatment of thermophysical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ(t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ(t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ(t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS than with the standard level set method. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall, the additional computational costs associated with GALS are almost the same as those of the standard level set technique.

  5. Recursive regularization step for high-order lattice Boltzmann methods

    NASA Astrophysics Data System (ADS)

    Coreixas, Christophe; Wissocq, Gauthier; Puigt, Guillaume; Boussuge, Jean-François; Sagaut, Pierre

    2017-09-01

    A lattice Boltzmann method (LBM) with enhanced stability and accuracy is presented for various Hermite tensor-based lattice structures. The collision operator relies on a regularization step, which is here improved through a recursive computation of nonequilibrium Hermite polynomial coefficients. In addition to the reduced computational cost of this procedure with respect to the standard one, the recursive step makes it possible to considerably enhance the stability and accuracy of the numerical scheme by properly filtering out second- (and higher-) order nonhydrodynamic contributions in under-resolved conditions. This is first shown in the isothermal case, where the simulation of the doubly periodic shear layer is performed with Reynolds numbers ranging from 10^4 to 10^6, and where a thorough analysis of the case at Re = 3×10^4 is conducted. In the latter, results obtained using both regularization steps are compared against the Bhatnagar-Gross-Krook LBM for standard (D2Q9) and high-order (D2V17 and D2V37) lattice structures, confirming the tremendous increase in the stability range of the proposed approach. Further comparisons on thermal and fully compressible flows, using the general extension of this procedure, are then conducted through the numerical simulation of Sod shock tubes with the D2V37 lattice. They confirm the stability increase induced by the recursive approach as compared with the standard one.
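
    As a point of reference for the regularization idea, the sketch below (a minimal illustration, not the authors' recursive scheme) implements the standard regularized-collision ingredient on a D2Q9 lattice: the non-equilibrium part of the populations is rebuilt from its second-order Hermite moment alone, which filters higher-order non-hydrodynamic content while leaving mass and momentum untouched:

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights, and cs^2 = 1/3
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1.0 / 3.0

def f_eq(rho, u):
    """Second-order equilibrium populations."""
    cu = c @ u
    return rho * w * (1 + cu/cs2 + cu**2/(2*cs2**2) - (u @ u)/(2*cs2))

def regularize(f, rho, u):
    """Standard regularization: rebuild the non-equilibrium part from its
    second-order Hermite moment only (higher moments are filtered out)."""
    fneq = f - f_eq(rho, u)
    # second-order non-equilibrium moment Pi^neq = sum_i fneq_i c_i c_i
    pi_neq = np.einsum('i,ia,ib->ab', fneq, c, c)
    # second-order Hermite tensor Q_i = c_i c_i - cs^2 I
    Q = np.einsum('ia,ib->iab', c, c) - cs2 * np.eye(2)
    f1 = w / (2 * cs2**2) * np.einsum('iab,ab->i', Q, pi_neq)
    return f_eq(rho, u) + f1

# demo: perturb an equilibrium state, then regularize
rng = np.random.default_rng(1)
f = f_eq(1.0, np.array([0.05, -0.02])) + 1e-3 * rng.normal(size=9)
rho, u = f.sum(), (c.T @ f) / f.sum()
f_reg = regularize(f, rho, u)
```

    A full LBM solver would follow this with BGK-type relaxation of the regularized non-equilibrium part and streaming; the paper's recursive variant instead computes the higher-order Hermite coefficients recursively from the second-order ones.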

  6. A Numerical Method for Solving the 3D Unsteady Incompressible Navier-Stokes Equations in Curvilinear Domains with Complex Immersed Boundaries.

    PubMed

    Ge, Liang; Sotiropoulos, Fotis

    2007-08-01

    A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g., the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [1]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus.

  7. A Numerical Method for Solving the 3D Unsteady Incompressible Navier-Stokes Equations in Curvilinear Domains with Complex Immersed Boundaries

    PubMed Central

    Ge, Liang; Sotiropoulos, Fotis

    2008-01-01

    A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g., the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [1]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus. PMID:19194533

  8. Computation of Sound Propagation by Boundary Element Method

    NASA Technical Reports Server (NTRS)

    Guo, Yueping

    2005-01-01

    This report documents the development of a Boundary Element Method (BEM) code for the computation of sound propagation in uniform mean flows. The basic formulation and implementation follow the standard BEM methodology; the convective wave equation and the boundary conditions on the surfaces of the bodies in the flow are formulated into an integral equation, and the method of collocation is used to discretize this equation into a matrix equation to be solved numerically. New features discussed here include the formulation of the additional terms due to the effects of the mean flow and the treatment of the numerical singularities in the implementation by the method of collocation. The effects of mean flows introduce terms in the integral equation that contain the gradients of the unknown, which is undesirable: if the gradients are treated as additional unknowns, the size of the matrix equation is greatly increased, and if numerical differentiation is used to approximate the gradients, numerical error is introduced into the computation. It is shown that these terms can be reformulated in terms of the unknown itself, making the integral equation very similar to the case without mean flows and simple for numerical implementation. To avoid asymptotic analysis in the treatment of numerical singularities in the method of collocation, as is conventionally done, we perform the surface integrations in the integral equation by using sub-triangles so that the field point never coincides with the evaluation points on the surfaces. This simplifies the formulation and greatly facilitates the implementation. To validate the method and the code, three canonical problems are studied: the sound scattering by a sphere, the sound reflection by a plate in uniform mean flows, and the sound propagation over a hump of irregular shape in uniform flows.
The first two have analytical solutions and the third is solved by the method of Computational Aeroacoustics (CAA), all of which are used for comparison with the BEM solutions. The comparisons show very good agreement and validate the accuracy of the BEM approach implemented here.

  9. New Numerical Approaches To thermal Convection In A Compositionally Stratified Fluid

    NASA Astrophysics Data System (ADS)

    Puckett, E. G.; Turcotte, D. L.; Kellogg, L. H.; Lokavarapu, H. V.; He, Y.; Robey, J.

    2016-12-01

    Seismic imaging of the mantle has revealed large and small scale heterogeneities in the lower mantle; specifically, structures known as large low shear velocity provinces (LLSVPs) below Africa and the South Pacific. Most interpretations propose that the heterogeneities are compositional in nature, differing from the overlying mantle, an interpretation that would be consistent with chemical geodynamic models. The LLSVPs are thought to be very old, meaning they have persisted throughout much of Earth's history. Numerical modeling of persistent compositional interfaces presents challenges to even state-of-the-art numerical methodology. It is extremely difficult to maintain sharp composition boundaries, which migrate and distort with time-dependent fingering, without compositional diffusion and/or artificial diffusion. The compositional boundary must persist indefinitely. In this work we present computations of an initially compositionally stratified fluid that is subject to a thermal gradient ΔT = T1 - T0 across the height D of a rectangular domain over a range of buoyancy numbers B and Rayleigh numbers Ra. In these computations we compare three numerical approaches to modeling the movement of two distinct, thermally driven compositional fields; namely, a high-order Finite Element Method (FEM) that employs artificial viscosity to preserve the maximum and minimum values of the compositional field, a Discontinuous Galerkin (DG) method with a Bound Preserving (BP) limiter, and a Volume-of-Fluid (VOF) interface tracking algorithm. Our computations demonstrate that the FEM approach has far too much numerical diffusion to yield meaningful results, the DGBP method yields much better results but with small amounts of each compositional field being (numerically) entrained within the other compositional field, while the VOF method maintains a sharp interface between the two compositions throughout the computation.
In the figure we show a comparison between the three methods for a computation made with B = 1.111 and Ra = 10,000 after the flow has reached 'steady state': (R) the images computed with the standard FEM method (with artificial viscosity), (C) the images computed with the DGBP method (with no artificial viscosity or diffusion due to discretization errors) and (L) the images computed with the VOF algorithm.

  10. Quantum chemical approach for condensed-phase thermochemistry (V): Development of rigid-body type harmonic solvation model

    NASA Astrophysics Data System (ADS)

    Tarumi, Moto; Nakai, Hiromi

    2018-05-01

    This letter proposes an approximate treatment of the harmonic solvation model (HSM) assuming the solute to be a rigid body (RB-HSM). The HSM method can appropriately estimate the Gibbs free energy for condensed phases even where an ideal gas model used by standard quantum chemical programs fails. The RB-HSM method eliminates calculations for intra-molecular vibrations in order to reduce the computational costs. Numerical assessments indicated that the RB-HSM method can evaluate entropies and internal energies with the same accuracy as the HSM method but with lower calculation costs.

  11. Method for obtaining electron energy-density functions from Langmuir-probe data using a card-programmable calculator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Longhurst, G.R.

    This paper presents a method for obtaining electron energy density functions from Langmuir probe data taken in cool, dense plasmas where thin-sheath criteria apply and where magnetic effects are not severe. Noise is filtered out by using regression of orthogonal polynomials. The method requires only a programmable calculator (TI-59 or equivalent) to implement and can be used for the most general, nonequilibrium electron energy distribution plasmas. Data from a mercury ion source analyzed using this method are presented and compared with results for the same data using standard numerical techniques.
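
    The noise-filtering step can be illustrated with modern tools (the calculator-era implementation aside): the sketch below fits a low-degree Chebyshev (orthogonal polynomial) expansion to a synthetic, hypothetical probe-like curve by least squares; the smooth fit stands in for the regression described in the record:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(2)

# synthetic stand-in for a probe characteristic: smooth curve + noise
v = np.linspace(-1.0, 1.0, 400)        # bias voltage (normalized, assumed)
true_i = np.exp(v) - 0.3 * v           # hypothetical smooth current
noisy_i = true_i + 0.05 * rng.normal(size=v.size)

# least-squares regression on Chebyshev polynomials (an orthogonal basis):
# a low degree keeps the fit smooth and filters out the noise
coef = C.chebfit(v, noisy_i, deg=6)
smoothed = C.chebval(v, coef)

rms_err = np.sqrt(np.mean((smoothed - true_i) ** 2))
```

    Regression on an orthogonal basis keeps the normal equations well conditioned, which is what made the approach tractable even on a card-programmable calculator.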

  12. Side-branch resonators modelling with Green's function methods

    NASA Astrophysics Data System (ADS)

    Perrey-Debain, E.; Maréchal, R.; Ville, J. M.

    2014-09-01

    This paper deals with strategies for computing efficiently the propagation of sound waves in ducts containing passive components. In many cases of practical interest, these components are acoustic cavities which are connected to the duct. Though standard Finite Element software could be used for the numerical prediction of sound transmission through such a system, the method is known to be extremely demanding, both in terms of data preparation and computation, especially in the mid-frequency range. To alleviate this, a numerical technique that exploits the benefit of the FEM and the BEM approach has been devised. First, a set of eigenmodes is computed in the cavity to produce a numerical impedance matrix connecting the pressure and the acoustic velocity on the duct wall interface. Then an integral representation for the acoustic pressure in the main duct is used. By choosing an appropriate Green's function for the duct, the integration procedure is limited to the duct-cavity interface only. This allows an accurate computation of the scattering matrix of such an acoustic system with a numerical complexity that grows very mildly with the frequency. Typical applications involving Helmholtz and Herschel-Quincke resonators are presented.

  13. Numerical simulation of conservation laws

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; To, Wai-Ming

    1992-01-01

    A new numerical framework for solving conservation laws is being developed. This new approach differs substantially from the well established methods, i.e., finite difference, finite volume, finite element and spectral methods, in both concept and methodology. The key features of the current scheme include: (1) direct discretization of the integral forms of conservation laws, (2) treating space and time on the same footing, (3) flux conservation in space and time, and (4) unified treatment of the convection and diffusion fluxes. The model equation considered in the initial study is the standard one dimensional unsteady constant-coefficient convection-diffusion equation. In a stability study, it is shown that the principal and spurious amplification factors of the current scheme, respectively, are structurally similar to those of the leapfrog/DuFort-Frankel scheme. As a result, the current scheme has no numerical diffusion in the special case of pure convection and is unconditionally stable in the special case of pure diffusion. Assuming smooth initial data, it will be shown theoretically and numerically that, by using an easily determined optimal time step, the accuracy of the current scheme may reach a level which is several orders of magnitude higher than that of the MacCormack scheme, with virtually identical operation count.
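
    For context on the leapfrog/DuFort-Frankel comparison, here is a minimal sketch (grid and parameter values assumed for illustration) of the DuFort-Frankel scheme for pure diffusion, run at r = 2, i.e. four times the explicit FTCS stability limit, to show its unconditional stability:

```python
import numpy as np

def dufort_frankel(u0, r, n_steps):
    """DuFort-Frankel scheme for the 1D diffusion equation u_t = nu*u_xx
    on a periodic grid, with r = nu*dt/dx**2.  Three-level scheme:
        u[n+1] = ((1-2r)*u[n-1] + 2r*(u[n] shifted left + right)) / (1+2r)
    It stays stable even when r exceeds the explicit FTCS limit r <= 1/2."""
    u_old = u0.copy()
    # one forward-Euler (FTCS) step to start the three-level recursion
    u = u0 + r * (np.roll(u0, 1) - 2*u0 + np.roll(u0, -1))
    for _ in range(n_steps - 1):
        u_new = ((1 - 2*r) * u_old
                 + 2*r * (np.roll(u, 1) + np.roll(u, -1))) / (1 + 2*r)
        u_old, u = u, u_new
    return u

x = np.linspace(0.0, 1.0, 64, endpoint=False)
u0 = np.sin(2 * np.pi * x)
u_end = dufort_frankel(u0, r=2.0, n_steps=400)  # r = 2 > 1/2: FTCS would blow up
```

    The solution remains bounded at any r for pure diffusion, mirroring the unconditional-stability property the abstract attributes to the new framework's amplification factors; accuracy, however, still requires a suitably small time step.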

  14. A comparative analysis of numerical approaches to the mechanics of elastic sheets

    NASA Astrophysics Data System (ADS)

    Taylor, Michael; Davidovitch, Benny; Qiu, Zhanlong; Bertoldi, Katia

    2015-06-01

    Numerically simulating deformations in thin elastic sheets is a challenging problem in computational mechanics due to destabilizing compressive stresses that result in wrinkling. Determining the location, structure, and evolution of wrinkles in these problems has important implications in design and is an area of increasing interest in the fields of physics and engineering. In this work, several numerical approaches previously proposed to model equilibrium deformations in thin elastic sheets are compared. These include standard finite element-based static post-buckling approaches as well as a recently proposed method based on dynamic relaxation, which are applied to the problem of an annular sheet with opposed tractions where wrinkling is a key feature. Numerical solutions are compared to analytic predictions of the ground state, enabling a quantitative evaluation of the predictive power of the various methods. Results indicate that static finite element approaches produce local minima that are highly sensitive to initial imperfections, relying on a priori knowledge of the equilibrium wrinkling pattern to generate optimal results. In contrast, dynamic relaxation is much less sensitive to initial imperfections and can generate low-energy solutions for a wide variety of loading conditions without requiring knowledge of the equilibrium solution beforehand.
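
    The dynamic relaxation idea generalizes beyond sheets: equilibrium is found by integrating fictitious damped dynamics until the motion dies out, with mass, damping, and time step acting purely as numerical knobs. A minimal sketch on a toy quadratic energy (all values assumed for illustration, not the annular-sheet problem):

```python
import numpy as np

def dynamic_relaxation(grad, x0, mass=1.0, damping=0.8, dt=0.05,
                       tol=1e-8, max_steps=20000):
    """Find an equilibrium (grad E = 0) by integrating damped fictitious
    dynamics  m*a = -grad E(x) - c*v  until the motion dies out.
    Mass, damping, and time step are numerical parameters, not physics."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(max_steps):
        a = (-grad(x) - damping * v) / mass
        v = v + dt * a          # semi-implicit (symplectic) Euler update
        x = x + dt * v
        if np.linalg.norm(grad(x)) < tol:
            break
    return x

# toy energy: E(x, y) = (x - 1)^2 + 10*(y + 2)^2, minimum at (1, -2)
grad_E = lambda p: np.array([2*(p[0] - 1), 20*(p[1] + 2)])
x_eq = dynamic_relaxation(grad_E, [0.0, 0.0])
```

    Because no tangent stiffness is assembled and no Newton step is taken, the iteration is far less sensitive to the initial guess than static post-buckling solvers, which is the behavior the comparison above reports for wrinkling problems.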

  15. Numerical studies of film formation in context of steel coating

    NASA Astrophysics Data System (ADS)

    Aniszewski, Wojciech; Zaleski, Stephane; Popinet, Stephane

    2017-11-01

    In this work, we present a detailed example of a numerical study of film formation in the context of metal coating. Liquid metal is drawn from a reservoir onto a retracting solid sheet, forming a coating film characterized by phenomena such as longitudinal thickness variation (in 3D) or waves akin to those predicted by Kapitza and Kapitza (visible in two dimensions as well). While the industry-standard configuration for zinc coating is marked by the coexistence of a medium capillary number (Ca = 0.03) and a film Reynolds number above 1000, we also present parametric studies in order to establish more clearly to what degree the numerical method influences the film regimes obtained in the target configuration. The simulations have been performed using Basilisk, a grid-adapting, strongly optimized code derived from Gerris. Mesh adaptation allows for arbitrary precision in relevant regions such as the contact line or the meniscus, while a coarse grid is applied elsewhere. This adaptation strategy, as the results indicate, is the only realistic approach for a numerical method to cover the wide range of necessary scales, from the predicted film thickness (hundreds of microns) to the domain size (meters).

  16. Concept and numerical simulations of a reactive anti-fragment armour layer

    NASA Astrophysics Data System (ADS)

    Hušek, Martin; Kala, Jiří; Král, Petr; Hokeš, Filip

    2017-07-01

    The contribution describes the concept and numerical simulation of a ballistic protective layer which is able to actively resist projectiles or smaller colliding fragments flying at high speed. The principle of the layer was designed on the basis of the action/reaction system of reactive armour which is used for the protection of armoured vehicles. As the designed ballistic layer consists of steel plates simultaneously combined with explosive material - primary explosive and secondary explosive - the technique of coupling the Finite Element Method with Smoothed Particle Hydrodynamics was used for the simulations. Certain standard situations which the ballistic layer should resist were simulated. The contribution describes the principles for the successful execution of numerical simulations, their results, and an evaluation of the functionality of the ballistic layer.

  17. A numerical and experimental comparison of human head phantoms for compliance testing of mobile telephone equipment.

    PubMed

    Christ, Andreas; Chavannes, Nicolas; Nikoloski, Neviana; Gerber, Hans-Ulrich; Poković, Katja; Kuster, Niels

    2005-02-01

    A new human head phantom has been proposed by CENELEC/IEEE, based on a large scale anthropometric survey. This phantom is compared to a homogeneous Generic Head Phantom and three high resolution anatomical head models with respect to specific absorption rate (SAR) assessment. The head phantoms are exposed to the radiation of a generic mobile phone (GMP) with different antenna types and a commercial mobile phone. The phones are placed in the standardized testing positions and operate at 900 and 1800 MHz. The average peak SAR is evaluated using both experimental (DASY3 near field scanner) and numerical (FDTD simulations) techniques. The numerical and experimental results compare well and confirm that the applied SAR assessment methods constitute a conservative approach.

  18. Atmospheric turbulence profiling with unknown power spectral density

    NASA Astrophysics Data System (ADS)

    Helin, Tapio; Kindermann, Stefan; Lehtonen, Jonatan; Ramlau, Ronny

    2018-04-01

    Adaptive optics (AO) is a technology used in modern ground-based optical telescopes to compensate for the wavefront distortions caused by atmospheric turbulence. One method that allows information about the atmosphere to be retrieved from telescope data is so-called SLODAR, where the atmospheric turbulence profile is estimated based on correlation data of Shack-Hartmann wavefront measurements. This approach relies on a layered Kolmogorov turbulence model. In this article, we propose a novel extension of the SLODAR concept by including a general non-Kolmogorov turbulence layer close to the ground with an unknown power spectral density. We prove that the problem of jointly estimating the turbulence profile above the ground and the unknown power spectral density at the ground is ill-posed, and we propose three numerical reconstruction methods. We demonstrate by numerical simulations that our methods lead to substantial improvements in the turbulence profile reconstruction compared to the standard SLODAR-type approach. Our methods can also accurately locate local perturbations in non-Kolmogorov power spectral densities.

  19. Numerical simulations of incompressible laminar flows using viscous-inviscid interaction procedures

    NASA Astrophysics Data System (ADS)

    Shatalov, Alexander V.

    The present method is based on Helmholtz velocity decomposition where velocity is written as a sum of irrotational (gradient of a potential) and rotational (correction due to vorticity) components. Substitution of the velocity decomposition into the continuity equation yields an equation for the potential, while substitution into the momentum equations yields equations for the velocity corrections. A continuation approach is used to relate the pressure to the gradient of the potential through a modified Bernoulli's law, which allows the elimination of the pressure variable from the momentum equations. The present work considers steady and unsteady two-dimensional incompressible flows over an infinite cylinder and NACA 0012 airfoil shape. The numerical results are compared against standard methods (stream function-vorticity and SMAC methods) and data available in literature. The results demonstrate that the proposed formulation leads to a good approximation with some possible benefits compared to the available formulations. The method is not restricted to two-dimensional flows and can be used for viscous-inviscid domain decomposition calculations.

  20. Efficient implicit LES method for the simulation of turbulent cavitating flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egerer, Christian P., E-mail: christian.egerer@aer.mw.tum.de; Schmidt, Steffen J.; Hickel, Stefan

    2016-07-01

    We present a numerical method for efficient large-eddy simulation of compressible liquid flows with cavitation based on an implicit subgrid-scale model. Phase change and subgrid-scale interface structures are modeled by a homogeneous mixture model that assumes local thermodynamic equilibrium. Unlike previous approaches, emphasis is placed on operating on a small stencil (at most four cells). The truncation error of the discretization is designed to function as a physically consistent subgrid-scale model for turbulence. We formulate a sensor functional that detects shock waves or pseudo-phase boundaries within the homogeneous mixture model for localizing numerical dissipation. In smooth regions of the flow field, a formally non-dissipative central discretization scheme is used in combination with a regularization term to model the effect of unresolved subgrid scales. The new method is validated by computing standard single- and two-phase test cases. Comparison of results for a turbulent cavitating mixing layer obtained with the new method demonstrates its suitability for the target applications.

  1. A particle finite element method for machining simulations

    NASA Astrophysics Data System (ADS)

    Sabel, Matthias; Sator, Christian; Müller, Ralf

    2014-07-01

    The particle finite element method (PFEM) appears to be a convenient technique for machining simulations, since the geometry and topology of the problem can undergo severe changes. In this work, a short outline of the PFEM algorithm is given, which is followed by a detailed description of the involved operations. The α-shape method, which is used to track the topology, is explained and tested on a simple example. The kinematics and a suitable finite element formulation are also introduced. To validate the method, simple settings without topological changes are considered and compared to the standard finite element method for large deformations. To examine the performance of the method when dealing with separating material, a tensile loading is applied to a notched plate. This investigation includes a numerical analysis of the different meshing parameters, and the numerical convergence is studied. With regard to the cutting simulation, it is found that only a sufficiently large number of particles (and thus a rather fine finite element discretisation) leads to converged results for process parameters such as the cutting force.

  2. Time-variant random interval natural frequency analysis of structures

    NASA Astrophysics Data System (ADS)

    Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin

    2018-02-01

    This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random interval structural natural frequency problem. In the proposed approach, a random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the superiority of both methods in such a way that the computational cost is dramatically reduced. The presented method is thus capable of investigating the day-to-day time-variant natural frequency of structures accurately and efficiently under concrete intrinsic creep effects with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through the embedded optimization strategy within the analysis procedure. Three particularly motivated numerical examples, with a progressive relationship in terms of both structure type and uncertainty variables, are demonstrated to justify the computational applicability, accuracy and efficiency of the proposed method.
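
    The random perturbation ingredient can be illustrated on the simplest possible case (a hypothetical single-degree-of-freedom oscillator, not the paper's structural models): first-order perturbation about the mean stiffness gives the mean and standard deviation of the natural frequency, which a Monte Carlo reference confirms:

```python
import numpy as np

# single-DOF oscillator: omega = sqrt(k/m); stiffness k is random
m = 2.0
k_mean, k_std = 100.0, 5.0      # assumed Gaussian stiffness statistics

# first-order random perturbation about the mean stiffness:
#   omega(k) ~ omega(k_mean) + d(omega)/dk * (k - k_mean)
omega_mean = np.sqrt(k_mean / m)
domega_dk = 1.0 / (2.0 * np.sqrt(k_mean * m))
omega_std = abs(domega_dk) * k_std

# Monte Carlo reference for the same statistics
rng = np.random.default_rng(3)
k_samples = rng.normal(k_mean, k_std, size=200_000)
omega_mc = np.sqrt(np.clip(k_samples, 1e-9, None) / m)
```

    The paper layers a Chebyshev surrogate over exactly this kind of perturbation estimate so that interval-valued inputs can be swept without repeating the probabilistic analysis at every interval point.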

  3. Solution of nonlinear time-dependent PDEs through componentwise approximation of matrix functions

    NASA Astrophysics Data System (ADS)

    Cibotarica, Alexandru; Lambers, James V.; Palchak, Elisabeth M.

    2016-09-01

    Exponential propagation iterative (EPI) methods provide an efficient approach to the solution of large stiff systems of ODEs, compared to standard integrators. However, the bulk of the computational effort in these methods is due to products of matrix functions and vectors, which can become very costly at high resolution due to an increase in the number of Krylov projection steps needed to maintain accuracy. In this paper, it is proposed to modify EPI methods by using Krylov subspace spectral (KSS) methods, instead of standard Krylov projection methods, to compute products of matrix functions and vectors. Numerical experiments demonstrate that this modification causes the number of Krylov projection steps to become bounded independently of the grid size, thus dramatically improving efficiency and scalability. As a result, for each test problem featured, the growth in computation time as the total number of grid points increases is just below linear, whereas other methods achieve this only on selected test problems or not at all.
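
    The kernel operation being accelerated, a matrix function applied to a vector, is classically evaluated with a Krylov projection. The hedged sketch below (problem size and operator scaling assumed) uses Arnoldi plus a small dense matrix exponential, i.e. the "standard Krylov projection" baseline that the paper replaces with KSS methods:

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_expmv(A, v, m):
    """Approximate exp(A) @ v from an m-dimensional Krylov subspace:
    build V, H with Arnoldi, then exp(A) v ~ ||v|| * V @ expm(H) @ e1."""
    n = v.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):           # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:          # happy breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (expm(H[:m, :m]) @ e1)

# stiff 1D Laplacian, as in a method-of-lines diffusion problem (assumed)
n = 200
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
A *= (n + 1) ** 2 * 1e-4                 # scaled diffusion operator
rng = np.random.default_rng(4)
v = rng.normal(size=n)
approx = arnoldi_expmv(A, v, m=30)
exact = expm(A) @ v
err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
```

    The cost driver the abstract refers to is the subspace dimension m, which for this baseline grows with the stiffness of A as the grid is refined; KSS methods aim to keep it bounded.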

  4. Finite element techniques in computational time series analysis of turbulent flows

    NASA Astrophysics Data System (ADS)

    Horenko, I.

    2009-04-01

    In recent years there has been a considerable increase of interest in the mathematical modeling and analysis of complex systems that undergo transitions between several phases or regimes. Such systems can be found, e.g., in weather forecasting (transitions between weather conditions), climate research (ice ages and warm ages), computational drug design (conformational transitions) and in econometrics (e.g., transitions between different phases of the market). In all cases, the accumulation of sufficiently detailed time series has led to the formation of huge databases, containing enormous but still undiscovered treasures of information. However, the extraction of essential dynamics and identification of the phases is usually hindered by the multidimensional nature of the signal, i.e., the information is "hidden" in the time series. The standard filtering approaches (e.g., wavelet-based spectral methods) have in general unfeasible numerical complexity in high dimensions; other standard methods (e.g., Kalman filters, MVAR, ARCH/GARCH, etc.) impose strong assumptions about the type of the underlying dynamics. An approach based on optimization of a specially constructed regularized functional (describing the quality of the data description in terms of a certain number of specified models) will be introduced. Based on this approach, several new adaptive mathematical methods for simultaneous EOF/SSA-like data-based dimension reduction and identification of hidden phases in high-dimensional time series will be presented. The methods exploit the topological structure of the analysed data and do not impose severe assumptions on the underlying dynamics. Special emphasis will be placed on the mathematical assumptions and the numerical cost of the constructed methods. The application of the presented methods will first be demonstrated on a toy example, and the results will be compared with those obtained by standard approaches.
The importance of accounting for the mathematical assumptions used in the analysis will be pointed out in this example. Finally, applications to the analysis of meteorological and climate data will be presented.

  5. Spurious Behavior of Shock-Capturing Methods: Problems Containing Stiff Source Terms and Discontinuities

    NASA Technical Reports Server (NTRS)

    Yee, Helen M. C.; Kotov, D. V.; Wang, Wei; Shu, Chi-Wang

    2013-01-01

The goal of this paper is to relate the numerical dissipation inherent in high order shock-capturing schemes to the onset of wrong propagation speeds of discontinuities. For pointwise evaluation of the source term, previous studies indicated that the phenomenon of wrong propagation speed of discontinuities is connected with the smearing of the discontinuity caused by the discretization of the advection term. The smearing introduces a nonequilibrium state into the calculation. Thus, as soon as a nonequilibrium value is introduced in this manner, the source term turns on and immediately restores equilibrium, while at the same time shifting the discontinuity to a cell boundary. The present study shows that the degree of wrong propagation speed of discontinuities is highly dependent on the accuracy of the numerical method. The manner in which the smearing of discontinuities is contained by the numerical method and the overall amount of numerical dissipation being employed play major roles. Moreover, employing finite time steps and grid spacings that are below the standard Courant-Friedrichs-Lewy (CFL) limit in shock-capturing methods for compressible Euler and Navier-Stokes equations containing stiff reacting source terms and discontinuities reveals surprising counter-intuitive results. Unlike non-reacting flows, for stiff reactions with discontinuities, employing a time step and grid spacing that are below the CFL limit (based on the homogeneous or non-reacting part of the governing equations) does not guarantee a correct solution of the chosen governing equations.
Instead, depending on the numerical method, time step and grid spacing, the numerical simulation may lead to (a) the correct solution (within the truncation error of the scheme), (b) a divergent solution, (c) a wrong propagation speed of discontinuities solution or (d) other spurious solutions that are solutions of the discretized counterparts but are not solutions of the governing equations. The present investigation for three very different stiff system cases confirms some of the findings of Lafon & Yee (1996) and LeVeque & Yee (1990) for a model scalar PDE. The findings might shed some light on the reported difficulties in numerical combustion and problems with stiff nonlinear (homogeneous) source terms and discontinuities in general.
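The wrong-speed pathology can be reproduced in a few lines. The sketch below is an illustrative construction, not one of the paper's test cases: a step profile is advected with first-order upwind, and the stiff source is applied in its infinite-stiffness limit as a fractional-step projection onto the equilibrium states 0 and 1. The captured front then moves one grid cell per time step, i.e. at speed Δx/Δt instead of the exact speed 1.

```python
import numpy as np

def stiff_upwind_demo(nx=100, cfl=0.6, nsteps=40):
    """Advect a step with first-order upwind, then apply a stiff source as a
    projection onto the equilibria 0 and 1 (infinite-stiffness limit)."""
    dx = 1.0 / nx
    dt = cfl * dx
    x = (np.arange(nx) + 0.5) * dx
    u = np.where(x < 0.25, 1.0, 0.0)            # step initially at x = 0.25
    for _ in range(nsteps):
        left = np.concatenate(([1.0], u[:-1]))  # inflow u = 1 at x = 0
        u = u - cfl * (u - left)                # upwind step, advection speed 1
        u = np.where(u > 0.5, 1.0, 0.0)         # stiff source step (projection)
    return x, u, dt

x, u, dt = stiff_upwind_demo()
numerical_front = x[np.argmax(u < 0.5)]         # first cell behind the front
exact_front = 0.25 + 40 * dt                    # exact propagation speed is 1
```

With CFL = 0.6, the smeared value just ahead of the front exceeds the 0.5 threshold every step, so the front advances exactly one cell per step; with CFL below 0.5 the same scheme freezes the front entirely. Both are wrong speeds, and neither improves under time-step refinement alone.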

  6. COHERENT NETWORK ANALYSIS FOR CONTINUOUS GRAVITATIONAL WAVE SIGNALS IN A PULSAR TIMING ARRAY: PULSAR PHASES AS EXTRINSIC PARAMETERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yan; Mohanty, Soumya D.; Jenet, Fredrick A., E-mail: ywang12@hust.edu.cn

    2015-12-20

Supermassive black hole binaries are one of the primary targets of gravitational wave (GW) searches using pulsar timing arrays (PTAs). GW signals from such systems are well represented by parameterized models, allowing the standard Generalized Likelihood Ratio Test (GLRT) to be used for their detection and estimation. However, there is a dichotomy in how the GLRT can be implemented for PTAs: there are two possible ways in which one can split the set of signal parameters for semi-analytical and numerical extremization. The straightforward extension of the method used for continuous signals in ground-based GW searches, where the so-called pulsar phase parameters are maximized numerically, was addressed in an earlier paper. In this paper, we report the first study of the performance of the second approach where the pulsar phases are maximized semi-analytically. This approach is scalable since the number of parameters left over for numerical optimization does not depend on the size of the PTA. Our results show that for the same array size (9 pulsars), the new method performs somewhat worse in parameter estimation, but not in detection, than the previous method where the pulsar phases were maximized numerically. The origin of the performance discrepancy is likely to be in the ill-posedness that is intrinsic to any network analysis method. However, the scalability of the new method allows the ill-posedness to be mitigated by simply adding more pulsars to the array. This is shown explicitly by taking a larger array of pulsars.

  7. Scan-To Output Validation: Towards a Standardized Geometric Quality Assessment of Building Information Models Based on Point Clouds

    NASA Astrophysics Data System (ADS)

    Bonduel, M.; Bassier, M.; Vergauwen, M.; Pauwels, P.; Klein, R.

    2017-11-01

    The use of Building Information Modeling (BIM) for existing buildings based on point clouds is increasing. Standardized geometric quality assessment of the BIMs is needed to make them more reliable and thus reusable for future users. First, available literature on the subject is studied. Next, an initial proposal for a standardized geometric quality assessment is presented. Finally, this method is tested and evaluated with a case study. The number of specifications on BIM relating to existing buildings is limited. The Levels of Accuracy (LOA) specification of the USIBD provides definitions and suggestions regarding geometric model accuracy, but lacks a standardized assessment method. A deviation analysis is found to be dependent on (1) the used mathematical model, (2) the density of the point clouds and (3) the order of comparison. Results of the analysis can be graphical and numerical. An analysis on macro (building) and micro (BIM object) scale is necessary. On macro scale, the complete model is compared to the original point cloud and vice versa to get an overview of the general model quality. The graphical results show occluded zones and non-modeled objects respectively. Colored point clouds are derived from this analysis and integrated in the BIM. On micro scale, the relevant surface parts are extracted per BIM object and compared to the complete point cloud. Occluded zones are extracted based on a maximum deviation. What remains is classified according to the LOA specification. The numerical results are integrated in the BIM with the use of object parameters.
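A scan-to-model deviation analysis of the kind described can be sketched with a brute-force nearest-neighbour distance followed by classification into tolerance bands. The band edges below are hypothetical placeholders, not the USIBD LOA specification's actual values, and the flat-plate "model" is invented for illustration:

```python
import numpy as np

def cloud_to_model_distances(scan_pts, model_pts):
    # Nearest-neighbour distance from each scan point to the sampled model
    # surface (brute force; production code would use a k-d tree).
    d = np.linalg.norm(scan_pts[:, None, :] - model_pts[None, :, :], axis=2)
    return d.min(axis=1)

def classify_deviations(dist, bands=(0.005, 0.015, 0.05)):
    # Band edges (metres) are made up for illustration; the real LOA
    # specification defines its own accuracy ranges.
    return np.digitize(dist, bands)

# Toy case: a 1 m x 1 m plate sampled on a grid, scanned 8 mm too high.
g = np.linspace(0.0, 1.0, 11)
X, Y = np.meshgrid(g, g)
model = np.column_stack([X.ravel(), Y.ravel(), np.zeros(X.size)])
scan = model + np.array([0.0, 0.0, 0.008])
dev = cloud_to_model_distances(scan, model)
labels = classify_deviations(dev)
```

Swapping the roles of `scan_pts` and `model_pts` gives the reverse order of comparison mentioned in the abstract, which highlights non-modeled objects rather than occluded zones.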

  8. Simulating the electrohydrodynamics of a viscous droplet

    NASA Astrophysics Data System (ADS)

    Theillard, Maxime; Saintillan, David

    2016-11-01

We present a novel numerical approach for the simulation of a viscous drop placed in an electric field in two and three spatial dimensions. Our method is constructed as a stable projection method on Quad/Octree grids. Using a modified pressure correction, we are able to alleviate the standard time step restriction incurred by capillary forces. In weak electric fields, our results match the predictions of the Taylor-Melcher leaky dielectric model remarkably well. In strong electric fields, the so-called Quincke rotation is correctly reproduced.

  9. Research on numerical method for multiple pollution source discharge and optimal reduction program

    NASA Astrophysics Data System (ADS)

    Li, Mingchang; Dai, Mingxin; Zhou, Bin; Zou, Bin

    2018-03-01

In this paper, an optimal method for designing a pollutant reduction program is proposed based on a nonlinear optimization algorithm, the genetic algorithm. The four main rivers in Jiangsu province, China, are selected with the aim of reducing environmental pollution in the nearshore district. Dissolved inorganic nitrogen (DIN) is studied as the only pollutant. The environmental status and the water-quality standard in the nearshore district are used to determine the required reduction of the discharge from the multiple river pollution sources. The results of the reduction program form a basis for marine environmental management.
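A genetic-algorithm search of this kind can be sketched as follows. The response matrix, excess concentrations and unit costs below are invented for illustration and are not the study's data; a penalty term enforces the water-quality standard at each monitoring site while the cost of the load reductions is minimized:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 rivers, response matrix R maps DIN load reductions
# (normalized to [0, 1]) to concentration decreases at 3 monitoring sites.
R = np.array([[0.80, 0.10, 0.05, 0.02],
              [0.20, 0.70, 0.10, 0.05],
              [0.05, 0.10, 0.60, 0.30]])
excess = np.array([0.5, 0.4, 0.3])      # concentration above standard per site
cost = np.array([1.0, 1.5, 0.8, 1.2])   # cost per unit load reduction

def fitness(x):
    # Penalize any site still above the standard; otherwise minimize cost.
    shortfall = np.maximum(excess - R @ x, 0.0)
    return cost @ x + 100.0 * shortfall.sum()

pop = rng.uniform(0.0, 1.0, size=(60, 4))
for gen in range(200):
    f = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(f)[:20]]                  # truncation selection
    children = parents[rng.integers(0, 20, 60)] + rng.normal(0, 0.05, (60, 4))
    pop = np.clip(children, 0.0, 1.0)
    pop[0] = parents[0]                                # elitism

best = pop[0]
```

Elitism makes the best fitness monotonically non-increasing across generations, so the loop always returns a solution at least as good as the best random initial guess.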

  10. Numerical solution of a spatio-temporal gender-structured model for hantavirus infection in rodents.

    PubMed

    Bürger, Raimund; Chowell, Gerardo; Gavilán, Elvis; Mulet, Pep; Villada, Luis M

    2018-02-01

    In this article we describe the transmission dynamics of hantavirus in rodents using a spatio-temporal susceptible-exposed-infective-recovered (SEIR) compartmental model that distinguishes between male and female subpopulations [L.J.S. Allen, R.K. McCormack and C.B. Jonsson, Bull. Math. Biol. 68 (2006), 511--524]. Both subpopulations are assumed to differ in their movement with respect to local variations in the densities of their own and the opposite gender group. Three alternative models for the movement of the male individuals are examined. In some cases the movement is not only directed by the gradient of a density (as in the standard diffusive case), but also by a non-local convolution of density values as proposed, in another context, in [R.M. Colombo and E. Rossi, Commun. Math. Sci., 13 (2015), 369--400]. An efficient numerical method for the resulting convection-diffusion-reaction system of partial differential equations is proposed. This method involves techniques of weighted essentially non-oscillatory (WENO) reconstructions in combination with implicit-explicit Runge-Kutta (IMEX-RK) methods for time stepping. The numerical results demonstrate significant differences in the spatio-temporal behavior predicted by the different models, which suggest future research directions.

  11. Semiclassical evaluation of quantum fidelity

    NASA Astrophysics Data System (ADS)

    Vanicek, Jiri

    2004-03-01

We present a numerically feasible semiclassical method to evaluate quantum fidelity (Loschmidt echo) in a classically chaotic system. It was thought that such evaluation would be intractable, but instead we show that a uniform semiclassical expression not only is tractable but gives remarkably accurate numerical results for the standard map in both the Fermi-golden-rule and Lyapunov regimes. Because it allows a Monte Carlo evaluation, this uniform expression is accurate at times when there are 10^70 semiclassical contributions. Remarkably, the method also explicitly contains the ``building blocks'' of analytical theories of recent literature, and thus permits a direct test of approximations made by other authors in these regimes, rather than an a posteriori comparison with numerical results. We explain in more detail the extended validity of the classical perturbation approximation and thus provide a ``defense'' of the linear response theory from the famous Van Kampen objection. We point out the potential use of our uniform expression in other areas because it gives a most direct link between the quantum Feynman propagator based on the path integral and the semiclassical Van Vleck propagator based on the sum over classical trajectories. Finally, we test the applicability of our method in integrable and mixed systems.
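The standard map used as the test system above is simple to iterate; the sketch below implements the Chirikov standard map (the fidelity calculation itself is not reproduced here):

```python
import numpy as np

def standard_map(theta, p, K, nsteps):
    # Chirikov standard map: p' = p + K sin(theta), theta' = theta + p' (mod 2*pi).
    # The map is area-preserving and strongly chaotic for large kicking strength K.
    for _ in range(nsteps):
        p = p + K * np.sin(theta)
        theta = (theta + p) % (2.0 * np.pi)
    return theta, p
```

Fidelity studies of this kind compare evolution under two slightly different kicking strengths, K and K + ε; for large K, nearby initial conditions separate exponentially, which is the regime probed above.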

  12. Analysis of High Order Difference Methods for Multiscale Complex Compressible Flows

    NASA Technical Reports Server (NTRS)

    Sjoegreen, Bjoern; Yee, H. C.; Tang, Harry (Technical Monitor)

    2002-01-01

Accurate numerical simulations of complex multiscale compressible viscous flows, especially high speed turbulence combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes with incremental studies was initiated. Here we further refine the analysis on, and improve the understanding of, the adaptive numerical dissipation control strategy. Basically, the development of these schemes focuses on high order nondissipative schemes and takes advantage of the progress that has been made over the last 30 years in numerical methods for conservation laws, such as techniques for imposing boundary conditions, techniques for stability at shock waves, and techniques for stable and accurate long-time integration. We concentrate on high order centered spatial discretizations and a fourth-order Runge-Kutta temporal discretization as the base scheme. Near the boundaries, the base scheme has stable boundary difference operators. To further enhance stability, the split form of the inviscid flux derivatives is frequently used for smooth flow problems. To enhance nonlinear stability, linear high order numerical dissipations are employed away from discontinuities, and nonlinear filters are employed after each time step in order to suppress spurious oscillations near discontinuities and to minimize the smearing of turbulent fluctuations. Although these schemes are built from many components, each of which is well-known, it is not entirely obvious how the different components can best be connected. For example, the nonlinear filter could instead have been built into the spatial discretization, so that it would have been activated at each stage in the Runge-Kutta time stepping.
We could think of a mechanism that activates the split form of the equations only in some parts of the domain. Another issue is how to define good sensors for determining in which parts of the computational domain a certain feature should be filtered by the appropriate numerical dissipation. For the present study we employ a wavelet technique as the sensor. Here, the method is briefly described with selected numerical experiments.

  13. Geometrically derived difference formulae for the numerical integration of trajectory problems

    NASA Technical Reports Server (NTRS)

    Mcleod, R. J. Y.; Sanz-Serna, J. M.

    1981-01-01

The term 'trajectory problem' is taken to include problems that can arise, for instance, in connection with contour plotting, in the application of continuation methods, or during phase-plane analysis. Geometrical techniques are used to construct difference methods for these problems, producing in turn explicit and implicit circularly exact formulae. Based on these formulae, a predictor-corrector method is derived which, when compared with a closely related standard method, shows improved performance. It is found that the standard method produces spurious limit cycles, and this behavior is partly analyzed. Finally, a simple variable-step algorithm is constructed and tested.

  14. Optimal control design of turbo spin‐echo sequences with applications to parallel‐transmit systems

    PubMed Central

    Hoogduin, Hans; Hajnal, Joseph V.; van den Berg, Cornelis A. T.; Luijten, Peter R.; Malik, Shaihan J.

    2016-01-01

Purpose: The design of turbo spin‐echo sequences is modeled as a dynamic optimization problem which includes the case of inhomogeneous transmit radiofrequency fields. This problem is efficiently solved by optimal control techniques, making it possible to design patient‐specific sequences online. Theory and Methods: The extended phase graph formalism is employed to model the signal evolution. The design problem is cast as an optimal control problem and an efficient numerical procedure for its solution is given. The numerical and experimental tests address standard multiecho sequences and pTx configurations. Results: Standard, analytically derived flip angle trains are recovered by the numerical optimal control approach. New sequences are designed where constraints on radiofrequency total and peak power are included. In the case of parallel transmit application, the method is able to calculate the optimal echo train for two‐dimensional and three‐dimensional turbo spin echo sequences on the order of 10 s with a single central processing unit (CPU) implementation. The image contrast is maintained through the whole field of view despite inhomogeneities of the radiofrequency fields. Conclusion: The optimal control design sheds new light on the sequence design process and makes it possible to design sequences in an online, patient‐specific fashion. Magn Reson Med 77:361–373, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine PMID:26800383

  15. Numerical Method for Darcy Flow Derived Using Discrete Exterior Calculus

    NASA Astrophysics Data System (ADS)

    Hirani, A. N.; Nakshatrala, K. B.; Chaudhry, J. H.

    2015-05-01

We derive a numerical method for Darcy flow, and also for Poisson's equation in mixed (first order) form, based on discrete exterior calculus (DEC). Exterior calculus is a generalization of vector calculus to smooth manifolds and DEC is one of its discretizations on simplicial complexes such as triangle and tetrahedral meshes. DEC is a coordinate invariant discretization, in that it does not depend on the embedding of the simplices or the whole mesh. We start by rewriting the governing equations of Darcy flow using the language of exterior calculus. This yields a formulation in terms of flux differential form and pressure. The numerical method is then derived by using the framework provided by DEC for discretizing differential forms and operators that act on forms. We also develop a discretization for a spatially dependent Hodge star that varies with the permeability of the medium. This also allows us to address discontinuous permeability. The matrix representation for our discrete non-homogeneous Hodge star is diagonal, with positive diagonal entries. The resulting linear system of equations for flux and pressure is of saddle type, with a diagonal matrix as the top left block. The performance of the proposed numerical method is illustrated on many standard test problems. These include patch tests in two and three dimensions, comparison with analytically known solutions in two dimensions, a layered medium with alternating permeability values, and a test with a change in permeability along the flow direction. We also show numerical evidence of convergence of the flux and the pressure. A convergence experiment is included for Darcy flow on a surface. A short introduction to the relevant parts of smooth and discrete exterior calculus is included in this article. We also include a discussion of the boundary condition in terms of exterior calculus.

  16. Response of a Rotating Propeller to Aerodynamic Excitation

    NASA Technical Reports Server (NTRS)

    Arnoldi, Walter E.

    1949-01-01

The flexural vibration of a rotating propeller blade with clamped shank is analyzed with the object of presenting, in matrix form, equations for the elastic bending moments in forced vibration resulting from aerodynamic forces applied at a fixed multiple of rotational speed. Matrix equations are also derived which define the critical speeds and mode shapes for any excitation order and the relation between critical speed and blade angle. Reference is given to standard works on the numerical solution of matrix equations of the forms derived. The use of a segmented blade as an approximation to a continuous blade provides a simple means for obtaining the matrix solution from the integral equation of equilibrium, so that, in the numerical application of the method presented, the several matrix arrays of the basic physical characteristics of the propeller blade are of simple form, and their simplicity is preserved until, with the solution in sight, numerical manipulations well-known in matrix algebra yield the desired critical speeds and mode shapes from which the vibration at any operating condition may be synthesized. A close correspondence between the familiar Stodola method and the matrix method is pointed out, indicating that any features of novelty are characteristic not of the analytical procedure but only of the abbreviation, condensation, and efficient organization of the numerical procedure made possible by the use of classical matrix theory.

  17. A symplectic integration method for elastic filaments

    NASA Astrophysics Data System (ADS)

    Ladd, Tony; Misra, Gaurav

    2009-03-01

    Elastic rods are a ubiquitous coarse-grained model of semi-flexible biopolymers such as DNA, actin, and microtubules. The Worm-Like Chain (WLC) is the standard numerical model for semi-flexible polymers, but it is only a linearized approximation to the dynamics of an elastic rod, valid for small deflections; typically the torsional motion is neglected as well. In the standard finite-difference and finite-element formulations of an elastic rod, the continuum equations of motion are discretized in space and time, but it is then difficult to ensure that the Hamiltonian structure of the exact equations is preserved. Here we discretize the Hamiltonian itself, expressed as a line integral over the contour of the filament. This discrete representation of the continuum filament can then be integrated by one of the explicit symplectic integrators frequently used in molecular dynamics. The model systematically approximates the continuum partial differential equations, but has the same level of computational complexity as molecular dynamics and is constraint free. Numerical tests show that the algorithm is much more stable than a finite-difference formulation and can be used for high aspect ratio filaments, such as actin. We present numerical results for the deterministic and stochastic motion of single filaments.
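A generic version of this idea can be sketched with velocity-Verlet, a standard explicit symplectic integrator, applied to a bead-spring discretization of the stretching energy only (bending and twist, which the authors' filament model includes, are omitted here; all parameter values are illustrative):

```python
import numpy as np

def spring_force(x, k=100.0, a=1.0):
    # Stretching forces of a bead-spring chain with rest length a.
    d = np.diff(x, axis=0)                        # bond vectors
    L = np.linalg.norm(d, axis=1, keepdims=True)  # bond lengths
    t = k * (L - a) * d / L                       # tension along each bond
    f = np.zeros_like(x)
    f[:-1] += t                                   # pull bead i toward i+1
    f[1:] -= t                                    # equal and opposite reaction
    return f

def verlet(x, v, dt, nsteps, m=1.0):
    # Velocity-Verlet: explicit, second-order, symplectic.
    f = spring_force(x)
    for _ in range(nsteps):
        v = v + 0.5 * dt * f / m
        x = x + dt * v
        f = spring_force(x)
        v = v + 0.5 * dt * f / m
    return x, v

def energy(x, v, k=100.0, a=1.0, m=1.0):
    L = np.linalg.norm(np.diff(x, axis=0), axis=1)
    return 0.5 * m * (v ** 2).sum() + 0.5 * k * ((L - a) ** 2).sum()

# Three beads in 2D with both bonds slightly stretched.
x0 = np.array([[0.0, 0.0], [1.05, 0.0], [2.1, 0.0]])
v0 = np.zeros_like(x0)
x1, v1 = verlet(x0, v0, dt=1e-3, nsteps=2000)
```

The hallmark of the symplectic scheme is that the energy error stays bounded over long runs instead of drifting, which is what makes such integrators attractive for the stochastic filament dynamics described above.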

  18. On Spurious Numerics in Solving Reactive Equations

    NASA Technical Reports Server (NTRS)

    Kotov, D. V; Yee, H. C.; Wang, W.; Shu, C.-W.

    2013-01-01

The objective of this study is to gain a deeper understanding of the behavior of high order shock-capturing schemes for problems with stiff source terms and discontinuities, and of the corresponding numerical prediction strategies. The studies by Yee et al. (2012) and Wang et al. (2012) focus only on solving the reactive system by the fractional step method using the Strang splitting (Strang 1968). It is a common practice by developers in computational physics and engineering simulations to include a cut-off safeguard if densities are outside the permissible range. Here we compare the spurious behavior of the same schemes when solving the fully coupled reactive system without the Strang splitting vs. using the Strang splitting. The comparison between the two procedures and the effects of a cut-off safeguard are the focus of the present study. The comparison of the performance of these schemes is largely based on the degree to which each method captures the correct location of the reaction front for coarse grids. Here "coarse grids" means the standard mesh density requirement for accurate simulation of typical non-reacting flows of similar problem setup. It is remarked that, in order to resolve the sharp reaction front, local refinement beyond the standard mesh density is still needed.
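The Strang-split update compared above can be sketched as follows, with first-order sub-steps for brevity (the schemes studied in the paper are high-order, and the cubic reaction term here is a generic model, not the paper's chemistry):

```python
import numpy as np

def advection_rhs(u, dx):
    # First-order upwind discretization of -u_x (unit speed, periodic domain).
    return -(u - np.roll(u, 1)) / dx

def source(u, mu):
    # Generic model reaction with equilibria at u = 0 and u = 1;
    # large mu makes the system stiff.
    return -mu * u * (u - 1.0) * (u - 0.5)

def strang_step(u, dt, dx, mu):
    # Strang splitting: half reaction step, full advection step,
    # half reaction step (forward Euler sub-steps for brevity).
    u = u + 0.5 * dt * source(u, mu)
    u = u + dt * advection_rhs(u, dx)
    u = u + 0.5 * dt * source(u, mu)
    return u
```

Solving the fully coupled system instead would advance `advection_rhs(u, dx) + source(u, mu)` in a single stage; the point of the study above is that the two procedures can produce different spurious behavior when the source is stiff.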

  19. Computational study: The influence of omni-directional guide vane on the flow pattern characteristic around Savonius wind turbine

    NASA Astrophysics Data System (ADS)

    Wicaksono, Yoga Arob; Tjahjana, D. D. D. P.

    2017-01-01

The standard Savonius wind turbine has low performance, i.e., a low coefficient of power and a low coefficient of torque, compared with other types of wind turbine. This occurs because the wind stream causes negative pressure at the returning rotor. To solve this problem, a standard Savonius rotor combined with an Omni-Directional Guide Vane (ODGV) is proposed. The aim of this research is to study the influence of the ODGV on the flow pattern characteristics around the Savonius wind turbine. The numerical model is based on the Navier-Stokes equations with the standard k-ɛ turbulence model. The equations are solved by a finite volume discretization method using a commercial computational fluid dynamics solver, SolidWorks Flow Simulation. Simulations were performed at different wind directions (0°, 30° and 60°) at a 4 m/s wind speed. The numerical method was validated against past experimental data. The results indicate that the ODGV is able to augment the air flow to the advancing rotor and decrease the negative pressure upstream of the returning rotor, compared to the bare Savonius wind turbine.

  20. The evolution of hyperboloidal data with the dual foliation formalism: mathematical analysis and wave equation tests

    NASA Astrophysics Data System (ADS)

    Hilditch, David; Harms, Enno; Bugner, Marcus; Rüter, Hannes; Brügmann, Bernd

    2018-03-01

    A long-standing problem in numerical relativity is the satisfactory treatment of future null-infinity. We propose an approach for the evolution of hyperboloidal initial data in which the outer boundary of the computational domain is placed at infinity. The main idea is to apply the ‘dual foliation’ formalism in combination with hyperboloidal coordinates and the generalized harmonic gauge formulation. The strength of the present approach is that, following the ideas of Zenginoğlu, a hyperboloidal layer can be naturally attached to a central region using standard coordinates of numerical relativity applications. Employing a generalization of the standard hyperboloidal slices, developed by Calabrese et al, we find that all formally singular terms take a trivial limit as we head to null-infinity. A byproduct is a numerical approach for hyperboloidal evolution of nonlinear wave equations violating the null-condition. The height-function method, used often for fixed background spacetimes, is generalized in such a way that the slices can be dynamically ‘waggled’ to maintain the desired outgoing coordinate lightspeed precisely. This is achieved by dynamically solving the eikonal equation. As a first numerical test of the new approach we solve the 3D flat space scalar wave equation. The simulations, performed with the pseudospectral bamps code, show that outgoing waves are cleanly absorbed at null-infinity and that errors converge away rapidly as resolution is increased.

  1. F--Ray: A new algorithm for efficient transport of ionizing radiation

    NASA Astrophysics Data System (ADS)

    Mao, Yi; Zhang, J.; Wandelt, B. D.; Shapiro, P. R.; Iliev, I. T.

    2014-04-01

    We present a new algorithm for the 3D transport of ionizing radiation, called F2-Ray (Fast Fourier Ray-tracing method). The transfer of ionizing radiation with long mean free path in diffuse intergalactic gas poses a special challenge to standard numerical methods which transport the radiation in position space. Standard methods usually trace each individual ray until it is fully absorbed by the intervening gas. If the mean free path is long, the computational cost and memory load are likely to be prohibitive. We have developed an algorithm that overcomes these limitations and is, therefore, significantly more efficient. The method calculates the transfer of radiation collectively, using the Fast Fourier Transform to convert radiation between position and Fourier spaces, so the computational cost will not increase with the number of ionizing sources. The method also automatically combines parallel rays with the same frequency at the same grid cell, thereby minimizing the memory requirement. The method is explicitly photon-conserving, i.e. the depletion of ionizing photons is guaranteed to equal the photoionizations they caused, and explicitly obeys the periodic boundary condition, i.e. the escape of ionizing photons from one side of a simulation volume is guaranteed to be compensated by emitting the same amount of photons into the volume through the opposite side. Together, these features make it possible to numerically simulate the transfer of ionizing photons more efficiently than previous methods. Since ionizing radiation such as the X-ray is responsible for heating the intergalactic gas when first stars and quasars form at high redshifts, our method can be applied to simulate thermal distribution, in addition to cosmic reionization, in three-dimensional inhomogeneous cosmological density field.
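The core idea of collective, Fourier-space transfer with built-in periodic boundaries can be illustrated with a toy convolution. This is only a schematic analogue, not the actual F2-Ray algorithm, and the attenuation kernel is an arbitrary placeholder:

```python
import numpy as np

def collective_field(emissivity, kernel):
    # Convolve every source with the same attenuation kernel in one pass.
    # FFT-based convolution is cyclic, so the periodic boundary condition
    # (photons leaving one face re-enter through the opposite face) is
    # automatic, and the cost does not grow with the number of sources.
    return np.real(np.fft.ifftn(np.fft.fftn(emissivity) * np.fft.fftn(kernel)))
```

Because the operation is linear and translation-equivariant, the field of a point source is just the kernel shifted to the source position, and the field of many sources is obtained in the same single FFT pass.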

  2. Feasibility study for a numerical aerodynamic simulation facility. Volume 2: Hardware specifications/descriptions

    NASA Technical Reports Server (NTRS)

    Green, F. M.; Resnick, D. R.

    1979-01-01

    An FMP (Flow Model Processor) was designed for use in the Numerical Aerodynamic Simulation Facility (NASF). The NASF was developed to simulate fluid flow over three-dimensional bodies in wind tunnel environments and in free space. The facility is applicable to studying aerodynamic and aircraft body designs. The following general topics are discussed in this volume: (1) FMP functional computer specifications; (2) FMP instruction specification; (3) standard product system components; (4) loosely coupled network (LCN) specifications/description; and (5) three appendices: performance of trunk allocation contention elimination (trace) method, LCN channel protocol and proposed LCN unified second level protocol.

  3. Curvilinear grids for WENO methods in astrophysical simulations

    NASA Astrophysics Data System (ADS)

    Grimm-Strele, H.; Kupka, F.; Muthsam, H. J.

    2014-03-01

    We investigate the applicability of curvilinear grids in the context of astrophysical simulations and WENO schemes. With the non-smooth mapping functions from Calhoun et al. (2008), we can tackle many astrophysical problems which were out of scope with the standard grids in numerical astrophysics. We describe the difficulties occurring when implementing curvilinear coordinates into our WENO code, and how we overcome them. We illustrate the theoretical results with numerical data. The WENO finite difference scheme works only for high Mach number flows and smooth mapping functions, whereas the finite volume scheme gives accurate results even for low Mach number flows and on non-smooth grids.
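As an illustration of what a non-smooth mapping function looks like, the sketch below maps the square [-1, 1]^2 onto the unit disk. This is a simple radial mapping in the spirit of, but not identical to, the Calhoun et al. (2008) functions; it is continuous but non-differentiable along the diagonals:

```python
import numpy as np

def square_to_disk(X, Y):
    # Rescale each point so that the square's boundary lands on the unit
    # circle: the mapped radius equals max(|X|, |Y|). Non-smooth where
    # |X| = |Y|, which is exactly the kind of grid the WENO finite volume
    # scheme must tolerate.
    m = np.maximum(np.abs(X), np.abs(Y))
    r_safe = np.where(np.hypot(X, Y) > 0, np.hypot(X, Y), 1.0)
    s = np.where(np.hypot(X, Y) > 0, m / r_safe, 0.0)
    return s * X, s * Y
```

Each square "ring" max(|X|, |Y|) = c maps onto the circle of radius c, so grid lines of the square become concentric circles and radial spokes on the disk.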

  4. Application of micropolar plasticity to post failure analysis in geomechanics

    NASA Astrophysics Data System (ADS)

    Manzari, Majid T.

    2004-08-01

A micropolar elastoplastic model for soils is formulated, and a series of finite element analyses is employed to demonstrate the use of a micropolar continuum in overcoming the numerical difficulties encountered in the application of the finite element method to a standard Cauchy-Boltzmann continuum. Three examples of failure analysis, involving a deep excavation, a shallow foundation, and a retaining wall, are presented. In all these cases, it is observed that the length scale introduced by the polar continuum regularizes the incremental boundary value problem and allows the numerical simulation to be continued until a clear collapse mechanism is achieved. The issue of grain size effect is also discussed.

  5. An accurate and efficient acoustic eigensolver based on a fast multipole BEM and a contour integral method

    NASA Astrophysics Data System (ADS)

    Zheng, Chang-Jun; Gao, Hai-Feng; Du, Lei; Chen, Hai-Bo; Zhang, Chuanzeng

    2016-01-01

An accurate numerical solver is developed in this paper for eigenproblems governed by the Helmholtz equation and formulated through the boundary element method. A contour integral method is used to convert the nonlinear eigenproblem into an ordinary eigenproblem, so that eigenvalues can be extracted accurately by solving a set of standard boundary element systems of equations. In order to accelerate the solution procedure, the parameters affecting the accuracy and efficiency of the method are studied and two contour paths are compared. Moreover, a wideband fast multipole method is implemented with a block IDR(s) solver to reduce the overall solution cost of the boundary element systems of equations with multiple right-hand sides. The Burton-Miller formulation is employed to identify the fictitious eigenfrequencies of the interior acoustic problems with multiply connected domains. The actual effect of the Burton-Miller formulation on tackling the fictitious eigenfrequency problem is investigated and the optimal choice of the coupling parameter as α = i / k is confirmed through exterior sphere examples. Furthermore, the numerical eigenvalues obtained by the developed method are compared with the results obtained by the finite element method to show the accuracy and efficiency of the developed method.
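The conversion of a nonlinear eigenproblem into an ordinary one via contour integration follows the general pattern of contour-integral eigensolvers such as Beyn's method. A minimal sketch for the linear case T(z) = zI - A, extracting only the eigenvalues enclosed by a circular contour, is given below; this is an illustrative reimplementation of the generic technique, not the paper's BEM solver:

```python
import numpy as np

def contour_eigs(A, center, radius, n_probe=4, n_quad=64, tol=1e-8):
    """Eigenvalues of A inside the circle |z - center| = radius, computed
    from two contour moments of the resolvent (Beyn-style method)."""
    rng = np.random.default_rng(1)
    n = A.shape[0]
    V = rng.standard_normal((n, n_probe))          # random probe block
    A0 = np.zeros((n, n_probe), dtype=complex)     # 0th moment
    A1 = np.zeros((n, n_probe), dtype=complex)     # 1st moment
    for k in range(n_quad):                        # trapezoid rule on circle
        phi = 2.0 * np.pi * k / n_quad
        z = center + radius * np.exp(1j * phi)
        dz = 1j * radius * np.exp(1j * phi) * (2.0 * np.pi / n_quad)
        R = np.linalg.solve(z * np.eye(n) - A, V)  # resolvent applied to V
        A0 += R * dz / (2j * np.pi)
        A1 += z * R * dz / (2j * np.pi)
    U, s, Wh = np.linalg.svd(A0, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))                # rank = #eigs inside
    U, s, Wh = U[:, :r], s[:r], Wh[:r]
    B = U.conj().T @ A1 @ Wh.conj().T / s          # small r-by-r problem
    return np.linalg.eigvals(B)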

  6. A new implementation of the CMRH method for solving dense linear systems

    NASA Astrophysics Data System (ADS)

    Heyouni, M.; Sadok, H.

    2008-04-01

    The CMRH method [H. Sadok, Méthodes de projections pour les systèmes linéaires et non linéaires, Habilitation thesis, University of Lille 1, Lille, France, 1994; H. Sadok, CMRH: A new method for solving nonsymmetric linear systems based on the Hessenberg reduction algorithm, Numer. Algorithms 20 (1999) 303-321] is an algorithm for solving nonsymmetric linear systems in which the Arnoldi component of GMRES is replaced by the Hessenberg process, which generates Krylov basis vectors that are orthogonal to standard unit basis vectors rather than mutually orthogonal. The iterate is formed from these vectors by solving a small least-squares problem involving a Hessenberg matrix. Like GMRES, this method requires one matrix-vector product per iteration. However, it can be implemented to require half as much arithmetic work and less storage. Moreover, numerical experiments show that this method performs accurately and reduces the residual about as fast as GMRES. With this new implementation, we show that the CMRH method is the only method with a long-term recurrence that does not require storing both the entire Krylov basis and the original matrix at the same time, as the GMRES algorithm does. A comparison with Gaussian elimination is provided.
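
    A minimal sketch may make the contrast with Arnoldi concrete. This toy version of the Hessenberg process (assumed simplifications: no pivoting, dense NumPy arithmetic, an illustrative diagonally dominant matrix) builds a unit lower trapezoidal basis L and a Hessenberg matrix H with A L_m = L_{m+1} H̄_m, then forms the iterate from the small least-squares problem the abstract describes:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 12, 8
A = 20.0 * np.eye(n) + rng.standard_normal((n, n))  # toy diagonally dominant matrix
b = rng.standard_normal(n)

beta = b[0]                       # no pivoting: assumes b[0] != 0
L = np.zeros((n, m + 1))          # unit lower trapezoidal Krylov basis
H = np.zeros((m + 1, m))          # upper Hessenberg matrix Hbar_m
L[:, 0] = b / beta
for k in range(m):
    u = A @ L[:, k]
    for j in range(k + 1):
        H[j, k] = u[j]            # l_j has a unit entry at j and zeros above it,
        u -= H[j, k] * L[:, j]    # so this zeroes component j of u exactly
    H[k + 1, k] = u[k + 1]
    L[:, k + 1] = u / H[k + 1, k]

# iterate from the small Hessenberg least-squares problem, as in GMRES
e1 = np.zeros(m + 1)
e1[0] = beta
y = np.linalg.lstsq(H, e1, rcond=None)[0]
x = L[:, :m] @ y
```

    Unlike Arnoldi, each step eliminates components against unit vectors (one triangular-solve-like sweep) instead of computing inner products against all previous basis vectors, which is the source of the roughly halved arithmetic cost.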

  7. New method of processing heat treatment experiments with numerical simulation support

    NASA Astrophysics Data System (ADS)

    Kik, T.; Moravec, J.; Novakova, I.

    2017-08-01

    In this work, the benefits of combining modern software for numerical simulation of welding processes with laboratory research are described. A new method of processing heat-treatment experiments is proposed, which yields relevant input data for numerical simulations of the heat treatment of large parts. Using experiments on small test samples, it is now possible to simulate cooling conditions comparable to the cooling of larger parts. Results from this method make the boundary conditions used for the real cooling process more accurate, and can also be used to improve software databases and to optimize computational models. The aim is to refine the computation of temperature fields for large-scale hardened parts, based on a new method for determining the temperature dependence of the heat transfer coefficient into the hardening medium for a particular material, a defined maximal thickness of the processed part, and given cooling conditions. The paper also presents a comparison of standard and modified (according to the newly suggested methodology) heat transfer coefficient data and their influence on the simulation results, showing how even small changes affect mainly the distributions of temperature, metallurgical phases, hardness, and stresses. The experiment also yields not only input data and data enabling optimization of the computational model, but at the same time verification data. The greatest advantage of the described method is its independence of the type of cooling medium used.

  8. Standardization in laboratory medicine: Adoption of common reference intervals to the Croatian population.

    PubMed

    Flegar-Meštrić, Zlata; Perkov, Sonja; Radeljak, Andrea

    2016-03-26

    Because the results of laboratory tests provide useful information about the state of health of patients, the determination of reference values is an intrinsic part of the development of laboratory medicine. There are still large differences in the analytical methods used, as well as in the associated reference intervals, which can significantly affect the proper assessment of patient health. In a constant effort to improve the quality of patient care, there are numerous international initiatives for the standardization and/or harmonization of laboratory diagnostics, aiming to achieve maximum comparability of laboratory test results and to improve patient safety. Through the standardization and harmonization of analytical methods, common reference intervals can be created. Such reference intervals could be applied globally in all laboratories using methods traceable to the same reference measuring system and analysing biological samples from populations with similar socio-demographic and ethnic characteristics. In this review we outline the results of the harmonization processes in Croatia in the field of population-based reference intervals for clinically relevant blood and serum constituents, which are in accordance with the ongoing activity for worldwide standardization and harmonization based on traceability in laboratory medicine.

  9. Standardization in laboratory medicine: Adoption of common reference intervals to the Croatian population

    PubMed Central

    Flegar-Meštrić, Zlata; Perkov, Sonja; Radeljak, Andrea

    2016-01-01

    Because the results of laboratory tests provide useful information about the state of health of patients, the determination of reference values is an intrinsic part of the development of laboratory medicine. There are still large differences in the analytical methods used, as well as in the associated reference intervals, which can significantly affect the proper assessment of patient health. In a constant effort to improve the quality of patient care, there are numerous international initiatives for the standardization and/or harmonization of laboratory diagnostics, aiming to achieve maximum comparability of laboratory test results and to improve patient safety. Through the standardization and harmonization of analytical methods, common reference intervals can be created. Such reference intervals could be applied globally in all laboratories using methods traceable to the same reference measuring system and analysing biological samples from populations with similar socio-demographic and ethnic characteristics. In this review we outline the results of the harmonization processes in Croatia in the field of population-based reference intervals for clinically relevant blood and serum constituents, which are in accordance with the ongoing activity for worldwide standardization and harmonization based on traceability in laboratory medicine. PMID:27019800

  10. A new exact method for line radiative transfer

    NASA Astrophysics Data System (ADS)

    Elitzur, Moshe; Asensio Ramos, Andrés

    2006-01-01

    We present a new method, the coupled escape probability (CEP), for exact calculation of line emission from multi-level systems, solving only algebraic equations for the level populations. The CEP formulation of the classical two-level problem is a set of linear equations, and we uncover an exact analytic expression for the emission from two-level optically thick sources that holds as long as they are in the `effectively thin' regime. In a comparative study of a number of standard problems, the CEP method outperformed the leading line transfer methods by substantial margins. The algebraic equations employed by our new method are already incorporated in numerous codes based on the escape probability approximation. All that is required for an exact solution with these existing codes is to augment the expression for the escape probability with simple zone-coupling terms. As an application, we find that standard escape probability calculations generally produce the correct cooling emission by the CII 158-μm line but not by the 3P lines of OI.
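
    Since CEP reduces, in a single zone, to the classical escape-probability statistical equilibrium, a small hedged sketch of that two-level building block may help (parameter values are illustrative only, and the zone-coupling terms the paper adds are omitted):

```python
import numpy as np

# Two-level statistical equilibrium with escape probability beta(tau):
#   n_u * (A * beta(tau) + C_ul) = n_l * C_lu,
#   C_lu = C_ul * (g_u / g_l) * exp(-E / kT)

def beta(tau):
    """Escape probability (1 - exp(-tau)) / tau, safe at tau = 0."""
    t = np.maximum(np.asarray(tau, dtype=float), 1e-30)
    return -np.expm1(-t) / t

def excitation_ratio(tau, C_ul, A=1.0, g_ratio=3.0, boltz=0.5):
    """n_u / n_l for a two-level atom; boltz stands in for exp(-E/kT)."""
    C_lu = C_ul * g_ratio * boltz
    return C_lu / (A * beta(tau) + C_ul)
```

    The two familiar limits fall out directly: at high density the ratio approaches the Boltzmann value (g_u/g_l)·exp(-E/kT), while in the optically thin, subthermal limit it reduces to C_lu/A. The CEP method keeps exactly this algebraic structure and merely augments beta with zone-coupling terms.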

  11. Applications of Automation Methods for Nonlinear Fracture Test Analysis

    NASA Technical Reports Server (NTRS)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    Using automated and standardized computer tools to calculate the pertinent test result values has several advantages, such as: 1. allowing high-fidelity solutions to complex nonlinear phenomena that would be impractical to express in written equation form, 2. eliminating errors associated with the interpretation and programming of analysis procedures from the text of test standards, 3. lessening the need for expertise in the areas of solid mechanics, fracture mechanics, numerical methods, and/or finite element modeling to achieve sound results, and 4. providing one computer tool and/or one set of solutions for all users for a more "standardized" answer. In summary, this approach allows a non-expert with rudimentary training to get the best practical solution based on the latest understanding with minimum difficulty. Other existing ASTM standards that cover complicated phenomena use standard computer programs: 1. ASTM C1340/C1340M-10 - Standard Practice for Estimation of Heat Gain or Loss Through Ceilings Under Attics Containing Radiant Barriers by Use of a Computer Program, 2. ASTM F2815 - Standard Practice for Chemical Permeation through Protective Clothing Materials: Testing Data Analysis by Use of a Computer Program, and 3. ASTM E2807 - Standard Specification for 3D Imaging Data Exchange, Version 1.0. The verification, validation, and round-robin processes required of a computer tool closely parallel the methods that are used to ensure the solution validity for equations included in test standards. The use of automated analysis tools allows the creation and practical implementation of advanced fracture mechanics test standards that capture the physics of a nonlinear fracture mechanics problem without adding undue burden or expense to the user. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.

  12. Valx: A system for extracting and structuring numeric lab test comparison statements from text

    PubMed Central

    Hao, Tianyong; Liu, Hongfang; Weng, Chunhua

    2017-01-01

    Objectives To develop an automated method for extracting and structuring numeric lab test comparison statements from text and evaluate the method using clinical trial eligibility criteria text. Methods Leveraging semantic knowledge from the Unified Medical Language System (UMLS) and domain knowledge acquired from the Internet, Valx takes 7 steps to extract and normalize numeric lab test expressions: 1) text preprocessing, 2) numeric, unit, and comparison operator extraction, 3) variable identification using hybrid knowledge, 4) variable - numeric association, 5) context-based association filtering, 6) measurement unit normalization, and 7) heuristic rule-based comparison statements verification. Our reference standard was the consensus-based annotation among three raters for all comparison statements for two variables, i.e., HbA1c and glucose, identified from all of Type 1 and Type 2 diabetes trials in ClinicalTrials.gov. Results The precision, recall, and F-measure for structuring HbA1c comparison statements were 99.6%, 98.1%, 98.8% for Type 1 diabetes trials, and 98.8%, 96.9%, 97.8% for Type 2 Diabetes trials, respectively. The precision, recall, and F-measure for structuring glucose comparison statements were 97.3%, 94.8%, 96.1% for Type 1 diabetes trials, and 92.3%, 92.3%, 92.3% for Type 2 diabetes trials, respectively. Conclusions Valx is effective at extracting and structuring free-text lab test comparison statements in clinical trial summaries. Future studies are warranted to test its generalizability beyond eligibility criteria text. The open-source Valx enables its further evaluation and continued improvement among the collaborative scientific community. PMID:26940748
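
    Steps 2-4 of the pipeline above can be illustrated with a hedged regular-expression sketch (the variable synonyms and the pattern below are illustrative stand-ins, not Valx's actual UMLS-derived knowledge base or rules):

```python
import re

# Toy extractor for (variable, operator, value, unit) from criteria text.
SYNONYMS = {"hba1c": "HbA1c", "glycated hemoglobin": "HbA1c", "glucose": "glucose"}
PATTERN = re.compile(
    r"(?P<var>hba1c|glycated hemoglobin|glucose)\s*"   # step 3: variable lexicon
    r"(?P<op><=|>=|<|>|=)\s*"                          # step 2: comparison operator
    r"(?P<num>\d+(?:\.\d+)?)\s*"                       # step 2: numeric value
    r"(?P<unit>%|mmol/L|mg/dL)?",                      # step 2: optional unit
    re.IGNORECASE,
)

def extract(text):
    out = []
    for m in PATTERN.finditer(text):
        out.append((SYNONYMS[m.group("var").lower()],  # normalize variable name
                    m.group("op"),
                    float(m.group("num")),             # step 4: variable-numeric pairing
                    m.group("unit") or ""))
    return out

print(extract("Inclusion: HbA1c >= 7.0% and glucose < 11.1 mmol/L"))
```

    The real system additionally filters candidate associations by context and normalizes measurement units before verifying the structured statements.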

  13. Performance issues for iterative solvers in device simulation

    NASA Technical Reports Server (NTRS)

    Fan, Qing; Forsyth, P. A.; Mcmacken, J. R. F.; Tang, Wei-Pai

    1994-01-01

    Due to memory limitations, iterative methods have become the method of choice for large scale semiconductor device simulation. However, it is well known that these methods still suffer from reliability problems. The linear systems which appear in numerical simulation of semiconductor devices are notoriously ill-conditioned. In order to produce robust algorithms for practical problems, careful attention must be given to many implementation issues. This paper concentrates on strategies for developing robust preconditioners. In addition, effective data structures and convergence check issues are also discussed. These algorithms are compared with a standard direct sparse matrix solver on a variety of problems.
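
    The preconditioning strategy the abstract emphasizes can be sketched with SciPy (a hedged illustration: an assumed 1D convection-diffusion stencil stands in for the ill-conditioned device equations, and `spilu` supplies the incomplete-LU preconditioner):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# nonsymmetric tridiagonal convection-diffusion stencil as a toy system matrix
A = sp.diags([-1.3 * np.ones(n - 1), 2.0 * np.ones(n), -0.7 * np.ones(n - 1)],
             [-1, 0, 1], format="csc")
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-5, fill_factor=10)   # incomplete LU factors
M = spla.LinearOperator((n, n), matvec=ilu.solve)    # preconditioner M ~ A^{-1}
x, info = spla.gmres(A, b, M=M, maxiter=500)         # preconditioned iteration
res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

    For a genuinely ill-conditioned device matrix the quality of the ILU factorization (drop tolerance, fill, ordering) typically decides whether the iteration converges at all, which is the robustness issue the paper addresses.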

  14. Numerical method for high accuracy index of refraction estimation for spectro-angular surface plasmon resonance systems.

    PubMed

    Alleyne, Colin J; Kirk, Andrew G; Chien, Wei-Yin; Charette, Paul G

    2008-11-24

    An eigenvector analysis based algorithm is presented for estimating refractive index changes from 2-D reflectance/dispersion images obtained with spectro-angular surface plasmon resonance systems. High resolution over a large dynamic range can be achieved simultaneously. The method performs well in simulations with noisy data, maintaining an error of less than 10^-8 refractive index units with up to six bits of noise on 16-bit quantized image data. Experimental measurements show that the method results in a much higher signal-to-noise ratio than the standard 1-D weighted-centroid dip-finding algorithm.

  15. A new multigroup method for cross-sections that vary rapidly in energy

    DOE PAGES

    Haut, Terry Scot; Ahrens, Cory D.; Jonko, Alexandra; ...

    2016-11-04

    Here, we present a numerical method for solving the time-independent thermal radiative transfer (TRT) equation or the neutron transport (NT) equation when the opacity (cross-section) varies rapidly in frequency (energy) on the microscale ε; ε corresponds to the characteristic spacing between absorption lines or resonances, and is much smaller than the macroscopic frequency (energy) variation of interest. The approach is based on a rigorous homogenization of the TRT/NT equation in the frequency (energy) variable. Discretization of the homogenized TRT/NT equation results in a multigroup-type system, and can therefore be solved by standard methods.
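
    As a hedged toy illustration of the multigroup idea (a standard flux-weighted group collapse, not the paper's rigorous homogenization), a cross-section oscillating on the microscale ε can be reduced to a single group constant:

```python
import numpy as np

# sigma(E) varies on the microscale eps inside one energy group [1, 2];
# collapsing with a smooth weighting flux phi(E) yields the group constant.
E = np.linspace(1.0, 2.0, 20001)
eps = 1e-2                                    # microscale of the oscillation
sigma = 1.0 + 0.5 * np.sin(2.0 * np.pi * E / eps)
phi = 1.0 / E                                 # assumed smooth weighting flux
sigma_g = np.sum(sigma * phi) / np.sum(phi)   # flux-weighted group constant
```

    The rapid oscillation averages out, leaving a group constant near the smooth mean of sigma; the homogenization in the paper makes this kind of reduction rigorous when the flux itself varies on the microscale.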

  16. Comparitive Study of High-Order Positivity-Preserving WENO Schemes

    NASA Technical Reports Server (NTRS)

    Kotov, D. V.; Yee, H. C.; Sjogreen, B.

    2014-01-01

    In gas dynamics and magnetohydrodynamics flows, physically, the density ρ and the pressure p should both be positive. In a standard conservative numerical scheme, however, the computed internal energy is obtained by subtracting the kinetic energy from the total energy, resulting in a computed p that may be negative. Examples are problems in which the dominant energy is kinetic. Negative ρ may often emerge in computing blast waves. In such situations the computed eigenvalues of the Jacobian become imaginary, and consequently the initial value problem for the linearized system becomes ill posed. This explains why failure to preserve positivity of density or pressure may cause blow-ups of the numerical algorithm. Ad hoc numerical strategies that simply reset a computed negative density and/or negative pressure to a positive value are neither a conservative cure nor a stable solution; conservative positivity-preserving schemes are more appropriate for such flow problems. The ideas of Zhang & Shu (2012) and Hu et al. (2012) precisely address this issue. Zhang & Shu constructed a conservative positivity-preserving procedure that preserves positive density and pressure for high-order Weighted Essentially Non-Oscillatory (WENO) schemes using the Lax-Friedrichs flux (WENO/LLF), which, however, can be too dissipative for flows such as turbulence with strong shocks computed in direct numerical simulations (DNS) and large eddy simulations (LES). The conservative positivity-preserving procedure proposed in Hu et al. (2012) can be used with any high-order shock-capturing scheme, including high-order WENO schemes using Roe's flux (WENO/Roe). The goal of this study is to compare results obtained by non-positivity-preserving methods with the recently developed positivity-preserving schemes for representative test cases; in particular, the more difficult 3D Noh and Sedov problems are considered. These test cases are chosen because of the negative pressure/density most often exhibited by standard high-order shock-capturing schemes. The simulation of a hypersonic nonequilibrium viscous shock tube related to the NASA Electric Arc Shock Tube (EAST) is also included. EAST is a high-temperature, high-Mach-number viscous nonequilibrium flow consisting of 13 species. In addition, as most common shock-capturing schemes have been developed for problems without source terms, these methods can produce spurious solutions when applied to problems with nonlinear and/or stiff source terms, even when solving a conservative system of equations with a conservative scheme. This behavior can be observed even in a scalar case, as well as in a case consisting of two species and one reaction. The EAST example indicates that standard high-order shock-capturing methods exhibit instability of density/pressure, in addition to grid-dependent discontinuity locations when too few grid points are used. The evaluation of these test cases is based on the stability of the numerical schemes together with the accuracy of the obtained solutions.

  17. Self-similar solutions to isothermal shock problems

    NASA Astrophysics Data System (ADS)

    Deschner, Stephan C.; Illenseer, Tobias F.; Duschl, Wolfgang J.

    We investigate exact solutions of isothermal shock problems in different one-dimensional geometries. These solutions are given as analytical expressions where possible, or are computed using standard numerical methods for solving ordinary differential equations. We test the numerical solutions against the analytical expressions to verify the correctness of all numerical algorithms. We use similarity methods to derive a system of ordinary differential equations (ODEs) yielding exact solutions for power-law density distributions as initial conditions. The system of ODEs accounts for implosion problems (IPs) as well as explosion problems (EPs) by changing the initial or boundary conditions, respectively. Taking genuinely isothermal approximations into account leads to additional insights into EPs compared with earlier models. We neglect a constant initial energy contribution but introduce a parameter to adjust the initial mass distribution of the system. Moreover, we show that, due to this parameter, a constant initial density is not allowed for isothermal EPs, and reasonable restrictions for this parameter are given. Both the genuinely isothermal implosion problem and the explosion problem are solved for the first time.

  18. Virtual photons in imaginary time: Computing exact Casimir forces via standard numerical electromagnetism techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodriguez, Alejandro; Ibanescu, Mihai; Joannopoulos, J. D.

    2007-09-15

    We describe a numerical method to compute Casimir forces in arbitrary geometries, for arbitrary dielectric and metallic materials, with arbitrary accuracy (given sufficient computational resources). Our approach, based on well-established integration of the mean stress tensor evaluated via the fluctuation-dissipation theorem, is designed to directly exploit fast methods developed for classical computational electromagnetism, since it only involves repeated evaluation of the Green's function for imaginary frequencies (equivalently, real frequencies in imaginary time). We develop the approach by systematically examining various formulations of Casimir forces from the previous decades and evaluating them according to their suitability for numerical computation. We illustrate our approach with a simple finite-difference frequency-domain implementation, test it for known geometries such as a cylinder and a plate, and apply it to new geometries. In particular, we show that a pistonlike geometry of two squares sliding between metal walls, in both two and three dimensions with both perfect and realistic metallic materials, exhibits a surprising nonmonotonic "lateral" force from the walls.

  19. An efficient computational method for the approximate solution of nonlinear Lane-Emden type equations arising in astrophysics

    NASA Astrophysics Data System (ADS)

    Singh, Harendra

    2018-04-01

    The key purpose of this article is to introduce an efficient computational method for the approximate solution of homogeneous as well as non-homogeneous nonlinear Lane-Emden type equations. Using the proposed method, the given nonlinear equation is converted into a set of nonlinear algebraic equations whose solution yields the approximate solution of the Lane-Emden type equation. Various nonlinear cases of Lane-Emden type equations, such as the standard Lane-Emden equation, the isothermal gas sphere equation, and the white-dwarf equation, are discussed. Results are compared with some well-known numerical methods, and it is observed that our results are more accurate.
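
    For reference, the standard Lane-Emden equation θ'' + (2/ξ)θ' + θ^n = 0 with θ(0) = 1, θ'(0) = 0 can be integrated by a plain RK4 marcher, the kind of conventional numerical baseline such comparisons use (a hedged sketch, starting from the series expansion to sidestep the singularity at ξ = 0):

```python
import numpy as np

def lane_emden(n, xi_max=4.0, h=5e-4):
    """Return the first zero of theta for the Lane-Emden equation of index n."""
    xi = 1e-6                        # start just off the singular origin
    y = np.array([1.0 - xi**2 / 6.0, -xi / 3.0])   # series: theta, theta'

    def rhs(xi, y):
        th, dth = y
        return np.array([dth, -2.0 / xi * dth - np.sign(th) * abs(th)**n])

    while xi < xi_max and y[0] > 0:  # classical RK4 step
        k1 = rhs(xi, y)
        k2 = rhs(xi + h / 2, y + h / 2 * k1)
        k3 = rhs(xi + h / 2, y + h / 2 * k2)
        k4 = rhs(xi + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        xi += h
    return xi                        # first zero of theta (surface of the polytrope)

print(lane_emden(1.0))   # analytic solution sin(xi)/xi vanishes at pi
```

    The analytic cases n = 0 (zero at √6) and n = 1 (zero at π) provide the usual accuracy checks.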

  20. Algorithm-enabled partial-angular-scan configurations for dual-energy CT.

    PubMed

    Chen, Buxin; Zhang, Zheng; Xia, Dan; Sidky, Emil Y; Pan, Xiaochuan

    2018-05-01

    We seek to investigate an optimization-based one-step method for image reconstruction that explicitly compensates for nonlinear spectral response (i.e., the beam-hardening effect) in dual-energy CT; to investigate the feasibility of the one-step method for enabling two dual-energy partial-angular-scan configurations, referred to as the short- and half-scan configurations, on standard CT scanners without involving additional hardware; and to investigate the potential of the short- and half-scan configurations in reducing imaging dose and scan time relative to a single-kVp-switch full-scan configuration in which two full rotations are made for collection of dual-energy data. We use the one-step method to reconstruct images directly from dual-energy data by solving a nonconvex optimization program that specifies the images to be reconstructed in dual-energy CT. Dual-energy full-scan data are generated from numerical phantoms and collected from physical phantoms with the standard single-kVp-switch full-scan configuration, whereas dual-energy short- and half-scan data are extracted from the corresponding full-scan data. Besides visual inspection and profile-plot comparison, the reconstructed images are also analyzed in quantitative studies based upon tasks of linear-attenuation-coefficient and material-concentration estimation and of material differentiation. After a computer-simulation study verifying that the one-step method can reconstruct numerically accurate basis and monochromatic images of numerical phantoms, we reconstruct basis and monochromatic images by using the one-step method from real data of physical phantoms collected with the full-, short-, and half-scan configurations.
Subjective inspection based upon visualization and profile-plot comparison reveals that monochromatic images, which are often used in practical applications, reconstructed from the full-, short-, and half-scan data are largely visually comparable except for some differences in texture details. Moreover, quantitative studies based upon tasks of linear-attenuation-coefficient and material-concentration estimation and of material differentiation indicate that the short- and half-scan configurations yield results in close agreement with the ground-truth information and with those of the full-scan configuration. The one-step method considered can compensate effectively for the nonlinear spectral response in full- and partial-angular-scan dual-energy CT. It can be exploited for enabling partial-angular-scan configurations on standard CT scanners without involving additional hardware. Visual inspection and quantitative studies reveal that, with the one-step method, the partial-angular-scan configurations considered can perform at a level comparable to that of the full-scan configuration, thus suggesting the potential of the two partial-angular-scan configurations in reducing imaging dose and scan time in standard single-kVp-switch full-scan CT in which two full rotations are performed. The work also yields insights into the investigation and design of other nonstandard scan configurations of potential practical significance in dual-energy CT. © 2018 American Association of Physicists in Medicine.

  1. Numerical method for solution of systems of non-stationary spatially one-dimensional nonlinear differential equations

    NASA Technical Reports Server (NTRS)

    Morozov, S. K.; Krasitskiy, O. P.

    1978-01-01

    A computational scheme and a standard program are proposed for solving systems of nonstationary, spatially one-dimensional, nonlinear differential equations using Newton's method. The proposed scheme is universal in its applicability and reduces the work of programming to a minimum. The program is written in FORTRAN and can be used without change on electronic computers of the YeS and BESM-6 types. The standard program described permits the identification of nonstationary (or stationary) solutions of systems of spatially one-dimensional nonlinear (or linear) partial differential equations. The proposed method may be used to solve a series of geophysical problems that take chemical reactions, diffusion, and heat conduction into account, to evaluate nonstationary thermal fields in two-dimensional structures when one of the geometrical directions can be represented by a small number of discrete levels, and to solve problems in nonstationary gas dynamics.
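
    Newton's method for the nonlinear algebraic systems that arise after discretization can be sketched as follows (a hedged toy with a finite-difference Jacobian and an illustrative 2×2 system, not the FORTRAN program described above):

```python
import numpy as np

def newton(F, x0, tol=1e-12, max_iter=50, eps=1e-7):
    """Newton iteration for F(x) = 0 with a finite-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        J = np.empty((len(f), len(x)))
        for j in range(len(x)):          # column-by-column FD Jacobian
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (F(xp) - f) / eps
        x = x - np.linalg.solve(J, f)    # Newton update
    return x

# toy system: unit circle intersected with the line x = y
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
root = newton(F, [1.0, 0.0])
```

    In a production solver the Jacobian is usually assembled analytically and has banded structure for spatially one-dimensional discretizations, so the linear solve is a banded elimination rather than a dense one.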

  2. The Effects of Co-Teaching on Regular Education and Special Education Students' Standardized Communication Arts Test Scores in a Suburban Midwest Middle School

    ERIC Educational Resources Information Center

    O'Neal, Angeline N.

    2013-01-01

    In response to numerous mandates in the field of education, schools have found it imperative to ensure that teachers are incorporating effective instructional methods which meet the diverse needs of student populations within a single classroom. The co-teaching model of instruction is just one way educators have chosen to lead classroom…

  3. [Metacarpophalangeal and carpal numeric indices to calculate bone age and predict adult size].

    PubMed

    Ebrí Torné, B; Ebrí Verde, I

    2012-04-01

    This work presents new numerical methods, based on metacarpophalangeal and carpal indices, for calculating bone age. In addition, these new methods enable adult height to be predicted using multiple regression equations. The longitudinal case series studied included 160 healthy children from Zaragoza, of both genders, aged between 6 months and 20 years, studied annually with a radiological examination. For the statistical analysis the statistical package "Statistix", as well as the Excel program, was used. The new indices are closely correlated with chronological age, leading to predictive equations for calculating the bone age of children up to 20 years of age. Particular equations are also provided for children up to 4 years of age, in order to optimise the diagnosis at these early ages. The resulting bone ages can be applied to numerical standard deviation tables, as well as to an equivalence chart, which directly gives the ossification diagnosis. The predictive equations for adult height allow a reliable forecast of the future height of the studied child. These forecasts, analysed by Student's t-test, did not show significant differences with respect to the adult height that children of the case series finally achieved. The results can be obtained with a pocket calculator or through free software available to the reader. For the first time, using methods developed at our own centre rather than foreign ones, bone age standards and adult height predictive equations for the study of children are presented. We invite the practitioner to use these metacarpophalangeal and carpal methods in order to gain the experience necessary to apply them to healthy populations and to those with different disorders. Copyright © 2011 Asociación Española de Pediatría. Published by Elsevier España. All rights reserved.

  4. Advanced lattice Boltzmann scheme for high-Reynolds-number magneto-hydrodynamic flows

    NASA Astrophysics Data System (ADS)

    De Rosis, Alessandro; Lévêque, Emmanuel; Chahine, Robert

    2018-06-01

    Is the lattice Boltzmann method suitable for the numerical investigation of high-Reynolds-number magneto-hydrodynamic (MHD) flows? It is shown that a standard approach based on the Bhatnagar-Gross-Krook (BGK) collision operator rapidly yields unstable simulations as the Reynolds number increases. In order to circumvent this limitation, it is suggested here to carry out the collision procedure in the space of central moments for the fluid dynamics. Therefore, a hybrid lattice Boltzmann scheme is introduced, which couples a central-moment scheme for the velocity field with a BGK scheme for the space-and-time evolution of the magnetic field. This method outperforms the standard approach in terms of stability, allowing us to simulate high-Reynolds-number MHD flows with non-unitary Prandtl number while maintaining accuracy and physical consistency.

  5. A study of the viscous and nonadiabatic flow in radial turbines

    NASA Technical Reports Server (NTRS)

    Khalil, I.; Tabakoff, W.

    1981-01-01

    A method for analyzing the viscous nonadiabatic flow within turbomachine rotors is presented. The field analysis is based upon the numerical integration of the incompressible Navier-Stokes equations, together with the energy equation, over the rotor's blade-to-blade stream channels. The numerical code used to solve the governing equations employs a nonorthogonal, boundary-fitted coordinate system that suits the most complicated blade geometries. Effects of turbulence are modeled with two equations: one expressing the development of the turbulence kinetic energy and the other its dissipation rate. The method of analysis is applied to a radial inflow turbine. The solution obtained indicates the severity of the complex interaction mechanism that occurs between different flow regimes (i.e., boundary layers, recirculating eddies, separation zones, etc.). Comparison with nonviscous flow solutions strongly indicates the inadequacy of using the latter with standard boundary-layer techniques to obtain viscous flow details within turbomachine rotors. Capabilities and limitations of the present method of analysis are discussed.

  6. H∞ memory feedback control with input limitation minimization for offshore jacket platform stabilization

    NASA Astrophysics Data System (ADS)

    Yang, Jia Sheng

    2018-06-01

    In this paper, we investigate an H∞ memory controller with input limitation minimization (HMCIM) for offshore jacket platform stabilization. The main objective of this study is to reduce the control consumption and protect the actuator while satisfying the requirements on system performance. First, we introduce a dynamic model of an offshore platform with low-order main modes, based on a numerical mode-reduction method. Then, based on H∞ control theory and matrix inequality techniques, we develop a novel H∞ memory controller with input limitation. Furthermore, a non-convex optimization model to minimize input energy consumption is proposed. Since this non-convex model is difficult to solve with standard optimization algorithms, we use a relaxation method with matrix operations to transform it into a convex optimization model, which can then be solved by a standard convex optimization solver in MATLAB or CPLEX. Finally, several numerical examples are given to validate the proposed models and methods.

  7. Computing black hole partition functions from quasinormal modes

    DOE PAGES

    Arnold, Peter; Szepietowski, Phillip; Vaman, Diana

    2016-07-07

    We propose a method of computing one-loop determinants in black hole space-times (with emphasis on asymptotically anti-de Sitter black holes) that may be used for numerics when completely-analytic results are unattainable. The method utilizes the expression for one-loop determinants in terms of quasinormal frequencies determined by Denef, Hartnoll and Sachdev in [1]. A numerical evaluation must face the fact that the sum over the quasinormal modes, indexed by momentum and overtone numbers, is divergent. A necessary ingredient is then a regularization scheme to handle the divergent contributions of individual fixed-momentum sectors to the partition function. To this end, we formulate an effective two-dimensional problem in which a natural refinement of standard heat kernel techniques can be used to account for contributions to the partition function at fixed momentum. We test our method in a concrete case by reproducing the scalar one-loop determinant in the BTZ black hole background. Furthermore, we then discuss the application of such techniques to more complicated spacetimes.

  8. A solution to neural field equations by a recurrent neural network method

    NASA Astrophysics Data System (ADS)

    Alharbi, Abir

    2012-09-01

    Neural field equations (NFE) are used to model the activity of neurons in the brain; they are derived starting from the single-neuron 'integrate-and-fire' model. The neural continuum is spatially discretized for numerical studies, and the governing equations are modeled as a system of ordinary differential equations. In this article the recurrent neural network approach is used to solve this system of ODEs. It consists of a technique developed by combining the standard numerical method of finite differences with the Hopfield neural network. The architecture of the net, energy function, updating equations, and algorithms are developed for the NFE model. A Hopfield neural network is then designed to minimize the energy function modeling the NFE. Results obtained from the Hopfield-finite-differences net show excellent performance in terms of accuracy and speed. The parallel nature of the Hopfield approach may make it easier to implement on fast parallel computers and give it a speed advantage over traditional methods.

  9. GNuMe: A Galerkin-based Numerical Modeling Environment for modeling geophysical fluid dynamics applications ranging from the Atmosphere to the Ocean

    NASA Astrophysics Data System (ADS)

    Giraldo, Francis; Abdi, Daniel; Kopera, Michal

    2017-04-01

    We have built a Galerkin-based Numerical Modeling Environment (GNuMe) for nonhydrostatic atmospheric and ocean processes. GNuMe uses continuous and discontinuous Galerkin (CG/DG) discretizations as well as non-conforming adaptive mesh refinement (AMR), along with advanced time-integration methods that exploit both the CG/DG and AMR capabilities. GNuMe currently solves the compressible and incompressible Navier-Stokes equations and the shallow water equations (with wetting and drying), and work is underway to include other types of equations. Moreover, GNuMe can run in both 2D and 3D modes on accelerator hardware such as Nvidia GPUs and Intel KNL, as well as on standard x86 cores. In this talk, we shall present representative solutions obtained with GNuMe and discuss where we think such a modeling framework could fit within standard Earth System Models. For further information on GNuMe please visit: http://frankgiraldo.wixsite.com/mysite/gnume.

  10. An integrated approach to flood hazard assessment on alluvial fans using numerical modeling, field mapping, and remote sensing

    USGS Publications Warehouse

    Pelletier, J.D.; Mayer, L.; Pearthree, P.A.; House, P.K.; Demsey, K.A.; Klawon, J.K.; Vincent, K.R.

    2005-01-01

    Millions of people in the western United States live near the dynamic, distributary channel networks of alluvial fans where flood behavior is complex and poorly constrained. Here we test a new comprehensive approach to alluvial-fan flood hazard assessment that uses four complementary methods: two-dimensional raster-based hydraulic modeling, satellite-image change detection, field-based mapping of recent flood inundation, and surficial geologic mapping. Each of these methods provides spatial detail lacking in the standard method and each provides critical information for a comprehensive assessment. Our numerical model simultaneously solves the continuity equation and Manning's equation (Chow, 1959) using an implicit numerical method. It provides a robust numerical tool for predicting flood flows using the large, high-resolution Digital Elevation Models (DEMs) necessary to resolve the numerous small channels on the typical alluvial fan. Inundation extents and flow depths of historic floods can be reconstructed with the numerical model and validated against field- and satellite-based flood maps. A probabilistic flood hazard map can also be constructed by modeling multiple flood events with a range of specified discharges. This map can be used in conjunction with a surficial geologic map to further refine floodplain delineation on fans. To test the accuracy of the numerical model, we compared model predictions of flood inundation and flow depths against field- and satellite-based flood maps for two recent extreme events on the southern Tortolita and Harquahala piedmonts in Arizona. Model predictions match the field- and satellite-based maps closely. Probabilistic flood hazard maps based on the 10 yr, 100 yr, and maximum floods were also constructed for the study areas using stream gage records and paleoflood deposits. The resulting maps predict spatially complex flood hazards that strongly reflect small-scale topography and are consistent with surficial geology. 
In contrast, FEMA Flood Insurance Rate Maps (FIRMs) based on the FAN model predict uniformly high flood risk across the study areas without regard for small-scale topography and surficial geology. © 2005 Geological Society of America.
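The hydraulic core of the model couples the continuity equation with Manning's equation; the latter is simple to state in code. A minimal sketch (not the authors' implementation), using the SI form of Manning's equation:

```python
def manning_velocity(n, R, S):
    """Mean flow velocity v = (1/n) * R**(2/3) * sqrt(S) in SI units,
    with Manning roughness n, hydraulic radius R (m), and slope S."""
    return (1.0 / n) * R ** (2.0 / 3.0) * S ** 0.5

# Example: a rough natural channel, 1 m hydraulic radius, 0.1% slope.
v = manning_velocity(n=0.03, R=1.0, S=0.001)   # ~1.05 m/s
```

In the paper's 2-D raster setting this relation is applied per grid cell and coupled implicitly to continuity; the function above only shows the constitutive law.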

  11. Finite difference computation of Casimir forces

    NASA Astrophysics Data System (ADS)

    Pinto, Fabrizio

    2016-09-01

    In this Invited paper, we begin with a historical introduction to provide a motivation for the classical problems of interatomic force computation and associated challenges. This analysis will lead us from early theoretical and experimental accomplishments to the integration of these fascinating interactions into the operation of realistic, next-generation micro- and nanodevices, both for the advanced metrology of fundamental physical processes and in breakthrough industrial applications. Among several powerful strategies enabling vastly enhanced performance and entirely novel technological capabilities, we shall specifically consider Casimir force time-modulation and the adoption of non-trivial geometries. As to the former, the ability to alter the magnitude and sign of the Casimir force will be recognized as a crucial principle to implement thermodynamical nano-engines. As to the latter, we shall first briefly review various reported computational approaches. We shall then discuss the game-changing discovery, in the last decade, that standard methods of numerical classical electromagnetism can be retooled to formulate the problem of Casimir force computation in arbitrary geometries. This remarkable development will be practically illustrated by showing that such an apparently elementary method as standard finite-differencing can be successfully employed to numerically recover results known from the Lifshitz theory of dispersion forces in the case of interacting parallel-plane slabs. Other geometries will also be explored, and consideration will be given to the potential of non-standard finite-difference methods. Finally, we shall introduce problems at the computational frontier, such as those including membranes deformed by Casimir forces and the effects of anisotropic materials. 
Conclusions will highlight the dramatic transition from the enduring perception of this field as an exotic application of quantum electrodynamics to the recent demonstration of a human climbing vertically on smooth glass.

  12. Analytical Computation of Effective Grid Parameters for the Finite-Difference Seismic Waveform Modeling With the PREM, IASP91, SP6, and AK135

    NASA Astrophysics Data System (ADS)

    Toyokuni, G.; Takenaka, H.

    2007-12-01

    We propose a method to obtain effective grid parameters for the finite-difference (FD) method with standard Earth models using analytical means. In spite of the broad use of the heterogeneous FD formulation for seismic waveform modeling, accurate treatment of material discontinuities inside grid cells has been a serious problem for many years. One possible way to solve this problem is to introduce effective grid elastic moduli and densities (effective parameters), calculated by the volume harmonic averaging of elastic moduli and the volume arithmetic averaging of density in grid cells. This scheme enables us to put a material discontinuity at an arbitrary position in the spatial grid. Most of the methods used today for synthetic seismogram calculation benefit from standard Earth models, such as PREM, IASP91, SP6, and AK135, represented as functions of normalized radius. For FD computation of seismic waveforms with such models, we first need accurate treatment of the material discontinuities in radius. This study provides a numerical scheme for analytical calculation of the effective parameters on an arbitrary spatial grid in the radial direction for these four major standard Earth models, making the best use of their functional features. The scheme obtains the integral volume averages analytically through partial fraction decompositions (PFDs) and integral formulae. We have developed a FORTRAN subroutine to perform the computations, which can be used in a large variety of FD schemes ranging from 1-D to 3-D, with conventional and staggered grids. In the presentation, we show some numerical examples displaying the accuracy of the FD synthetics simulated with the analytical effective parameters.
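The averaging rule itself is compact: harmonic for the elastic moduli, arithmetic for the density, weighted by the volume fractions of the materials sharing a cell. A minimal sketch of that rule (the paper's analytical PFD machinery for the standard Earth models is not reproduced here):

```python
import numpy as np

def effective_parameters(moduli, densities, fractions):
    """Effective parameters for a grid cell crossed by a discontinuity:
    volume harmonic average of the elastic moduli and volume arithmetic
    average of the densities, weighted by the volume fractions."""
    f = np.asarray(fractions, dtype=float)
    mu = np.asarray(moduli, dtype=float)
    rho = np.asarray(densities, dtype=float)
    mu_eff = 1.0 / np.sum(f / mu)     # harmonic average
    rho_eff = np.sum(f * rho)         # arithmetic average
    return mu_eff, rho_eff

# A cell split 50/50 between two materials.
mu_eff, rho_eff = effective_parameters([2.0, 6.0], [1.0, 3.0], [0.5, 0.5])
# → mu_eff = 3.0 (harmonic), rho_eff = 2.0 (arithmetic)
```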

  13. A numerical method for solving the 3D unsteady incompressible Navier-Stokes equations in curvilinear domains with complex immersed boundaries

    NASA Astrophysics Data System (ADS)

    Ge, Liang; Sotiropoulos, Fotis

    2007-08-01

    A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g. the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [A. Gilmanov, F. Sotiropoulos, A hybrid cartesian/immersed boundary method for simulating flows with 3d, geometrically complex, moving bodies, Journal of Computational Physics 207 (2005) 457-492.]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved, pipe bend. 
To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus.

  14. Comparison of two methods to determine fan performance curves using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Onma, Patinya; Chantrasmi, Tonkid

    2018-01-01

    This work investigates a systematic numerical approach that employs Computational Fluid Dynamics (CFD) to obtain performance curves of a backward-curved centrifugal fan. Generating the performance curves requires a number of three-dimensional simulations with varying system loads at a fixed rotational speed. Two methods were used and their results compared to experimental data. The first method incrementally changes the mass flow rate through the inlet boundary condition, while the second method utilizes a series of meshes representing the physical damper blade at various angles. The performance curves generated by both methods are compared with an experimental setup conducted in accordance with the AMCA fan performance testing standard.

  15. Systems of Inhomogeneous Linear Equations

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Many problems in physics, and especially computational physics, involve systems of linear equations which arise, e.g., from linearization of a general nonlinear problem or from discretization of differential equations. If the dimension of the system is not too large, standard methods like Gaussian elimination or QR decomposition are sufficient. Systems with a tridiagonal matrix are important for cubic spline interpolation and numerical second derivatives; they can be solved very efficiently with a specialized Gaussian elimination method. Practical applications often involve very large dimensions and require iterative methods. Convergence of the Jacobi and Gauss-Seidel methods is slow and can be improved by relaxation or over-relaxation. An alternative for large systems is the method of conjugate gradients.
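The specialized Gaussian elimination for tridiagonal systems mentioned above is commonly known as the Thomas algorithm; it runs in O(n) instead of the O(n^3) of general elimination. A standard sketch:

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system by specialized Gaussian elimination
    (Thomas algorithm): sub-diagonal a (length n-1), diagonal b (length n),
    super-diagonal c (length n-1), right-hand side d (length n). O(n)."""
    n = len(b)
    cp = [0.0] * (n - 1)          # modified super-diagonal
    dp = [0.0] * n                # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):         # forward elimination
        m = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n                 # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: [[2,1,0],[1,2,1],[0,1,2]] x = [4,7,6] has solution [1, 2, 2].
x = thomas_solve([1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0], [4.0, 7.0, 6.0])
```

The algorithm is stable for the diagonally dominant matrices that arise from cubic splines and second-derivative stencils.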

  16. A study of methods to predict and measure the transmission of sound through the walls of light aircraft. Integration of certain singular boundary element integrals for applications in linear acoustics

    NASA Technical Reports Server (NTRS)

    Zimmerle, D.; Bernhard, R. J.

    1985-01-01

    An alternative method for performing singular boundary element integrals for applications in linear acoustics is discussed. The method separates the integral of the characteristic solution into a singular and a nonsingular part. The singular portion is integrated with a combination of analytic and numerical techniques, while the nonsingular portion is integrated with standard Gaussian quadrature. The method may be generalized to many types of subparametric elements. The integrals over elements containing the root node are considered, and the characteristic solution for linear acoustic problems is examined. The method may be generalized to most characteristic solutions.
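The singular/nonsingular splitting can be illustrated on a model 1-D integral with a logarithmic kernel (the type of singularity that appears in 2-D acoustic Green's functions). This is an illustrative sketch, not the paper's boundary-element code: the f(0)·ln(x) part is integrated analytically and the smooth remainder with standard Gauss-Legendre quadrature.

```python
import numpy as np

def singular_ln_integral(f, n=32):
    """Integrate f(x)*ln(x) over [0, 1] by splitting off the singularity:
    f(0)*ln(x) integrates analytically (integral of ln(x) on [0,1] is -1);
    the smooth remainder (f(x) - f(0))*ln(x) uses Gauss-Legendre quadrature."""
    x, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (x + 1.0)          # map nodes from [-1, 1] to [0, 1]
    w = 0.5 * w
    smooth = np.sum(w * (f(x) - f(0.0)) * np.log(x))
    return -f(0.0) + smooth

# Exact value of the integral of cos(x)*ln(x) on [0,1] is -Si(1) ≈ -0.946083.
value = singular_ln_integral(np.cos)
```

Naive quadrature applied directly to f(x)·ln(x) converges very slowly; the split restores the fast convergence of Gaussian quadrature because the remainder vanishes at the singular point.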

  17. Zirconium determination by cooling curve analysis during the pyroprocessing of used nuclear fuel

    NASA Astrophysics Data System (ADS)

    Westphal, B. R.; Price, J. C.; Bateman, K. J.; Marsden, K. C.

    2015-02-01

    An alternative method to sampling and chemical analyses has been developed to monitor the concentration of zirconium in real-time during the casting of uranium products from the pyroprocessing of used nuclear fuel. The method utilizes the solidification characteristics of the uranium products to determine zirconium levels based on standard cooling curve analyses and established binary phase diagram data. Numerous uranium products have been analyzed for their zirconium content and compared against measured zirconium data. From this data, the following equation was derived for the zirconium content of uranium products:

  18. Measurement of Antenna Bore-Sight Gain

    NASA Technical Reports Server (NTRS)

    Fortinberry, Jarrod; Shumpert, Thomas

    2016-01-01

    The absolute or free-field gain of a simple antenna can be approximated using standard antenna theory formulae; for a more accurate prediction, numerical methods may be employed to solve for antenna parameters, including gain. Both of these methods give reasonable estimates, but in practice antenna gain is usually verified and documented via measurements and calibration. In this paper, a relatively simple, low-cost, yet effective means of determining the bore-sight free-field gain of a VHF/UHF antenna is proposed by using the Brewster angle relationship.

  19. Divergence Free High Order Filter Methods for Multiscale Non-ideal MHD Flows

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, Bjoern

    2003-01-01

    Low-dissipative high order filter finite difference methods for long-time wave propagation of shock/turbulence/combustion compressible viscous MHD flows have been constructed. Several variants of the filter approach that cater to different flow types are proposed. These filters provide a natural and efficient way to minimize the divergence of the magnetic field (∇ · B) numerical error, in the sense that no standard divergence cleaning is required. For certain 2-D MHD test problems, divergence-free preservation of the magnetic fields by these filter schemes has been achieved.
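A discrete check of the ∇ · B = 0 constraint that such schemes are designed to preserve can be sketched with central differences. This is an illustrative diagnostic only, not the authors' filter scheme:

```python
import numpy as np

def div_b(bx, by, dx, dy):
    """Central-difference divergence of a 2-D field at interior grid points
    (axis 0 is x, axis 1 is y): d(bx)/dx + d(by)/dy."""
    dbx = (bx[2:, 1:-1] - bx[:-2, 1:-1]) / (2.0 * dx)
    dby = (by[1:-1, 2:] - by[1:-1, :-2]) / (2.0 * dy)
    return dbx + dby

# B = (-y, x) is divergence free, so the diagnostic should return zeros.
x = np.linspace(0.0, 1.0, 11)
X, Y = np.meshgrid(x, x, indexing="ij")
d = div_b(-Y, X, dx=0.1, dy=0.1)
```

In a production MHD code this quantity would be monitored over a long-time integration to verify that the filter keeps the divergence error at the level of the discretization.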

  20. High Order Filter Methods for the Non-ideal Compressible MHD Equations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, Bjoern

    2003-01-01

    The generalization of a class of low-dissipative high order filter finite difference methods for long time wave propagation of shock/turbulence/combustion compressible viscous gas dynamic flows to compressible MHD equations for structured curvilinear grids has been achieved. The new scheme is shown to provide a natural and efficient way for the minimization of the divergence of the magnetic field numerical error. Standard divergence cleaning is not required by the present filter approach. For certain non-ideal MHD test cases, divergence free preservation of the magnetic fields has been achieved.

  1. Boundary-element modelling of dynamics in external poroviscoelastic problems

    NASA Astrophysics Data System (ADS)

    Igumnov, L. A.; Litvinchuk, S. Yu; Ipatov, A. A.; Petrov, A. N.

    2018-04-01

    A problem of a spherical cavity in porous media is considered. The porous media are assumed to be isotropic poroelastic or isotropic poroviscoelastic. The poroviscoelastic formulation is treated as a combination of Biot’s theory of poroelasticity and the elastic-viscoelastic correspondence principle. Viscoelastic models such as Kelvin–Voigt, the standard linear solid, and a model with a weakly singular kernel are considered. The boundary fields are studied with the help of the boundary element method, using the direct approach. The numerical scheme is based on the collocation method, a regularized boundary integral equation, and the Radau time-stepping scheme.

  2. Divergence Free High Order Filter Methods for the Compressible MHD Equations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, Bjoern

    2003-01-01

    The generalization of a class of low-dissipative high order filter finite difference methods for long time wave propagation of shock/turbulence/combustion compressible viscous gas dynamic flows to compressible MHD equations for structured curvilinear grids has been achieved. The new scheme is shown to provide a natural and efficient way for the minimization of the divergence of the magnetic field numerical error. Standard divergence cleaning is not required by the present filter approach. For certain MHD test cases, divergence free preservation of the magnetic fields has been achieved.

  3. Efficient calculation of the polarizability: a simplified effective-energy technique

    NASA Astrophysics Data System (ADS)

    Berger, J. A.; Reining, L.; Sottile, F.

    2012-09-01

    In a recent publication [J.A. Berger, L. Reining, F. Sottile, Phys. Rev. B 82, 041103(R) (2010)] we introduced the effective-energy technique to calculate in an accurate and numerically efficient manner the GW self-energy as well as the polarizability, which is required to evaluate the screened Coulomb interaction W. In this work we show that the effective-energy technique can be used to further simplify the expression for the polarizability without a significant loss of accuracy. In contrast to standard sum-over-state methods where huge summations over empty states are required, our approach only requires summations over occupied states. The three simplest approximations we obtain for the polarizability are explicit functionals of an independent- or quasi-particle one-body reduced density matrix. We provide evidence of the numerical accuracy of this simplified effective-energy technique as well as an analysis of our method.

  4. New method to evaluate the 7Li(p, n)7Be reaction near threshold

    NASA Astrophysics Data System (ADS)

    Herrera, María S.; Moreno, Gustavo A.; Kreiner, Andrés J.

    2015-04-01

    In this work a complete description of the 7Li(p, n)7Be reaction near threshold is given using center-of-mass and relative coordinates. It is shown that this standard approach, not used before in this context, leads to a simple mathematical representation which gives easy access to all relevant quantities in the reaction and allows a precise numerical implementation. It also allows proton beam-energy spread effects to be included in a simple way. The method, implemented as a C++ code, was validated against both numerical and experimental data, finding good agreement. This tool is also used here to analyze scattered published measurements, such as (p, n) cross sections and differential and total neutron yields for thick targets. Using these data we derive a consistent set of parameters to evaluate neutron production near threshold. Sensitivity of the results to data uncertainty and the possibility of incorporating new measurements are also discussed.
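For orientation, the reaction threshold follows directly from the Q-value in the center-of-mass treatment the paper uses. The masses below are rounded values, so this is a back-of-the-envelope sketch rather than the paper's C++ implementation:

```python
# Nonrelativistic threshold for a proton striking a 7Li target at rest:
# E_th = -Q * (m_p + m_Li) / m_Li.
m_p, m_Li = 1.00728, 7.01600   # masses in u (rounded)
Q = -1.644                     # MeV, 7Li(p, n)7Be Q-value

E_th = -Q * (m_p + m_Li) / m_Li   # ~1.880 MeV, the well-known threshold
```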

  5. An Eulerian/Lagrangian coupling procedure for three-dimensional vortical flows

    NASA Technical Reports Server (NTRS)

    Felici, Helene M.; Drela, Mark

    1993-01-01

    A coupled Eulerian/Lagrangian method is presented for the reduction of numerical diffusion observed in solutions of 3D vortical flows using standard Eulerian finite-volume time-marching procedures. A Lagrangian particle tracking method, added to the Eulerian time-marching procedure, provides a correction of the Eulerian solution. In turn, the Eulerian solution is used to integrate the Lagrangian state-vector along the particle trajectories. While the Eulerian solution ensures the conservation of mass and sets the pressure field, the particle markers accurately describe the convection properties and enhance the vorticity and entropy capturing capabilities of the Eulerian solver. The Eulerian/Lagrangian coupling strategies are discussed and the combined scheme is tested on a constant stagnation pressure flow in a 90 deg bend and on a swirling pipe flow. As the numerical diffusion is reduced by the Lagrangian correction, a vorticity gradient augmentation is identified as a basic problem of this inviscid calculation.

  6. The mimetic finite difference method for the Landau–Lifshitz equation

    DOE PAGES

    Kim, Eugenia Hail; Lipnikov, Konstantin Nikolayevich

    2017-01-01

    The Landau–Lifshitz equation describes the dynamics of the magnetization inside ferromagnetic materials. This equation is highly nonlinear and has a non-convex constraint (the magnitude of the magnetization is constant) which poses interesting challenges in developing numerical methods. We develop and analyze explicit and implicit mimetic finite difference schemes for this equation. These schemes work on general polytopal meshes which provide enormous flexibility to model magnetic devices with various shapes. A projection on the unit sphere is used to preserve the magnitude of the magnetization. We also provide a proof that shows the exchange energy is decreasing in certain conditions. The developed schemes are tested on general meshes that include distorted and randomized meshes. The numerical experiments include a test proposed by the National Institute of Standards and Technology and a test showing formation of domain wall structures in a thin film.
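The projection step that restores the |m| = 1 constraint after each update is simple to state. A minimal sketch, assuming the magnetization is stored as an array of 3-vectors (the mimetic discretization itself is not reproduced):

```python
import numpy as np

def project_to_sphere(m):
    """Renormalize magnetization vectors of shape (..., 3) to unit length,
    restoring the non-convex |m| = 1 constraint after an update step."""
    return m / np.linalg.norm(m, axis=-1, keepdims=True)

# A (3, 0, 4) vector projects to (0.6, 0, 0.8) on the unit sphere.
m = project_to_sphere(np.array([[3.0, 0.0, 4.0]]))
```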

  7. The mimetic finite difference method for the Landau–Lifshitz equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Eugenia Hail; Lipnikov, Konstantin Nikolayevich

    The Landau–Lifshitz equation describes the dynamics of the magnetization inside ferromagnetic materials. This equation is highly nonlinear and has a non-convex constraint (the magnitude of the magnetization is constant) which poses interesting challenges in developing numerical methods. We develop and analyze explicit and implicit mimetic finite difference schemes for this equation. These schemes work on general polytopal meshes which provide enormous flexibility to model magnetic devices with various shapes. A projection on the unit sphere is used to preserve the magnitude of the magnetization. We also provide a proof that shows the exchange energy is decreasing in certain conditions. The developed schemes are tested on general meshes that include distorted and randomized meshes. The numerical experiments include a test proposed by the National Institute of Standards and Technology and a test showing formation of domain wall structures in a thin film.

  8. Analysis and Assessment of Environmental Load of Vending Machines by a LCA Method, and Eco-Improvement Effect

    NASA Astrophysics Data System (ADS)

    Kimura, Yukio; Sadamichi, Yucho; Maruyama, Naoki; Kato, Seizo

    These days the environmental impact due to the diffusion of vending machines (VMs) has been much discussed. This paper describes the numerical evaluation of the environmental impact by using the LCA (Life Cycle Assessment) scheme and then proposes an eco-improvement strategy toward environmentally conscious products (ECP). A new objective and universal consolidated method for LCA evaluation, called LCA-NETS (Numerical Eco-load Standardization), developed by the authors, is applied to the present issue. As a result, the environmental loads at the 5-year operation and material procurement stages are found to dominate the others over the life cycle. Further eco-improvement is realized by following the order of the LCA-NETS magnitude, namely: energy saving, materials reduction, parts reuse, and replacement with low-environmental-load materials. Above all, parts reuse is especially recommended for significant reduction of the environmental loads toward ECP.

  9. A Fractional PDE Approach to Turbulent Mixing; Part II: Numerical Simulation

    NASA Astrophysics Data System (ADS)

    Samiee, Mehdi; Zayernouri, Mohsen

    2016-11-01

    We propose a generalized fractional-order transport model of advection-diffusion kind with fractional time- and space-derivatives, governing the evolution of passive scalar turbulence. This approach allows one to incorporate the nonlocal and memory effects in the underlying anomalous diffusion, i.e., sub-to-standard diffusion to model the trapping of particles inside the eddies, and super-diffusion associated with the sudden jumps of particles from one coherent region to another. For this nonlocal model, we develop a high order numerical (spectral) method in addition to a fast solver, examined in the context of some canonical problems. PhD student, Department of Mechanical Engineering, and Department of Computational Mathematics, Science and Engineering.
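For contrast with the spectral method of the abstract, the most common finite-difference discretization of a fractional derivative of order α uses Grünwald-Letnikov weights, which are generated by a simple recurrence. A sketch (this is a standard discretization, not the authors' solver):

```python
def gl_coefficients(alpha, n):
    """First n Grunwald-Letnikov weights w_k = (-1)**k * C(alpha, k) for a
    finite-difference approximation of a fractional derivative of order
    alpha, generated by the recurrence w_k = w_{k-1} * (1 - (alpha+1)/k)."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

# Sanity check: for alpha = 1 the weights reduce to the ordinary first
# difference, [1.0, -1.0, 0.0, 0.0].
w = gl_coefficients(1.0, 4)
```

For non-integer α the weights decay slowly, which is exactly the nonlocal memory effect the abstract describes: every past value contributes to the derivative at the current point.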

  10. A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATIONAL FLUID DYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Luo; Yidong Xia; Robert Nourgaliev

    2011-05-01

    A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases, thus allowing a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction aims to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, although the least-squares reconstructed DG method provides the best performance in terms of accuracy, efficiency, and robustness.

  11. Mode Identification of High-Amplitude Pressure Waves in Liquid Rocket Engines

    NASA Astrophysics Data System (ADS)

    EBRAHIMI, R.; MAZAHERI, K.; GHAFOURIAN, A.

    2000-01-01

    Identification of existing instability modes from experimental pressure measurements of rocket engines is difficult, especially when steep waves are present. Actual pressure waves are often non-linear and include steep shocks followed by gradual expansions. It is generally believed that the interaction of these non-linear waves is difficult to analyze. A method of mode identification is introduced. After presumption of the constituent modes, they are superposed by using a standard finite difference scheme for the solution of the classical wave equation. Waves are numerically produced at each end of the combustion tube with different wavelengths, amplitudes, and phases with respect to each other. Pressure amplitude histories and phase diagrams along the tube are computed. To determine the validity of the presented method for steep non-linear waves, the Euler equations are numerically solved for non-linear waves, and negligible interactions between these waves are observed. To show the applicability of this method, others' experimental results in which modes were identified are used. Results indicate that this simple method can be used in analyzing complicated pressure signal measurements.
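The superposition step relies on a standard finite-difference scheme for the classical 1-D wave equation; a minimal leapfrog sketch (not the authors' code) follows. With the Courant number equal to one, the scheme propagates grid-aligned profiles exactly, which makes it easy to verify:

```python
import numpy as np

def wave_step(u_prev, u, c, dx, dt):
    """One leapfrog step of the standard finite-difference scheme for the
    1-D wave equation u_tt = c**2 * u_xx with fixed (zero) ends."""
    r2 = (c * dt / dx) ** 2
    u_next = np.empty_like(u)
    u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
    u_next[0] = u_next[-1] = 0.0
    return u_next

# A unit pulse moving right by one cell per step (Courant number = 1).
u_prev = np.zeros(10); u_prev[3] = 1.0
u = np.zeros(10); u[4] = 1.0
u_next = wave_step(u_prev, u, c=1.0, dx=1.0, dt=1.0)   # pulse now at index 5
```

In the mode-identification setting, several such waves launched from each end of the tube would be superposed and their amplitude and phase histories compared against the measured pressure signals.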

  12. GPUs, a New Tool of Acceleration in CFD: Efficiency and Reliability on Smoothed Particle Hydrodynamics Methods

    PubMed Central

    Crespo, Alejandro C.; Dominguez, Jose M.; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D.

    2011-01-01

    Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability. PMID:21695185

  13. Baryon Acoustic Oscillations reconstruction with pixels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Obuljen, Andrej; Villaescusa-Navarro, Francisco; Castorina, Emanuele

    2017-09-01

    Gravitational non-linear evolution induces a shift in the position of the baryon acoustic oscillations (BAO) peak together with a damping and broadening of its shape that bias and degrade the accuracy with which the position of the peak can be determined. BAO reconstruction is a technique developed to undo part of the effect of non-linearities. We present and analyse a reconstruction method that consists of displacing pixels instead of galaxies and whose implementation is easier than that of the standard reconstruction method. We show that this method is equivalent to the standard reconstruction technique in the limit where the number of pixels becomes very large. This method is particularly useful in surveys where individual galaxies are not resolved, as in 21cm intensity mapping observations. We validate this method by reconstructing mock pixelated maps, built from the distribution of matter and halos in real- and redshift-space, from a large set of numerical simulations. We find that this method is able to decrease the uncertainty in the BAO peak position by 30-50% over the typical angular resolution scales of 21 cm intensity mapping experiments.

  14. Theory and implementation of H-matrix based iterative and direct solvers for Helmholtz and elastodynamic oscillatory kernels

    NASA Astrophysics Data System (ADS)

    Chaillat, Stéphanie; Desiderio, Luca; Ciarlet, Patrick

    2017-12-01

    In this work, we study the accuracy and efficiency of hierarchical matrix (H-matrix) based fast methods for solving dense linear systems arising from the discretization of the 3D elastodynamic Green's tensors. It is well known in the literature that standard H-matrix based methods, although very efficient tools for asymptotically smooth kernels, are not optimal for oscillatory kernels. H2-matrix and directional approaches have been proposed to overcome this problem. However the implementation of such methods is much more involved than the standard H-matrix representation. The central questions we address are twofold. (i) What is the frequency-range in which the H-matrix format is an efficient representation for 3D elastodynamic problems? (ii) What can be expected of such an approach to model problems in mechanical engineering? We show that even though the method is not optimal (in the sense that more involved representations can lead to faster algorithms) an efficient solver can be easily developed. The capabilities of the method are illustrated on numerical examples using the Boundary Element Method.

  15. Modeling and Numerical Challenges in Eulerian-Lagrangian Computations of Shock-driven Multiphase Flows

    NASA Astrophysics Data System (ADS)

    Diggs, Angela; Balachandar, Sivaramakrishnan

    2015-06-01

    The present work addresses the numerical methods required for particle-gas and particle-particle interactions in Eulerian-Lagrangian simulations of multiphase flow. Local volume fraction as seen by each particle is the quantity of foremost importance in modeling and evaluating such interactions. We consider a general multiphase flow with a distribution of particles inside a fluid flow discretized on an Eulerian grid. Particle volume fraction is needed both as a Lagrangian quantity associated with each particle and also as an Eulerian quantity associated with the flow. In Eulerian Projection (EP) methods, the volume fraction is first obtained within each cell as an Eulerian quantity and then interpolated to each particle. In Lagrangian Projection (LP) methods, the particle volume fraction is obtained at each particle and then projected onto the Eulerian grid. Traditionally, EP methods are used in multiphase flow, but sub-grid resolution can be obtained through use of LP methods. By evaluating the total error and its components we compare the performance of EP and LP methods. The standard von Neumann error analysis technique has been adapted for rigorous evaluation of rate of convergence. The methods presented can be extended to obtain accurate field representations of other Lagrangian quantities. Most importantly, we will show that such careful attention to numerical methodologies is needed in order to capture complex shock interaction with a bed of particles. Supported by U.S. Department of Defense SMART Program and the U.S. Department of Energy PSAAP-II program under Contract No. DE-NA0002378.

  16. A Parallel Compact Multi-Dimensional Numerical Algorithm with Aeroacoustics Applications

    NASA Technical Reports Server (NTRS)

    Povitsky, Alex; Morris, Philip J.

    1999-01-01

    In this study we propose a novel method to parallelize high-order compact numerical algorithms for the solution of three-dimensional PDEs (Partial Differential Equations) in a space-time domain. For this numerical integration most of the computer time is spent in computation of spatial derivatives at each stage of the Runge-Kutta temporal update. The most efficient direct method to compute spatial derivatives on a serial computer is a version of Gaussian elimination for narrow linear banded systems known as the Thomas algorithm. In a straightforward pipelined implementation of the Thomas algorithm processors are idle due to the forward and backward recurrences of the Thomas algorithm. To utilize processors during this time, we propose to use them for either non-local data independent computations, solving lines in the next spatial direction, or local data-dependent computations by the Runge-Kutta method. To achieve this goal, control of processor communication and computations by a static schedule is adopted. Thus, our parallel code is driven by a communication and computation schedule instead of the usual "creative programming" approach. The obtained parallelization speed-up of the novel algorithm is about twice as much as that for the standard pipelined algorithm and close to that for the explicit DRP algorithm.
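    The serial kernel the abstract refers to, the Thomas algorithm for tridiagonal systems, can be sketched as follows; the forward and backward recurrences mentioned above are the two loops (this is a generic textbook version, not the authors' implementation).

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system A x = d.

    a: sub-diagonal (length n-1), b: diagonal (length n),
    c: super-diagonal (length n-1), d: right-hand side (length n).
    """
    n = len(b)
    cp = np.empty(n - 1)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]                 # forward elimination (recurrence)
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]                      # backward substitution (recurrence)
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

    It is precisely the serial data dependence of these two recurrences that idles processors in a naive pipelined implementation, motivating the static schedule described in the abstract.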

  17. Multi-fidelity uncertainty quantification in large-scale predictive simulations of turbulent flow

    NASA Astrophysics Data System (ADS)

    Geraci, Gianluca; Jofre-Cruanyes, Lluis; Iaccarino, Gianluca

    2017-11-01

    The performance characterization of complex engineering systems often relies on accurate, but computationally intensive numerical simulations. It is also well recognized that in order to obtain a reliable numerical prediction the propagation of uncertainties needs to be included. Therefore, Uncertainty Quantification (UQ) plays a fundamental role in building confidence in predictive science. Despite the great improvement in recent years, even the more advanced UQ algorithms are still limited to fairly simplified applications and only moderate parameter dimensionality. Moreover, in the case of extremely large dimensionality, sampling methods, i.e. Monte Carlo (MC) based approaches, appear to be the only viable alternative. In this talk we describe and compare a family of approaches which aim to accelerate the convergence of standard MC simulations. These methods are based on hierarchies of generalized numerical resolutions (multi-level) or model fidelities (multi-fidelity), and attempt to leverage the correlation between Low- and High-Fidelity (HF) models to obtain a more accurate statistical estimator without introducing additional HF realizations. The performance of these methods is assessed on an irradiated particle-laden turbulent flow (PSAAP II solar energy receiver). This investigation was funded by the United States Department of Energy's (DoE) National Nuclear Security Administration (NNSA) under the Predictive Science Academic Alliance Program (PSAAP) II at Stanford University.
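    The correlation-leveraging idea described above can be illustrated with a two-fidelity control-variate estimator; the model functions and sample sizes below are invented for the sketch and are not the solar-receiver models of the talk.

```python
import numpy as np

# Two-fidelity control-variate Monte Carlo: estimate E[hf(X)] using a few
# expensive HF evaluations corrected by many cheap, correlated LF samples.
rng = np.random.default_rng(0)

def hf(x):   # "high-fidelity" model (assumed for illustration)
    return np.sin(x) + 0.05 * x**2

def lf(x):   # cheap, correlated "low-fidelity" surrogate (assumed)
    return np.sin(x)

x_hf = rng.uniform(0.0, 1.0, 100)       # few expensive samples
x_lf = rng.uniform(0.0, 1.0, 100_000)   # many cheap samples

yh, yl = hf(x_hf), lf(x_hf)
# optimal control-variate weight: cov(HF, LF) / var(LF)
alpha = np.cov(yh, yl)[0, 1] / np.var(yl, ddof=1)

# HF mean estimate, corrected by the accurately known LF mean
est = yh.mean() + alpha * (lf(x_lf).mean() - yl.mean())
```

    The stronger the LF/HF correlation, the more the correction term cancels the sampling noise of the small HF batch, which is exactly the mechanism the multi-level and multi-fidelity hierarchies exploit.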

  18. A high-resolution Godunov method for compressible multi-material flow on overlapping grids

    NASA Astrophysics Data System (ADS)

    Banks, J. W.; Schwendeman, D. W.; Kapila, A. K.; Henshaw, W. D.

    2007-04-01

    A numerical method is described for inviscid, compressible, multi-material flow in two space dimensions. The flow is governed by the multi-material Euler equations with a general mixture equation of state. Composite overlapping grids are used to handle complex flow geometry and block-structured adaptive mesh refinement (AMR) is used to locally increase grid resolution near shocks and material interfaces. The discretization of the governing equations is based on a high-resolution Godunov method, but includes an energy correction designed to suppress numerical errors that develop near a material interface for standard, conservative shock-capturing schemes. The energy correction is constructed based on a uniform-pressure-velocity flow and is significant only near the captured interface. A variety of two-material flows are presented to verify the accuracy of the numerical approach and to illustrate its use. These flows assume an equation of state for the mixture based on the Jones-Wilkins-Lee (JWL) forms for the components. This equation of state includes a mixture of ideal gases as a special case. Flow problems considered include unsteady one-dimensional shock-interface collision, steady interaction of a planar interface and an oblique shock, planar shock interaction with a collection of gas-filled cylindrical inhomogeneities, and the impulsive motion of the two-component mixture in a rigid cylindrical vessel.

  19. Numerical dissipation vs. subgrid-scale modelling for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Dairay, Thibault; Lamballais, Eric; Laizet, Sylvain; Vassilicos, John Christos

    2017-05-01

    This study presents an alternative way to perform large eddy simulation based on a targeted numerical dissipation introduced by the discretization of the viscous term. It is shown that this regularisation technique is equivalent to the use of spectral vanishing viscosity. The flexibility of the method ensures high-order accuracy while controlling the level and spectral features of this purely numerical viscosity. A Pao-like spectral closure based on physical arguments is used to scale this numerical viscosity a priori. It is shown that this way of approaching large eddy simulation is more efficient and accurate than the use of the very popular Smagorinsky model in both its standard and dynamic versions. The main strength of being able to correctly calibrate numerical dissipation is the possibility to regularise the solution at the mesh scale. Thanks to this property, it is shown that the solution can be seen as numerically converged. Conversely, the two versions of the Smagorinsky model are found unable to ensure regularisation while showing a strong sensitivity to numerical errors. The originality of the present approach is that it can be viewed as implicit large eddy simulation, in the sense that the numerical error is the source of artificial dissipation, but also as explicit subgrid-scale modelling, because of the equivalence with spectral viscosity prescribed on a physical basis.

  20. A New Numerical Scheme for Cosmic-Ray Transport

    NASA Astrophysics Data System (ADS)

    Jiang, Yan-Fei; Oh, S. Peng

    2018-02-01

    Numerical solutions of the cosmic-ray (CR) magnetohydrodynamic equations are dogged by a powerful numerical instability, which arises from the constraint that CRs can only stream down their gradient. The standard cure is to regularize by adding artificial diffusion. Besides introducing ad hoc smoothing, this has a significant negative impact on either computational cost or complexity and parallel scalings. We describe a new numerical algorithm for CR transport, with close parallels to two-moment methods for radiative transfer under the reduced speed of light approximation. It stably and robustly handles CR streaming without any artificial diffusion. It allows for both isotropic and field-aligned CR streaming and diffusion, with arbitrary streaming and diffusion coefficients. CR transport is handled explicitly, while source terms are handled implicitly. The overall time step scales linearly with resolution (even when computing CR diffusion) and has a perfect parallel scaling. It is given by the standard Courant condition with respect to a constant maximum velocity over the entire simulation domain. The computational cost is comparable to that of solving the ideal MHD equation. We demonstrate the accuracy and stability of this new scheme with a wide variety of tests, including anisotropic streaming and diffusion tests, CR-modified shocks, CR-driven blast waves, and CR transport in multiphase media. The new algorithm opens doors to much more ambitious and hitherto intractable calculations of CR physics in galaxies and galaxy clusters. It can also be applied to other physical processes with similar mathematical structure, such as saturated, anisotropic heat conduction.

  1. Satellite Sounder Data Assimilation for Improving Alaska Region Weather Forecast

    NASA Technical Reports Server (NTRS)

    Zhu, Jiang; Stevens, E.; Zavodsky, B. T.; Zhang, X.; Heinrichs, T.; Broderson, D.

    2014-01-01

    Data assimilation has been demonstrated to be very useful in improving both global and regional numerical weather prediction. Alaska has a very coarse network of surface observation sites; on the other hand, it receives many more satellite overpasses than the lower 48 states. How to utilize satellite data to improve numerical prediction is one of the hot topics in the Alaska weather forecasting community. The Geographic Information Network of Alaska (GINA) at the University of Alaska is conducting a study on satellite data assimilation for the WRF model. AIRS/CrIS sounder profile data are assimilated into the initial condition of the customized regional WRF model (GINA-WRF model). Normalized standard deviation, RMSE, and correlation statistical analysis methods are applied to one case of 48-hour forecasts and one month of 24-hour forecasts in order to evaluate the improvement of the regional numerical model from data assimilation. The final goal of the research is to provide improved real-time short-term forecasts for the Alaska region.

  2. Numerical convergence of the self-diffusion coefficient and viscosity obtained with Thomas-Fermi-Dirac molecular dynamics.

    PubMed

    Danel, J-F; Kazandjian, L; Zérah, G

    2012-06-01

    Computations of the self-diffusion coefficient and viscosity in warm dense matter are presented with an emphasis on obtaining numerical convergence and a careful evaluation of the standard deviation. The transport coefficients are computed with the Green-Kubo relation and orbital-free molecular dynamics at the Thomas-Fermi-Dirac level. The numerical parameters are varied until the Green-Kubo integral is equal to a constant in the t→+∞ limit; the transport coefficients are deduced from this constant and not by extrapolation of the Green-Kubo integral. The latter method, which gives rise to an unknown error, is tested for the computation of viscosity; it appears that it should be used with caution. In the large domain of coupling constant considered, both the self-diffusion coefficient and viscosity turn out to be well approximated by simple analytical laws using a single effective atomic number calculated in the average-atom model.
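    The Green-Kubo procedure described above, reading the transport coefficient off the plateau of the running integral rather than extrapolating, can be sketched on synthetic data; here the "molecular dynamics" velocities are replaced by an Ornstein-Uhlenbeck process (an assumption for illustration), whose exact self-diffusion coefficient D = kT/(mγ) is known.

```python
import numpy as np

# Green-Kubo: D = (1/3) * integral of the velocity autocorrelation <v(0).v(t)>.
rng = np.random.default_rng(1)
dt, n, gamma, kT_m = 0.01, 100_000, 2.0, 1.0   # assumed parameters (kT_m = kT/m)

# Synthetic 3-component velocity series: dv = -gamma v dt + sqrt(2 gamma kT/m) dW
v = np.zeros((n, 3))
noise = rng.normal(size=(n - 1, 3))
sig = np.sqrt(2.0 * gamma * kT_m * dt)
for i in range(1, n):
    v[i] = v[i - 1] - gamma * v[i - 1] * dt + sig * noise[i - 1]

# velocity autocorrelation function up to a lag where it has decayed to ~0
lags = 500
vacf = np.array([np.mean(np.sum(v[:n - l] * v[l:], axis=1)) for l in range(lags)])

# trapezoidal Green-Kubo integral, evaluated at its large-t plateau
D = (vacf.sum() - 0.5 * (vacf[0] + vacf[-1])) * dt / 3.0
# exact value for this process: D = kT_m / gamma = 0.5
```

    As the abstract cautions, the key check is that the running integral has actually reached a constant over the lag window; extrapolating before the plateau introduces an unknown error.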

  3. Numerical convergence of the self-diffusion coefficient and viscosity obtained with Thomas-Fermi-Dirac molecular dynamics

    NASA Astrophysics Data System (ADS)

    Danel, J.-F.; Kazandjian, L.; Zérah, G.

    2012-06-01

    Computations of the self-diffusion coefficient and viscosity in warm dense matter are presented with an emphasis on obtaining numerical convergence and a careful evaluation of the standard deviation. The transport coefficients are computed with the Green-Kubo relation and orbital-free molecular dynamics at the Thomas-Fermi-Dirac level. The numerical parameters are varied until the Green-Kubo integral is equal to a constant in the t→+∞ limit; the transport coefficients are deduced from this constant and not by extrapolation of the Green-Kubo integral. The latter method, which gives rise to an unknown error, is tested for the computation of viscosity; it appears that it should be used with caution. In the large domain of coupling constant considered, both the self-diffusion coefficient and viscosity turn out to be well approximated by simple analytical laws using a single effective atomic number calculated in the average-atom model.

  4. The SIR model of Zika virus disease outbreak in Brazil at year 2015

    NASA Astrophysics Data System (ADS)

    Aik, Lim Eng; Kiang, Lam Chee; Hong, Tan Wei; Abu, Mohd Syafarudy

    2017-05-01

    This research study demonstrates a numerical model for understanding the spread of the 2015 Zika virus disease outbreak using the standard SIR framework. In modeling virulent disease dynamics, it is important to explore whether the disease spread could reach a pandemic level or could be eradicated. Data from the 2015 Zika virus outbreak are used, and Brazil, where the outbreak began, is considered in this study. A three-dimensional system of nonlinear differential equations is formulated and solved numerically using Euler's method in MS Excel. The study shows that, with public health interventions, the effective reproduction number can be reduced, making it feasible for the outbreak to die out. It is additionally shown numerically that the epidemic can only die out when there are no newly infected people in the population.
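    The SIR system integrated with Euler's method, as the abstract describes doing in a spreadsheet, can be sketched as follows; the transmission and recovery rates and the population size are illustrative values, not the study's fitted Zika parameters.

```python
# Euler integration of the SIR model: S' = -beta*S*I/N, I' = beta*S*I/N - gamma*I,
# R' = gamma*I. Parameters below are assumed for illustration.
beta, gamma = 0.5, 0.2          # transmission and recovery rates (assumed)
N = 1_000_000                   # population size (assumed)
S, I, R = N - 1.0, 1.0, 0.0     # one initial infected individual
dt, steps = 0.1, 3000
for _ in range(steps):
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    S, I, R = S + dS * dt, I + dI * dt, R + dR * dt
# Basic reproduction number R0 = beta/gamma = 2.5 here; the epidemic dies
# out (I -> 0) once susceptibles fall below the threshold S/N < 1/R0.
```

    Reducing beta through public health interventions lowers the effective reproduction number, which is exactly the mechanism by which the abstract's model lets the outbreak die out.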

  5. Semiclassical evaluation of quantum fidelity

    NASA Astrophysics Data System (ADS)

    Vaníček, Jiří; Heller, Eric J.

    2003-11-01

    We present a numerically feasible semiclassical (SC) method to evaluate quantum fidelity decay (Loschmidt echo) in a classically chaotic system. It was thought that such evaluation would be intractable, but instead we show that a uniform SC expression not only is tractable but it also gives remarkably accurate numerical results for the standard map in both the Fermi-golden-rule and Lyapunov regimes. Because it allows Monte Carlo evaluation, the uniform expression is accurate at times when there are 10⁷⁰ semiclassical contributions. Remarkably, it also explicitly contains the “building blocks” of analytical theories of recent literature, and thus permits a direct test of the approximations made by other authors in these regimes, rather than an a posteriori comparison with numerical results. We explain in more detail the extended validity of the classical perturbation approximation and show that within this approximation, the so-called “diagonal approximation” is automatic and does not require ensemble averaging.

  6. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    PubMed

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown for the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
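    The extrapolation idea can be illustrated schematically: if an estimator behaves at large t and N as ψ(t, N) ≈ ψ∞ + a/t + b/N (leading finite-time and finite-size corrections, assumed here for the sketch), the joint infinite-time, infinite-size value ψ∞ is recovered by a least-squares fit. The data below are synthetic, not the paper's contact-process results.

```python
import numpy as np

# Fit psi(t, N) = psi_inf + a/t + b/N on a grid of simulation times and
# population sizes, then read off the extrapolated value psi_inf.
ts = np.array([50.0, 100.0, 200.0, 400.0])      # simulation times (assumed)
Ns = np.array([100.0, 200.0, 400.0, 800.0])     # population sizes (assumed)
psi_inf, a, b = -0.3, 2.0, 5.0                  # synthetic "true" values

T, Nn = np.meshgrid(ts, Ns, indexing="ij")
psi = psi_inf + a / T + b / Nn                  # synthetic estimator values

# linear least squares in the basis {1, 1/t, 1/N}
A = np.column_stack([np.ones(psi.size), 1.0 / T.ravel(), 1.0 / Nn.ravel()])
coef, *_ = np.linalg.lstsq(A, psi.ravel(), rcond=None)
# coef[0] is the extrapolated infinite-time, infinite-size estimate
```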

  7. Numerical prediction of pollutant dispersion and transport in an atmospheric boundary layer

    NASA Astrophysics Data System (ADS)

    Zeoli, Stéphanie; Bricteux, Laurent; Mech. Eng. Dpt. Team

    2014-11-01

    The ability to accurately predict concentration levels of air pollutants released from point sources is required in order to determine their environmental impact. A wall-modeled large-eddy simulation (WMLES) of the ABL is performed using the OpenFoam-based solver SOWFA (Churchfield and Lee, NREL). It uses the Boussinesq approximation for buoyancy effects and takes Coriolis forces into account. A synthetic eddy method is proposed to properly model the turbulent inlet velocity boundary conditions. This method will be compared with the standard pressure gradient forcing. WMLES is usually performed using a standard Smagorinsky model or its dynamic version. It is proposed here to investigate a subgrid scale (SGS) model with a better spectral behavior. To this end, a regularized variational multiscale (RVM) model (Jeanmart and Winckelmans, 2007) is implemented together with a standard wall function in order to preserve the dynamics of the large scales within the Ekman layer. The influence of the improved SGS model on the wind simulation and scalar transport will be discussed based on turbulence diagnostics.

  8. A Priori Subgrid Scale Modeling for a Droplet Laden Temporal Mixing Layer

    NASA Technical Reports Server (NTRS)

    Okongo, Nora; Bellan, Josette

    2000-01-01

    Subgrid analysis of a transitional temporal mixing layer with evaporating droplets has been performed using a direct numerical simulation (DNS) database. The DNS is for a Reynolds number (based on initial vorticity thickness) of 600, with droplet mass loading of 0.2. The gas phase is computed using a Eulerian formulation, with Lagrangian droplet tracking. Since Large Eddy Simulation (LES) of this flow requires the computation of unfiltered gas-phase variables at droplet locations from filtered gas-phase variables at the grid points, it is proposed to model these by assuming the gas-phase variables to be given by the filtered variables plus a correction based on the filtered standard deviation, which can be computed from the sub-grid scale (SGS) standard deviation. This model predicts unfiltered variables at droplet locations better than simply interpolating the filtered variables. Three methods are investigated for modeling the SGS standard deviation: Smagorinsky, gradient and scale-similarity. When properly calibrated, the gradient and scale-similarity methods give results in excellent agreement with the DNS.

  9. Numerical determination of Paris law constants for carbon steel using a two-scale model

    NASA Astrophysics Data System (ADS)

    Mlikota, M.; Staib, S.; Schmauder, S.; Božić, Ž.

    2017-05-01

    For most engineering alloys, long fatigue crack growth under a given stress level can be described by the Paris law. The law provides a correlation between the fatigue crack growth rate (FCGR or da/dN), the range of the stress intensity factor (ΔK), and the material constants C and m. A well-established test procedure is typically used to determine the Paris law constants C and m, considering standard specimens, notched and pre-cracked. All the details necessary to obtain feasible and comparable Paris law constants are covered by standards. However, these expensive tests can be replaced by appropriate numerical calculations. In this respect, this paper deals with the numerical determination of the Paris law constants for carbon steel using a two-scale model. A micro-model containing the microstructure of the material is generated using the Finite Element Method (FEM) to calculate the fatigue crack growth rate at the crack tip. The model is based on the Tanaka-Mura equation. A macro-model, on the other hand, serves for the calculation of the stress intensity factor. The analysis yields a relationship between the crack growth rates and the stress intensity factors for defined crack lengths, which is then used to determine the Paris law constants.
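    Once pairs of (ΔK, da/dN) values are available, whether from experiment or, as in the paper, from a two-scale FEM model, the Paris constants follow from a linear fit in log-log space, since da/dN = C (ΔK)^m implies log(da/dN) = log C + m log(ΔK). The data below are synthetic, chosen only to demonstrate the fit.

```python
import numpy as np

# Synthetic crack-growth data generated from known Paris constants.
dK = np.array([10.0, 15.0, 20.0, 30.0, 40.0])   # MPa*sqrt(m), assumed values
C_true, m_true = 1e-11, 3.0                      # assumed material constants
dadN = C_true * dK**m_true                       # m/cycle

# Linear least squares on log(da/dN) = log C + m * log(dK)
m_fit, logC_fit = np.polyfit(np.log(dK), np.log(dadN), 1)
C_fit = np.exp(logC_fit)
```

    With real (noisy) data the same fit yields C and m with scatter, which is why the standards prescribe the specimen geometry and measurement details so tightly.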

  10. A Dynamic Approach to Monitoring Particle Fallout in a Cleanroom Environment

    NASA Technical Reports Server (NTRS)

    Perry, Radford L., III

    2010-01-01

    This slide presentation discusses a mathematical model for monitoring particle fallout in a cleanroom. Cleanliness levels are not linear, so they do not increase simply with cleanroom class or exposure time, and the activity level also impacts the effective cleanroom class. The numerical method presented leads to a simple Class-hour formulation that allows dynamic monitoring of particle fallout using a standard air particle counter.

  11. Guidelines for standard preclinical experiments in the mouse model of myasthenia gravis induced by acetylcholine receptor immunization.

    PubMed

    Tuzun, Erdem; Berrih-Aknin, Sonia; Brenner, Talma; Kusner, Linda L; Le Panse, Rozen; Yang, Huan; Tzartos, Socrates; Christadoss, Premkumar

    2015-08-01

    Myasthenia gravis (MG) is an autoimmune disorder characterized by generalized muscle weakness due to neuromuscular junction (NMJ) dysfunction brought about by acetylcholine receptor (AChR) antibodies in most cases. Although steroids and other immunosuppressants are effectively used for treatment of MG, these medications often cause severe side effects and a complete remission cannot be obtained in many cases. For pre-clinical evaluation of more effective and less toxic treatment methods for MG, experimental autoimmune myasthenia gravis (EAMG) induced by Torpedo AChR immunization has become one of the standard animal models. Although numerous compounds have been recently proposed for MG, mostly by using the active immunization EAMG model, only a few have been proven to be effective in MG patients. The variability in the experimental design, immunization methods and outcome measurements of pre-clinical EAMG studies makes it difficult to interpret the published reports and assess the potential for application to MG patients. In an effort to standardize the active immunization EAMG model, we propose standard procedures for animal care conditions, sampling and randomization of mice, experimental design and outcome measures. Utilization of these standard procedures might improve the power of pre-clinical EAMG experiments and increase the chances for identifying promising novel treatment methods that can be effectively translated into clinical trials for MG. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Non-additive non-interacting kinetic energy of rare gas dimers

    NASA Astrophysics Data System (ADS)

    Jiang, Kaili; Nafziger, Jonathan; Wasserman, Adam

    2018-03-01

    Approximations of the non-additive non-interacting kinetic energy (NAKE) as an explicit functional of the density are the basis of several electronic structure methods that provide improved computational efficiency over standard Kohn-Sham calculations. However, within most fragment-based formalisms, there is no unique exact NAKE, making it difficult to develop general, robust approximations for it. When adjustments are made to the embedding formalisms to guarantee uniqueness, approximate functionals may be more meaningfully compared to the exact unique NAKE. We use numerically accurate inversions to study the exact NAKE of several rare-gas dimers within partition density functional theory, a method that provides the uniqueness for the exact NAKE. We find that the NAKE decreases nearly exponentially with atomic separation for the rare-gas dimers. We compute the logarithmic derivative of the NAKE with respect to the bond length for our numerically accurate inversions as well as for several approximate NAKE functionals. We show that standard approximate NAKE functionals do not reproduce the correct behavior for this logarithmic derivative and propose two new NAKE functionals that do. The first of these is based on a re-parametrization of a conjoint Perdew-Burke-Ernzerhof (PBE) functional. The second is a simple, physically motivated non-decomposable NAKE functional that matches the asymptotic decay constant without fitting.

  13. An RBF-FD closest point method for solving PDEs on surfaces

    NASA Astrophysics Data System (ADS)

    Petras, A.; Ling, L.; Ruuth, S. J.

    2018-10-01

    Partial differential equations (PDEs) on surfaces appear in many applications throughout the natural and applied sciences. The classical closest point method (Ruuth and Merriman (2008) [17]) is an embedding method for solving PDEs on surfaces using standard finite difference schemes. In this paper, we formulate an explicit closest point method using finite difference schemes derived from radial basis functions (RBF-FD). Unlike the orthogonal gradients method (Piret (2012) [22]), our proposed method uses RBF centers on regular grid nodes. This formulation not only reduces the computational cost but also avoids the ill-conditioning from point clustering on the surface and is more natural to couple with a grid based manifold evolution algorithm (Leung and Zhao (2009) [26]). When compared to the standard finite difference discretization of the closest point method, the proposed method requires a smaller computational domain surrounding the surface, resulting in a decrease in the number of sampling points on the surface. In addition, higher-order schemes can easily be constructed by increasing the number of points in the RBF-FD stencil. Applications to a variety of examples are provided to illustrate the numerical convergence of the method.

  14. Measurement properties of gingival biotype evaluation methods.

    PubMed

    Alves, Patrick Henry Machado; Alves, Thereza Cristina Lira Pacheco; Pegoraro, Thiago Amadei; Costa, Yuri Martins; Bonfante, Estevam Augusto; de Almeida, Ana Lúcia Pompéia Fraga

    2018-06-01

    There are numerous methods to measure the dimensions of the gingival tissue, but few studies have compared the effectiveness of one method over another. This study aimed to describe a new method and to estimate the validity of gingival biotype assessment with the aid of computed tomography scanning (CTS). In each patient, different methods of evaluating the gingival thickness were used: transparency of the periodontal probe, the transgingival method, photography, and a new CTS method. Intrarater and interrater reliability considering the categorical classification of the gingival biotype were estimated with Cohen's kappa coefficient, the intraclass correlation coefficient (ICC), and ANOVA (P < .05). The criterion validity of the CTS was determined using the transgingival method as the reference standard. Sensitivity and specificity values were computed along with their 95% CIs. Twelve patients were subjected to assessment of their gingival thickness. The highest agreement was found between the transgingival method and CTS (86.1%). The comparison between the categorical classifications of CTS and the transgingival method (reference standard) showed high specificity (94.92%) and low sensitivity (53.85%) for the definition of a thin biotype. The new CTS assessment method for classifying gingival tissue thickness can be considered reliable and clinically useful for diagnosing thick biotype. © 2018 Wiley Periodicals, Inc.
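    The validity statistics quoted above come from a standard 2x2 confusion table against the reference standard. The counts below are hypothetical, chosen only so that the ratios reproduce the abstract's reported 53.85% sensitivity and 94.92% specificity; they are not the study's raw data.

```python
# 2x2 confusion table: CTS classification vs. transgingival reference standard,
# with "thin biotype" as the positive class. Counts are invented for illustration.
tp, fn = 7, 6       # thin biotypes correctly / incorrectly classified by CTS
tn, fp = 56, 3      # thick biotypes correctly / incorrectly classified by CTS

sensitivity = tp / (tp + fn)              # proportion of thin biotypes detected
specificity = tn / (tn + fp)              # proportion of thick biotypes detected
accuracy = (tp + tn) / (tp + fn + tn + fp)
```

    High specificity with low sensitivity, as reported, means CTS rarely mislabels a thick biotype but misses a substantial fraction of thin ones, which is why the authors recommend it for diagnosing the thick biotype.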

  15. Enthalpy-based multiple-relaxation-time lattice Boltzmann method for solid-liquid phase-change heat transfer in metal foams.

    PubMed

    Liu, Qing; He, Ya-Ling; Li, Qing

    2017-08-01

    In this paper, an enthalpy-based multiple-relaxation-time (MRT) lattice Boltzmann (LB) method is developed for solid-liquid phase-change heat transfer in metal foams under the local thermal nonequilibrium (LTNE) condition. The enthalpy-based MRT-LB method consists of three different MRT-LB models: one for flow field based on the generalized non-Darcy model, and the other two for phase-change material (PCM) and metal-foam temperature fields described by the LTNE model. The moving solid-liquid phase interface is implicitly tracked through the liquid fraction, which is simultaneously obtained when the energy equations of PCM and metal foam are solved. The present method has several distinctive features. First, as compared with previous studies, the present method avoids the iteration procedure; thus it retains the inherent merits of the standard LB method and is superior to the iteration method in terms of accuracy and computational efficiency. Second, a volumetric LB scheme instead of the bounce-back scheme is employed to realize the no-slip velocity condition in the interface and solid phase regions, which is consistent with the actual situation. Last but not least, the MRT collision model is employed, and with additional degrees of freedom, it has the ability to reduce the numerical diffusion across the phase interface induced by solid-liquid phase change. Numerical tests demonstrate that the present method can serve as an accurate and efficient numerical tool for studying metal-foam enhanced solid-liquid phase-change heat transfer in latent heat storage. Finally, comparisons and discussions are made to offer useful information for practical applications of the present method.

  17. Uncertainty Evaluation of the New Setup for Measurement of Water-Vapor Permeation Rate by a Dew-Point Sensor

    NASA Astrophysics Data System (ADS)

    Hudoklin, D.; Šetina, J.; Drnovšek, J.

    2012-09-01

    The measurement of the water-vapor permeation rate (WVPR) through materials is very important in many industrial applications, such as the development of new fabrics and construction materials, the semiconductor industry, packaging, and vacuum techniques. The demand for this kind of measurement is growing considerably, and many different methods for measuring the WVPR have therefore been developed and standardized within numerous national and international standards. However, comparison of existing methods shows a low level of mutual agreement. The objective of this paper is to demonstrate the necessary uncertainty evaluation for WVPR measurements, so as to provide a basis for development of a corresponding reference measurement standard. This paper presents a specially developed measurement setup, which employs a precision dew-point sensor for WVPR measurements on specimens of different shapes. The paper also presents a physical model that tries to account for both dynamic and quasi-static methods, the common types of WVPR measurements referred to in standards and scientific publications. An uncertainty evaluation carried out according to the ISO/IEC guide to the expression of uncertainty in measurement (GUM) shows the relative expanded (k = 2) uncertainty to be 3.0 % for a WVPR of 6.71 mg·h⁻¹ (corresponding to a permeance of 30.4 mg·m⁻²·day⁻¹·hPa⁻¹).
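    The GUM evaluation cited here combines independent input uncertainties by root-sum-of-squares of sensitivity-weighted terms, then multiplies by the coverage factor. A generic sketch (the contribution list in the test is hypothetical, not the paper's actual uncertainty budget):

    ```python
    import math

    def expanded_uncertainty(contributions, k=2.0):
        """GUM-style combination of independent inputs: root-sum-of-squares
        of (sensitivity coefficient * standard uncertainty), expanded by k."""
        u_c = math.sqrt(sum((c * u) ** 2 for c, u in contributions))
        return k * u_c
    ```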

  18. Variation in predicting pantograph-catenary interaction contact forces, numerical simulations and field measurements

    NASA Astrophysics Data System (ADS)

    Nåvik, Petter; Rønnquist, Anders; Stichel, Sebastian

    2017-09-01

    The contact force between the pantograph and the contact wire ensures energy transfer between the two. Too small a force leads to arcing and unstable energy transfer, while too large a force leads to unnecessary wear on both parts. Thus, obtaining the correct contact force is important for both field measurements and estimates using numerical analysis. The field contact-force time series is derived from measurements performed by a self-propelled diagnostic vehicle containing overhead line recording equipment. The measurements are not sampled at the actual contact surface of the interaction but by force transducers beneath the collector strips. Methods exist for obtaining more realistic measurements by adding inertia and aerodynamic effects to the measurements. The variation in predicting the pantograph-catenary interaction contact force is studied in this paper by evaluating the effect of the force sampling location and the effects of signal processing such as filtering. A numerical model validated by field measurements is used to study these effects. First, this paper shows that the numerical model can reproduce a train passage with high accuracy. Second, this study introduces three different options for contact force predictions from numerical simulations. Third, this paper demonstrates that the standard deviation and the maximum and minimum values of the contact force are sensitive to a low-pass filter. For a specific case, an 80 Hz cut-off frequency is compared to the 20 Hz cut-off frequency required by EN 50317:2012; the results show an 11% increase in standard deviation, a 36% increase in the maximum value and a 19% decrease in the minimum value.
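    The sensitivity of the contact-force statistics to the cut-off frequency can be illustrated with a toy signal: a slow force variation plus a higher-frequency ripple, filtered at two cut-offs. This sketch uses a simple first-order IIR low-pass rather than the filtering prescribed by EN 50317, and the signal is synthetic, not measured data:

    ```python
    import math
    import statistics

    def lowpass(signal, dt, fc):
        """First-order IIR low-pass with cut-off frequency fc in Hz."""
        alpha = dt / (dt + 1.0 / (2.0 * math.pi * fc))
        out, y = [], signal[0]
        for x in signal:
            y += alpha * (x - y)
            out.append(y)
        return out

    dt = 1e-3                                   # 1 kHz sampling
    t = [i * dt for i in range(5000)]
    # synthetic "contact force": slow 1 Hz variation plus a 50 Hz ripple
    force = [120.0 + 30.0 * math.sin(2 * math.pi * 1.0 * ti)
             + 15.0 * math.sin(2 * math.pi * 50.0 * ti) for ti in t]
    std80 = statistics.pstdev(lowpass(force, dt, 80.0))
    std20 = statistics.pstdev(lowpass(force, dt, 20.0))
    # the 20 Hz cut-off removes more of the ripple, shrinking the standard deviation
    ```

    The same mechanism explains why a statistic such as the standard deviation grows when the cut-off is raised from 20 Hz to 80 Hz: more of the high-frequency content survives the filter.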

  19. Simple method for the generation of multiple homogeneous field volumes inside the bore of superconducting magnets.

    PubMed

    Chou, Ching-Yu; Ferrage, Fabien; Aubert, Guy; Sakellariou, Dimitris

    2015-07-17

    Standard Magnetic Resonance magnets produce a single homogeneous field volume, where the analysis is performed. Nonetheless, several modern applications could benefit from the generation of multiple homogeneous field volumes along the axis and inside the bore of the magnet. In this communication, we propose a straightforward method using a combination of ring structures of permanent magnets in order to cancel the gradient of the stray field in a series of distinct volumes. These concepts were demonstrated numerically on an experimentally measured magnetic field profile. We discuss advantages and limitations of our method and present the key steps required for an experimental validation.
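    The key idea, cancelling the stray-field gradient by superposing ring fields, can be checked numerically with the textbook on-axis field of a circular loop: by symmetry, the gradient of two identical coaxial rings vanishes at their midpoint. A sketch in normalized units (current-loop model, not the authors' permanent-magnet design):

    ```python
    def ring_Bz(z, R=1.0, B0=1.0):
        """On-axis axial field of a circular ring lying in the plane z = 0
        (current-loop model, normalized units)."""
        return B0 * R ** 3 / (R ** 2 + z ** 2) ** 1.5

    def pair_Bz(z, z1, z2):
        """Superposition of two identical coaxial rings at z1 and z2."""
        return ring_Bz(z - z1) + ring_Bz(z - z2)

    # central-difference gradient at the midpoint between the rings
    z1, z2 = 0.3, 1.1
    zm, h = 0.5 * (z1 + z2), 1e-6
    grad = (pair_Bz(zm + h, z1, z2) - pair_Bz(zm - h, z1, z2)) / (2 * h)
    # by symmetry the two gradients cancel: a locally homogeneous field volume
    ```

    Repeating such ring pairs along the bore axis is, in spirit, how a series of distinct homogeneous volumes can be generated.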

  20. Dynamic Monitoring of Cleanroom Fallout Using an Air Particle Counter

    NASA Technical Reports Server (NTRS)

    Perry, Radford

    2011-01-01

    The particle fallout limitations and periodic allocations for the James Webb Space Telescope are very stringent. Standard prediction methods are complicated by non-linearity, and existing monitoring methods are insufficiently responsive. A method for dynamically predicting the particle fallout in a cleanroom using air particle counter data was determined by numerical correlation. This method provides a simple linear correlation to both time and air quality, which can be monitored in real time. The summation of effects provides the program a better understanding of the cleanliness and assists in the planning of future activities. Definition of fallout rates within a cleanroom during assembly and integration of contamination-sensitive hardware, such as the James Webb Space Telescope, is essential for budgeting purposes. Balancing the activity levels for assembly and test with the particle accumulation rate is paramount. The current approach to predicting particle fallout in a cleanroom assumes a constant air quality based on the rated class of a cleanroom, with adjustments for projected work or exposure times. Actual cleanroom class can also depend on the number of personnel present and the type of activities. A linear correlation of air quality and normalized particle fallout was determined numerically. An air particle counter (standard cleanroom equipment) can be used to monitor the air quality on a real-time basis and determine the "class" of the cleanroom (per FED-STD-209 or ISO-14644). The correlation function provides an area coverage coefficient per class-hour of exposure. The prediction of particle accumulations provides scheduling inputs for activity levels and cleanroom class requirements.
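    The correlation described, an area-coverage coefficient per class-hour of exposure, amounts to a one-parameter linear accumulation model. A sketch (the coefficient and the exposure samples are hypothetical, not JWST program numbers):

    ```python
    def predicted_fallout(exposures, k):
        """Linear fallout model: surface obscuration grows with class-hours.

        exposures: list of (cleanroom_class, hours) intervals, where the
        class is measured in real time by an air particle counter.
        k: area-coverage coefficient per class-hour (hypothetical value).
        """
        return k * sum(cls * hours for cls, hours in exposures)
    ```

    Because the model is linear in class-hours, exposure from periods at different measured classes simply adds, which is what makes real-time scheduling against a fallout budget practical.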

  1. A partially penalty immersed Crouzeix-Raviart finite element method for interface problems.

    PubMed

    An, Na; Yu, Xijun; Chen, Huanzhen; Huang, Chaobao; Liu, Zhongyan

    2017-01-01

    The elliptic equations with discontinuous coefficients are often used to describe problems involving multiple materials or fluids with different densities, conductivities, or diffusivities. In this paper we develop a partially penalty immersed finite element (PIFE) method on triangular grids for anisotropic flow models, in which the diffusion coefficient is a piecewise positive-definite matrix. The standard linear Crouzeix-Raviart type finite element space is used on non-interface elements and the piecewise linear Crouzeix-Raviart type immersed finite element (IFE) space is constructed on interface elements. The piecewise linear functions satisfying the interface jump conditions are uniquely determined by the integral averages on the edges as degrees of freedom. The PIFE scheme is given based on the symmetric, nonsymmetric or incomplete interior penalty discontinuous Galerkin formulation. The solvability of the method is proved and the optimal error estimates in the energy norm are obtained. Numerical experiments are presented to confirm our theoretical analysis and show that the newly developed PIFE method has optimal-order convergence in the L2 norm as well. In addition, numerical examples also indicate that this method is valid for both the isotropic and the anisotropic elliptic interface problems.

  2. Highly efficient and exact method for parallelization of grid-based algorithms and its implementation in DelPhi

    PubMed Central

    Li, Chuan; Li, Lin; Zhang, Jie; Alexov, Emil

    2012-01-01

    The Gauss-Seidel method is a standard iterative numerical method widely used to solve a system of equations and, in general, is more efficient compared to other iterative methods, such as the Jacobi method. However, the standard implementation of the Gauss-Seidel method restricts its utilization in parallel computing due to its requirement of using updated neighboring values (i.e., in the current iteration) as soon as they are available. Here we report an efficient and exact (not requiring assumptions) method to parallelize iterations and to reduce the computational time as a linear/nearly linear function of the number of CPUs. In contrast to other existing solutions, our method does not require any assumptions and is equally applicable for solving linear and nonlinear equations. This approach is implemented in the DelPhi program, which is a finite difference Poisson-Boltzmann equation solver to model electrostatics in molecular biology. This development makes the iterative procedure for obtaining the electrostatic potential distribution in the parallelized DelPhi severalfold faster than that in the serial code. Further, we demonstrate the advantages of the new parallelized DelPhi by computing the electrostatic potential and the corresponding energies of large supramolecular structures. PMID:22674480
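    The parallelization difficulty mentioned comes from Gauss-Seidel's use of freshly updated values within the same sweep, which Jacobi avoids at the cost of slower convergence. A minimal comparison of the two iterations (generic dense form, not DelPhi's finite-difference Poisson-Boltzmann stencil):

    ```python
    def jacobi(A, b, iters):
        """Jacobi sweep: every update uses only values from the previous
        iteration, so all components can be updated in parallel."""
        n = len(b)
        x = [0.0] * n
        for _ in range(iters):
            x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        return x

    def gauss_seidel(A, b, iters):
        """Gauss-Seidel sweep: x[j] for j < i is already the *current*
        iterate, which speeds convergence but creates the serial data
        dependence that the paper's parallelization works around."""
        n = len(b)
        x = [0.0] * n
        for _ in range(iters):
            for i in range(n):
                x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
        return x
    ```

    For a diagonally dominant system both converge to the same solution; the in-place update is what makes the Gauss-Seidel inner loop inherently sequential in its naive form.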

  3. High-Order Implicit-Explicit Multi-Block Time-stepping Method for Hyperbolic PDEs

    NASA Technical Reports Server (NTRS)

    Nielsen, Tanner B.; Carpenter, Mark H.; Fisher, Travis C.; Frankel, Steven H.

    2014-01-01

    This work seeks to explore and improve the current time-stepping schemes used in computational fluid dynamics (CFD) in order to reduce overall computational time. A high-order scheme has been developed using a combination of implicit and explicit (IMEX) time-stepping Runge-Kutta (RK) schemes which increases numerical stability with respect to the time step size, resulting in decreased computational time. The IMEX scheme alone does not yield the desired increase in numerical stability, but when used in conjunction with an overlapping partitioned (multi-block) domain a significant increase in stability is observed. To show this, the Overlapping-Partition IMEX (OP IMEX) scheme is applied to both one-dimensional (1D) and two-dimensional (2D) problems, the nonlinear viscous Burgers' equation and the 2D advection equation, respectively. The method uses two different summation-by-parts (SBP) derivative approximations, second-order and fourth-order accurate. The Dirichlet boundary conditions are imposed using the Simultaneous Approximation Term (SAT) penalty method. The 6-stage additive Runge-Kutta IMEX time integration schemes are fourth-order accurate in time. An increase in numerical stability 65 times greater than that of the fully explicit scheme is demonstrated to be achievable with the OP IMEX method applied to the 1D Burgers' equation. Results from the 2D, purely convective, advection equation show stability increases on the order of 10 times the explicit scheme using the OP IMEX method. Also, the domain partitioning method in this work shows potential for breaking the computational domain into manageable sizes such that implicit solutions for full three-dimensional CFD simulations can be computed using direct solving methods rather than the standard iterative methods currently used.
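    The IMEX idea can be seen already at first order: treat the stiff term implicitly and the non-stiff term explicitly within one step. A sketch for a stiff scalar ODE y' = λy + f(t) (a first-order splitting, far simpler than the 6-stage fourth-order ARK schemes used in the paper):

    ```python
    def imex_euler(y0, lam, forcing, dt, nsteps):
        """First-order IMEX step for y' = lam*y + forcing(t): the stiff
        linear term is taken implicitly (backward Euler), the non-stiff
        forcing explicitly (forward Euler)."""
        y, t = y0, 0.0
        for _ in range(nsteps):
            y = (y + dt * forcing(t)) / (1.0 - dt * lam)
            t += dt
        return y
    ```

    With lam = -1000 and dt = 0.01, a fully explicit update would be unstable (|1 + dt*lam| = 9), while the IMEX update is strongly damped; this is exactly the stability gain that motivates IMEX splittings.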

  4. Adjusting for partial verification or workup bias in meta-analyses of diagnostic accuracy studies.

    PubMed

    de Groot, Joris A H; Dendukuri, Nandini; Janssen, Kristel J M; Reitsma, Johannes B; Brophy, James; Joseph, Lawrence; Bossuyt, Patrick M M; Moons, Karel G M

    2012-04-15

    A key requirement in the design of diagnostic accuracy studies is that all study participants receive both the test under evaluation and the reference standard test. For a variety of practical and ethical reasons, sometimes only a proportion of patients receive the reference standard, which can bias the accuracy estimates. Numerous methods have been described for correcting this partial verification bias or workup bias in individual studies. In this article, the authors describe a Bayesian method for obtaining adjusted results from a diagnostic meta-analysis when partial verification or workup bias is present in a subset of the primary studies. The method corrects for verification bias without having to exclude primary studies with verification bias, thus preserving the main advantages of a meta-analysis: increased precision and better generalizability. The results of this method are compared with the existing methods for dealing with verification bias in diagnostic meta-analyses. For illustration, the authors use empirical data from a systematic review of studies of the accuracy of the immunohistochemistry test for diagnosis of human epidermal growth factor receptor 2 status in breast cancer patients.
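    For a single primary study, partial verification is classically corrected by inverse-probability weighting of the verified counts (in the spirit of Begg and Greenes). This is not the Bayesian meta-analytic model of the paper, but it shows the bias mechanism being corrected. A sketch with hypothetical verification fractions:

    ```python
    def verification_adjusted_accuracy(a, b, c, d, f_pos=1.0, f_neg=0.1):
        """Inverse-probability weighting for partial verification bias.

        a, b, c, d: verified counts (T+D+, T+D-, T-D+, T-D-);
        f_pos, f_neg: fractions of test-positives/-negatives that received
        the reference standard (hypothetical illustrative values)."""
        A, B = a / f_pos, b / f_pos        # reweight to the full cohort
        C, D = c / f_neg, d / f_neg
        sens = A / (A + C)
        spec = D / (B + D)
        return sens, spec
    ```

    When only a small fraction of test-negatives is verified, the naive (unweighted) sensitivity is inflated; reweighting restores the cohort-level estimate.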

  5. TSOS and TSOS-FK hybrid methods for modelling the propagation of seismic waves

    NASA Astrophysics Data System (ADS)

    Ma, Jian; Yang, Dinghui; Tong, Ping; Ma, Xiao

    2018-05-01

    We develop a new time-space optimized symplectic (TSOS) method for numerically solving elastic wave equations in heterogeneous isotropic media. We use the phase-preserving symplectic partitioned Runge-Kutta method to evaluate the time derivatives and optimized explicit finite-difference (FD) schemes to discretize the space derivatives. We introduce the averaged medium scheme into the TSOS method to further increase its capability of dealing with heterogeneous media and match the boundary-modified scheme for implementing free-surface boundary conditions and the auxiliary differential equation complex frequency-shifted perfectly matched layer (ADE CFS-PML) non-reflecting boundaries with the TSOS method. A comparison of the TSOS method with analytical solutions and standard FD schemes indicates that the waveform generated by the TSOS method is more similar to the analytic solution and has a smaller error than other FD methods, which illustrates the efficiency and accuracy of the TSOS method. Subsequently, we focus on the calculation of synthetic seismograms for teleseismic P- or S-waves entering and propagating in the local heterogeneous region of interest. To improve the computational efficiency, we successfully combine the TSOS method with the frequency-wavenumber (FK) method and apply the ADE CFS-PML to absorb the scattered waves caused by the regional heterogeneity. The TSOS-FK hybrid method is benchmarked against semi-analytical solutions provided by the FK method for a 1-D layered model. Several numerical experiments, including a vertical cross-section of the Chinese capital area crustal model, illustrate that the TSOS-FK hybrid method works well for modelling waves propagating in complex heterogeneous media and remains stable for long-time computation. These numerical examples also show that the TSOS-FK method can tackle the converted and scattered waves of the teleseismic plane waves caused by local heterogeneity. Thus, the TSOS and TSOS-FK methods proposed in this study present an essential tool for the joint inversion of local, regional, and teleseismic waveform data.
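    The symplectic partitioned Runge-Kutta time stepping mentioned above can be illustrated by its simplest member, the Störmer-Verlet scheme, here applied to a unit-mass oscillator rather than the elastic wave system (the paper uses a phase-preserving higher-order variant):

    ```python
    def stormer_verlet(q, p, force, dt, nsteps):
        """Störmer-Verlet (velocity Verlet): the simplest symplectic
        partitioned Runge-Kutta scheme; unit mass assumed."""
        for _ in range(nsteps):
            p += 0.5 * dt * force(q)     # half kick
            q += dt * p                  # drift
            p += 0.5 * dt * force(q)     # half kick
        return q, p
    ```

    For a harmonic oscillator (force = -q) the energy 0.5*(p² + q²) stays bounded near its initial value over long integrations; this bounded energy error is the property that makes symplectic schemes attractive for long-time wave propagation.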

  6. RE-NUMERATE: A Workshop to Restore Essential Numerical Skills and Thinking via Astronomy Education

    NASA Astrophysics Data System (ADS)

    McCarthy, D.; Follette, K.

    2013-04-01

    The quality of science teaching for all ages is degraded by our students' gross lack of skills in elementary arithmetic and their unwillingness to think, and to express themselves, numerically. Out of frustration, educators and science communicators often choose to avoid these problems, thereby reinforcing the belief that math is only needed in “math class” and preventing students from maturing into capable, well-informed citizens. In this sense we teach students a pseudo-science, not the real nature, beauty, and value of science. This workshop encourages and equips educators to immerse students in numerical thinking throughout a science course. The workshop begins by identifying common deficiencies in skills and attitudes among non-science collegians (freshman-senior) enrolled in General Education astronomy courses. The bulk of the workshop engages participants in well-tested techniques (e.g., presentation methods, curriculum, activities, mentoring approaches, etc.) for improving students' arithmetic skills, increasing their confidence, and improving their abilities in numerical expression. These techniques are grounded in 25+ years of experience in college classrooms and pre-college informal education. They are suited for use in classrooms (K-12 and college), informal venues, and science communication in general, and could be applied across the standard school curriculum.

  7. Hybrid Semiclassical Theory of Quantum Quenches in One-Dimensional Systems

    NASA Astrophysics Data System (ADS)

    Moca, Cǎtǎlin Paşcu; Kormos, Márton; Zaránd, Gergely

    2017-09-01

    We develop a hybrid semiclassical method to study the time evolution of one-dimensional quantum systems in and out of equilibrium. Our method handles internal degrees of freedom completely quantum mechanically by a modified time-evolving block decimation method while treating orbital quasiparticle motion classically. We can follow dynamics up to time scales well beyond the reach of standard numerical methods to observe the crossover between preequilibrated and locally phase equilibrated states. As an application, we investigate the quench dynamics and phase fluctuations of a pair of tunnel-coupled one-dimensional Bose condensates. We demonstrate the emergence of soliton-collision-induced phase propagation, soliton-entropy production, and multistep thermalization. Our method can be applied to a wide range of gapped one-dimensional systems.

  8. Current Methods to Define Metabolic Tumor Volume in Positron Emission Tomography: Which One is Better?

    PubMed

    Im, Hyung-Jun; Bradshaw, Tyler; Solaiyappan, Meiyappan; Cho, Steve Y

    2018-02-01

    Numerous methods to segment tumors using 18F-fluorodeoxyglucose positron emission tomography (FDG PET) have been introduced. Metabolic tumor volume (MTV) refers to the metabolically active volume of the tumor segmented using FDG PET, and has been shown to be useful in predicting patient outcome and in assessing treatment response. Tumor segmentation using FDG PET also has useful applications in radiotherapy treatment planning. Despite extensive research on MTV showing promising results, MTV is not yet used in standard clinical practice, mainly because there is no consensus on the optimal method to segment tumors in FDG PET images. In this review, we discuss currently available methods to measure MTV using FDG PET, and assess the advantages and disadvantages of each.
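    One of the commonly used segmentation approaches discussed in such reviews is fixed thresholding at a fraction of SUVmax (40% is a frequently quoted choice). A minimal sketch, not a clinical implementation:

    ```python
    def mtv_fixed_threshold(suv_values, voxel_volume_ml, frac=0.40):
        """Fixed-threshold MTV: count voxels with SUV >= frac * SUVmax and
        multiply by the voxel volume (40% of SUVmax is a common choice)."""
        thr = frac * max(suv_values)
        return sum(1 for v in suv_values if v >= thr) * voxel_volume_ml
    ```

    The sensitivity of the resulting volume to the chosen fraction is one reason no single segmentation method has reached consensus.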

  9. 40 CFR 131.38 - Establishment of Numeric Criteria for priority toxic pollutants for the State of California.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS WATER QUALITY STANDARDS Federally Promulgated Water Quality Standards § 131.38 Establishment of Numeric Criteria for priority toxic pollutants for the State... Concentration (CMC) equals the highest concentration of a pollutant to which aquatic life can be exposed for a...

  10. 40 CFR 131.38 - Establishment of Numeric Criteria for priority toxic pollutants for the State of California.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS WATER QUALITY STANDARDS Federally Promulgated Water Quality Standards § 131.38 Establishment of Numeric Criteria for priority toxic pollutants for the State... Concentration (CMC) equals the highest concentration of a pollutant to which aquatic life can be exposed for a...

  11. 40 CFR 131.38 - Establishment of Numeric Criteria for priority toxic pollutants for the State of California.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS WATER QUALITY STANDARDS Federally Promulgated Water Quality Standards § 131.38 Establishment of Numeric Criteria for priority toxic pollutants for the State... Concentration (CMC) equals the highest concentration of a pollutant to which aquatic life can be exposed for a...

  12. 40 CFR 131.38 - Establishment of Numeric Criteria for priority toxic pollutants for the State of California.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS WATER QUALITY STANDARDS Federally Promulgated Water Quality Standards § 131.38 Establishment of Numeric Criteria for priority toxic pollutants for the State... Concentration (CMC) equals the highest concentration of a pollutant to which aquatic life can be exposed for a...

  13. 40 CFR 131.38 - Establishment of Numeric Criteria for priority toxic pollutants for the State of California.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS WATER QUALITY STANDARDS Federally Promulgated Water Quality Standards § 131.38 Establishment of Numeric Criteria for priority toxic pollutants for the State... Concentration (CMC) equals the highest concentration of a pollutant to which aquatic life can be exposed for a...

  14. Inverse scattering theory: Inverse scattering series method for one dimensional non-compact support potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Jie, E-mail: yjie2@uh.edu; Lesage, Anne-Cécile; Hussain, Fazle

    2014-12-15

    The reversion of the Born-Neumann series of the Lippmann-Schwinger equation is one of the standard ways to solve the inverse acoustic scattering problem. One limitation of the current inversion methods based on the reversion of the Born-Neumann series is that the velocity potential should have compact support. However, this assumption cannot be satisfied in certain cases, especially in seismic inversion. Based on the idea of distorted wave scattering, we explore an inverse scattering method for velocity potentials without compact support. The strategy is to decompose the actual medium into a known single-interface reference medium, which has the same asymptotic form as the actual medium, and a perturbative scattering potential with compact support. After introducing the method to calculate the Green's function for the known reference potential, the inverse scattering series and Volterra inverse scattering series are derived for the perturbative potential. Analytical and numerical examples demonstrate the feasibility and effectiveness of this method. In addition, to ensure stability of the numerical computation, the Lanczos averaging method is employed as a filter to reduce the Gibbs oscillations for the truncated discrete inverse Fourier transform of each order. Our method provides a rigorous mathematical framework for inverse acoustic scattering with a non-compact support velocity potential.

  15. Comparison of US EPA and European emission standards for combustion and incineration technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Licata, A.; Hartenstein, H.U.; Terracciano, L.

    1997-12-01

    There has been considerable debate, misunderstanding, and controversy when comparing emission standards used in Europe and the United States. One of the first questions heard whenever the U.S. EPA publishes a new emission standard is, "Is it as restrictive as, or the same as, the German standard?" Although both systems of regulation call for the use of CEMS for compliance, there are substantial differences in how emission standards are structured in Europe and in the U.S. These include reference points, averaging times, sampling methods, and technology. Generally, the European standards tend to be more restrictive, due in part to the fact that the facilities are of necessity sited in close proximity to residential areas. In Germany, for example, regulations in general are comprehensive and include both design standards and emission limits, while U.S. EPA's rules are source specific and, in most cases, limited to numerical emission standards. In some cases, direct comparisons can be made between emission standards, and in other cases, comparisons can only be made with restrictive caveats. The paper presents a comprehensive overview of the emission standards and how they are applied.
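    One concrete source of the comparison difficulty is the differing reference points: a measured flue-gas concentration must first be normalized to the standard's reference O2 level before limits can be compared. A sketch of the usual dry-gas O2 correction (20.9 % is ambient O2; the reference levels in the test are illustrative of typical US vs European choices):

    ```python
    def correct_to_reference_o2(c_measured, o2_measured, o2_reference):
        """Normalize a dry-gas concentration to a reference O2 level so
        that limits defined at different reference points can be compared."""
        return c_measured * (20.9 - o2_reference) / (20.9 - o2_measured)
    ```

    The same measured stack gas therefore maps to different numerical values under different reference O2 conventions, which is one of the caveats required when comparing US and European limits.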

  16. Comparative Study on High-Order Positivity-preserving WENO Schemes

    NASA Technical Reports Server (NTRS)

    Kotov, D. V.; Yee, H. C.; Sjogreen, B.

    2013-01-01

    In gas dynamics and magnetohydrodynamics flows, physically, the density and the pressure p should both be positive. In a standard conservative numerical scheme, however, the computed internal energy is obtained by subtracting the kinetic energy from the total energy, resulting in a computed p that may be negative. Examples are problems in which the dominant energy is kinetic. Negative pressure may often emerge in computing blast waves. In such situations the computed eigenvalues of the Jacobian will become imaginary. Consequently, the initial value problem for the linearized system will be ill posed. This explains why failure to preserve positivity of density or pressure may cause blow-ups of the numerical algorithm. Ad hoc numerical strategies that modify the computed negative density and/or the computed negative pressure to be positive are neither a conservative cure nor a stable solution. Conservative positivity-preserving schemes are more appropriate for such flow problems. The ideas of Zhang & Shu (2012) and Hu et al. (2012) precisely address the aforementioned issue. Zhang & Shu constructed a new conservative positivity-preserving procedure to preserve positive density and pressure for high-order WENO schemes using the Lax-Friedrichs flux (WENO/LLF). In general, WENO/LLF is too dissipative for flows such as turbulence with strong shocks computed in direct numerical simulations (DNS) and large eddy simulations (LES). The new conservative positivity-preserving procedure proposed in Hu et al. (2012) can be used with any high-order shock-capturing scheme, including high-order WENO schemes using Roe's flux (WENO/Roe). The goal of this study is to compare the results obtained by non-positivity-preserving methods with the recently developed positivity-preserving schemes for representative test cases. In particular, the more difficult 3D Noh and Sedov problems are considered. These test cases are chosen because of the negative pressure/density most often exhibited by standard high-order shock-capturing schemes. The simulation of a hypersonic nonequilibrium viscous shock tube related to the NASA Electric Arc Shock Tube (EAST) is also included. EAST is a high-temperature, high-Mach-number viscous nonequilibrium flow consisting of 13 species. In addition, as most common shock-capturing schemes have been developed for problems without source terms, when applied to problems with nonlinear and/or stiff source terms these methods can result in spurious solutions, even when solving a conservative system of equations with a conservative scheme. This kind of behavior can be observed even for a scalar case (LeVeque & Yee 1990) as well as for a case consisting of two species and one reaction (Wang et al. 2012). For further information concerning this issue see LeVeque & Yee (1990), Griffiths et al. (1992), Lafon & Yee (1996), and Yee et al. (2012). The EAST example indicates that standard high-order shock-capturing methods exhibit instability of density/pressure in addition to grid-dependent discontinuity locations with insufficient grid points. The evaluation of these test cases is based on the stability of the numerical schemes together with the accuracy of the obtained solutions.
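    The Zhang-Shu positivity-preserving procedure referred to above is, at its core, a conservative linear scaling of the reconstructed point values toward the cell average. A sketch of that limiter for a single cell and a single scalar (equal-weight points; the full scheme also treats pressure and uses quadrature weights):

    ```python
    def positivity_limit(point_values, cell_average, eps=1e-13):
        """Linear scaling limiter in the spirit of Zhang & Shu: squeeze the
        reconstructed point values toward the cell average until the minimum
        reaches eps, leaving the cell average (conservation) unchanged."""
        vmin = min(point_values)
        if vmin >= eps:
            return list(point_values)
        theta = (cell_average - eps) / (cell_average - vmin)
        return [cell_average + theta * (v - cell_average) for v in point_values]
    ```

    Because every value is scaled about the (positive) cell average, the limited reconstruction is positive while the cell mean, and hence conservation, is untouched; this is why the procedure is a conservative cure where ad hoc clipping is not.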

  17. Integrated Experimental and Numerical Research on the Aerodynamics of Unsteady Moving Aircraft

    DTIC Science & Technology

    2007-06-01

    blended wing body configuration were tested in different modes of oscillatory motions (roll, pitch and yaw) as well as delta wing geometries like X-31...airplane configurations (e.g. wide body, green aircraft, blended wing body) the approach up to now using semi-empirical methods as standard...cross section wing. In order to evaluate the influence of individual components of the tested airplane configuration, such as winglets, vertical or

  18. A nonlinear dynamic finite element approach for simulating muscular hydrostats.

    PubMed

    Vavourakis, V; Kazakidi, A; Tsakiris, D P; Ekaterinaris, J A

    2014-01-01

    An implicit nonlinear finite element model for simulating biological muscle mechanics is developed. The numerical method is suitable for dynamic simulations of three-dimensional, nonlinear, nearly incompressible, hyperelastic materials that undergo large deformations. These features characterise biological muscles, which consist of fibres and connective tissues. It can be assumed that the stress distribution inside the muscles is the superposition of stresses along the fibres and the connective tissues. The mechanical behaviour of the surrounding tissues is determined by adopting a Mooney-Rivlin constitutive model, while the mechanical description of fibres is considered to be the sum of active and passive stresses. Due to the nonlinear nature of the problem, evaluation of the Jacobian matrix is carried out in order to subsequently utilise the standard Newton-Raphson iterative procedure and to carry out time integration with an implicit scheme. The proposed methodology is implemented into our in-house, open source, finite element software, which is validated by comparing numerical results with experimental measurements and other numerical results. Finally, the numerical procedure is utilised to simulate primitive octopus arm manoeuvres, such as bending and reaching.
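    The Newton-Raphson iteration referred to above solves the nonlinear residual equations at each implicit time step using the Jacobian (tangent stiffness). A scalar sketch of the iteration (the FE version does the same with a residual vector and an assembled tangent matrix):

    ```python
    def newton_raphson(residual, jacobian, x0, tol=1e-10, max_iter=50):
        """Scalar Newton-Raphson: repeatedly solve J * dx = -R and update
        until the increment is below tolerance."""
        x = x0
        for _ in range(max_iter):
            dx = -residual(x) / jacobian(x)
            x += dx
            if abs(dx) < tol:
                return x
        raise RuntimeError("Newton iteration did not converge")
    ```

    Re-evaluating the Jacobian at every iterate is what gives the method its quadratic convergence near the solution, at the cost of assembling the tangent stiffness repeatedly.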

  19. Spectral combination of spherical gravitational curvature boundary-value problems

    NASA Astrophysics Data System (ADS)

    Pitoňák, Martin; Eshagh, Mehdi; Šprlák, Michal; Tenzer, Robert; Novák, Pavel

    2018-04-01

    Four solutions of the spherical gravitational curvature boundary-value problems can be exploited for the determination of the Earth's gravitational potential. In this article we discuss the combination of simulated satellite gravitational curvatures, i.e., components of the third-order gravitational tensor, by merging these solutions using the spectral combination method. For this purpose, integral estimators of biased- and unbiased-types are derived. In numerical studies, we investigate the performance of the developed mathematical models for the gravitational field modelling in the area of Central Europe based on simulated satellite measurements. Firstly, we verify the correctness of the integral estimators for the spectral downward continuation by a closed-loop test. Estimated errors of the combined solution are about eight orders of magnitude smaller than those from the individual solutions. Secondly, we perform a numerical experiment by considering Gaussian noise with the standard deviation of 6.5×10⁻¹⁷ m⁻¹s⁻² in the input data at the satellite altitude of 250 km above the mean Earth sphere. This value of standard deviation is equivalent to a signal-to-noise ratio of 10. Superior results with respect to the global geopotential model TIM-r5 are obtained by the spectral downward continuation of the vertical-vertical-vertical component with the standard deviation of 2.104 m²s⁻², but the root mean square error is the largest and reaches 9.734 m²s⁻². Using the spectral combination of all gravitational curvatures, the root mean square error is more than 400 times smaller, but the standard deviation reaches 17.234 m²s⁻². The combination of more components decreases the root mean square error of the corresponding solutions, while the standard deviations of the combined solutions do not improve as compared to the solution from the vertical-vertical-vertical component. The presented method represents a weighted mean in the spectral domain that minimizes the root mean square error of the combined solutions and improves the standard deviation of the solution based only on the least accurate components.

  20. Group iterative methods for the solution of two-dimensional time-fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Balasim, Alla Tareq; Ali, Norhashidah Hj. Mohd.

    2016-06-01

    Variety of problems in science and engineering may be described by fractional partial differential equations (FPDE) in relation to space and/or time fractional derivatives. The difference between time fractional diffusion equations and standard diffusion equations lies primarily in the time derivative. Over the last few years, iterative schemes derived from the rotated finite difference approximation have been proven to work well in solving standard diffusion equations. However, its application on time fractional diffusion counterpart is still yet to be investigated. In this paper, we will present a preliminary study on the formulation and analysis of new explicit group iterative methods in solving a two-dimensional time fractional diffusion equation. These methods were derived from the standard and rotated Crank-Nicolson difference approximation formula. Several numerical experiments were conducted to show the efficiency of the developed schemes in terms of CPU time and iteration number. At the request of all authors of the paper an updated version of this article was published on 7 July 2016. The original version supplied to AIP Publishing contained an error in Table 1 and References 15 and 16 were incomplete. These errors have been corrected in the updated and republished article.
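The Crank-Nicolson approximation from which these group iterative schemes are derived can be illustrated on the standard (integer-order) one-dimensional diffusion equation. This is a minimal sketch of that building block only, not the authors' two-dimensional time-fractional or rotated schemes; the grid, step sizes and test function are illustrative assumptions:

```python
import numpy as np

def crank_nicolson_diffusion(u0, dx, dt, D, steps):
    """Standard Crank-Nicolson scheme for u_t = D u_xx with zero
    Dirichlet boundaries: (I - r L) u^{n+1} = (I + r L) u^n."""
    n = len(u0)
    r = D * dt / (2.0 * dx**2)
    # discrete Laplacian stencil (1, -2, 1) on the interior nodes
    L = (np.diag(np.full(n, -2.0))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    A = np.eye(n) - r * L
    B = np.eye(n) + r * L
    u = u0.copy()
    for _ in range(steps):
        u = np.linalg.solve(A, B @ u)
    return u

# usage: the eigenmode sin(pi x) should decay like exp(-pi^2 D t)
x = np.linspace(0.0, 1.0, 51)[1:-1]     # interior nodes only
u0 = np.sin(np.pi * x)
u = crank_nicolson_diffusion(u0, dx=x[1] - x[0], dt=1e-3, D=1.0, steps=100)
```

The scheme is unconditionally stable and second-order accurate in both space and time, which is what makes it a common starting point for derived iterative methods.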

  1. Applying ISO 11929:2010 Standard to detection limit calculation in least-squares based multi-nuclide gamma-ray spectrum evaluation

    NASA Astrophysics Data System (ADS)

    Kanisch, G.

    2017-05-01

    The concepts of ISO 11929 (2010) are applied to the evaluation of radionuclide activities from more complex multi-nuclide gamma-ray spectra. From net peak areas estimated by peak fitting, activities and their standard uncertainties are calculated by a weighted linear least-squares method with an additional step in which uncertainties of the design matrix elements are taken into account. A numerical treatment of the standard's uncertainty function, based on ISO 11929 Annex C.5, leads to a procedure for deriving decision threshold and detection limit values. The methods shown allow interferences between radionuclide activities to be resolved, also when calculating detection limits, which can be improved by including more than one gamma line per radionuclide. The common single-nuclide weighted mean is extended to an interference-corrected (generalized) weighted mean, which, combined with the least-squares method, allows faster detection limit calculations. In addition, a new grouped uncertainty budget was inferred, which for each radionuclide gives uncertainty budgets from seven main variables, such as net count rates, peak efficiencies, gamma emission intensities and others; grouping refers to summation over lists of peaks per radionuclide.
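The core weighted least-squares step can be sketched as follows. The efficiency/intensity design matrix and uncertainties below are hypothetical, and the sketch omits the standard's additional propagation of design-matrix uncertainties as well as the decision-threshold and detection-limit machinery:

```python
import numpy as np

def weighted_lsq(A, y, u_y):
    """Weighted linear least squares: estimates x (activities) from
    observations y (net peak areas) with standard uncertainties u_y.
    Returns the estimates and their standard uncertainties from the
    covariance matrix (A^T W A)^-1."""
    W = np.diag(1.0 / u_y**2)
    cov = np.linalg.inv(A.T @ W @ A)
    x = cov @ A.T @ W @ y
    return x, np.sqrt(np.diag(cov))

# usage: two nuclides, three peaks; the middle peak is shared by both
# nuclides (an interference), which the joint fit resolves
A = np.array([[1.0, 0.0],
              [0.4, 0.7],
              [0.0, 1.2]])          # hypothetical efficiency*intensity matrix
x_true = np.array([5.0, 2.0])
y = A @ x_true                      # noise-free observations for the sketch
x, u_x = weighted_lsq(A, y, u_y=np.array([0.1, 0.1, 0.1]))
```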

  2. A highly relevant and efficient single step method for simultaneous depletion and isolation of human regulatory T cells in a clinical setting.

    PubMed

    Brezar, Vedran; Ruffin, Nicolas; Lévy, Yves; Seddiki, Nabila

    2014-09-01

    Regulatory T cells (Tregs) are pivotal in preventing autoimmunity. They play a major but still ambiguous role in cancer and viral infections. Functional studies of human Tregs are often hampered by numerous technical difficulties arising from imperfections in isolating and depleting protocols, together with the usual low cell number available from clinical samples. We standardized a simple procedure (Single Step Method, SSM), based on magnetic beads technology, in which both depletion and isolation of human Tregs with high purities are simultaneously achieved. SSM is suitable when using low cell numbers either fresh or frozen from both patients and healthy individuals. It allows simultaneous Tregs isolation and depletion that can be used for further functional work to monitor suppressive function of isolated Tregs (in vitro suppression assay) and also effector IFN-γ responses of Tregs-depleted cell fraction (OX40 assay). To our knowledge, there is no accurate standardized method for Tregs isolation and depletion in a clinical context. SSM could thus be used and easily standardized across different laboratories. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. An effective hybrid firefly algorithm with harmony search for global numerical optimization.

    PubMed

    Guo, Lihong; Wang, Gai-Ge; Wang, Heqi; Wang, Dinan

    2013-01-01

    A hybrid metaheuristic approach combining harmony search (HS) and the firefly algorithm (FA), namely HS/FA, is proposed to solve function optimization. In HS/FA, the exploration of HS and the exploitation of FA are fully exerted, so HS/FA has a faster convergence speed than HS or FA alone. Also, a top-fireflies scheme is introduced to reduce running time, and HS is utilized to mutate between fireflies when updating fireflies. The HS/FA method is verified by various benchmarks. The experiments show that HS/FA performs better than the standard FA and eight other optimization methods.
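The attraction move at the heart of the firefly component can be sketched as below. This is the standard FA sweep only, without the harmony-search mutation or the top-fireflies scheme of the hybrid; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def firefly_step(X, f, beta0=1.0, gamma=1.0, alpha=0.05):
    """One sweep of the standard firefly attraction move: each firefly
    moves toward every brighter (lower-cost) firefly, with attractiveness
    decaying as exp(-gamma * r^2), plus a small random walk."""
    cost = np.array([f(x) for x in X])
    Xn = X.copy()
    for i in range(len(X)):
        for j in range(len(X)):
            if cost[j] < cost[i]:
                r2 = float(np.sum((X[j] - X[i])**2))
                Xn[i] += (beta0 * np.exp(-gamma * r2) * (X[j] - X[i])
                          + alpha * (rng.random(X.shape[1]) - 0.5))
    return Xn

# usage: minimise the 2-D sphere function; the brightest firefly never
# moves during a sweep, so the best cost is non-increasing
sphere = lambda x: float(np.sum(x**2))
X = rng.uniform(-2.0, 2.0, size=(20, 2))
init_best = min(sphere(x) for x in X)
for _ in range(100):
    X = firefly_step(X, sphere)
best = min(sphere(x) for x in X)
```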

  4. Combining kernel matrix optimization and regularization to improve particle size distribution retrieval

    NASA Astrophysics Data System (ADS)

    Ma, Qian; Xia, Houping; Xu, Qiang; Zhao, Lei

    2018-05-01

    A new method combining Tikhonov regularization and kernel matrix optimization by multi-wavelength incidence is proposed for retrieving particle size distribution (PSD) in an independent model with improved accuracy and stability. In comparison to individual regularization or multi-wavelength least squares, the proposed method exhibited better anti-noise capability, higher accuracy and stability. While standard regularization typically makes use of the unit matrix, it is not universal for different PSDs, particularly for Junge distributions. Thus, a suitable regularization matrix was chosen by numerical simulation, with the second-order differential matrix found to be appropriate for most PSD types.
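The Tikhonov step with a second-order differential regularization matrix can be sketched as follows. The smoothing kernel and bell-shaped "distribution" here are hypothetical stand-ins, not the light-scattering kernel or multi-wavelength optimization of the paper:

```python
import numpy as np

def second_diff_matrix(n):
    """Second-order differential (finite-difference) regularization matrix."""
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D

def tikhonov(A, b, lam, D):
    """Tikhonov solution of min ||Ax - b||^2 + lam^2 ||Dx||^2
    via the regularized normal equations."""
    return np.linalg.solve(A.T @ A + lam**2 * (D.T @ D), A.T @ b)

# usage: a severely ill-conditioned Gaussian smoothing kernel; direct
# inversion amplifies the noise, the regularized solve does not
n = 40
t = np.linspace(0.0, 1.0, n)
A = np.exp(-((t[:, None] - t[None, :]) / 0.1)**2)
x_true = np.exp(-((t - 0.5) / 0.1)**2)
rng = np.random.default_rng(1)
b = A @ x_true + 0.01 * rng.standard_normal(n)

x_reg = tikhonov(A, b, lam=0.1, D=second_diff_matrix(n))
x_naive = np.linalg.solve(A, b)     # noise blown up by the conditioning
```

The second-difference penalty favours smooth solutions, which matches the paper's finding that this matrix suits most particle size distribution types better than the unit matrix.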

  5. Time multiplexing based extended depth of focus imaging.

    PubMed

    Ilovitsh, Asaf; Zalevsky, Zeev

    2016-01-01

    We propose to utilize the time multiplexing super resolution method to extend the depth of focus of an imaging system. In standard time multiplexing, super resolution is achieved by generating duplications of the optical transfer function in the spectral domain using moving gratings. While this improves the spatial resolution, it does not increase the depth of focus. By changing the grating frequencies, and thereby the duplication positions, it is possible to obtain an extended depth of focus. The proposed method is presented analytically, demonstrated via numerical simulations and validated by a laboratory experiment.

  6. Adaptive Low Dissipative High Order Filter Methods for Multiscale MHD Flows

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, Bjoern

    2004-01-01

    Adaptive low-dissipative high order filter finite difference methods for the long time wave propagation of shock/turbulence/combustion compressible viscous MHD flows have been constructed. Several variants of the filter approach that cater to different flow types are proposed. These filters provide a natural and efficient way to minimize the numerical error in the divergence of the magnetic field [divergence of B], in the sense that no standard divergence cleaning is required. For certain 2-D MHD test problems, divergence free preservation of the magnetic fields of these filter schemes has been achieved.

  7. Application of the Finite Element Method to Rotary Wing Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Straub, F. K.; Friedmann, P. P.

    1982-01-01

    A finite element method for the spatial discretization of the dynamic equations of equilibrium governing rotary-wing aeroelastic problems is presented. Formulation of the finite element equations is based on weighted Galerkin residuals. This Galerkin finite element method reduces algebraic manipulative labor significantly, when compared to the application of the global Galerkin method in similar problems. The coupled flap-lag aeroelastic stability boundaries of hingeless helicopter rotor blades in hover are calculated. The linearized dynamic equations are reduced to the standard eigenvalue problem from which the aeroelastic stability boundaries are obtained. The convergence properties of the Galerkin finite element method are studied numerically by refining the discretization process. Results indicate that four or five elements suffice to capture the dynamics of the blade with the same accuracy as the global Galerkin method.

  8. Computationally efficient simulation of unsteady aerodynamics using POD on the fly

    NASA Astrophysics Data System (ADS)

    Moreno-Ramos, Ruben; Vega, José M.; Varas, Fernando

    2016-12-01

    Modern industrial aircraft design requires a large amount of sufficiently accurate aerodynamic and aeroelastic simulations. Current computational fluid dynamics (CFD) solvers with aeroelastic capabilities, such as the NASA URANS unstructured solver FUN3D, require very large computational resources. Since a very large number of simulations is necessary, the CFD cost is simply unaffordable in an industrial production environment and must be significantly reduced. Thus, a less expensive, yet sufficiently precise solver is strongly needed. An opportunity to approach this goal could follow some recent results (Terragni and Vega 2014 SIAM J. Appl. Dyn. Syst. 13 330-65; Rapun et al 2015 Int. J. Numer. Meth. Eng. 104 844-68) on an adaptive reduced order model that combines ‘on the fly’ a standard numerical solver (to compute some representative snapshots), proper orthogonal decomposition (POD) (to extract modes from the snapshots), Galerkin projection (onto the set of POD modes), and several additional ingredients such as projecting the equations using a limited amount of points and fairly generic mode libraries. When applied to the complex Ginzburg-Landau equation, the method produces acceleration factors (compared with standard numerical solvers) of the order of 20 and 300 in one and two space dimensions, respectively. Unfortunately, the extension of the method to unsteady, compressible flows around deformable geometries requires new approaches to deal with deformable meshes, high Reynolds numbers, and compressibility. A first step in this direction is presented considering the unsteady compressible, two-dimensional flow around an oscillating airfoil using a CFD solver in a rigidly moving mesh. POD on the Fly gives results whose accuracy is comparable to that of the CFD solver used to compute the snapshots.

  9. Setting Emissions Standards Based on Technology Performance

    EPA Pesticide Factsheets

    In setting national emissions standards, EPA sets emissions performance levels rather than mandating use of a particular technology. The law mandates that EPA use numerical performance standards whenever feasible in setting national emissions standards.

  10. A Meta-Analysis of Typhoid Diagnostic Accuracy Studies: A Recommendation to Adopt a Standardized Composite Reference

    PubMed Central

    Storey, Helen L.; Huang, Ying; Crudder, Chris; Golden, Allison; de los Santos, Tala; Hawkins, Kenneth

    2015-01-01

    Novel typhoid diagnostics currently under development have the potential to improve clinical care, surveillance, and the disease burden estimates that support vaccine introduction. Blood culture is most often used as the reference method to evaluate the accuracy of new typhoid tests; however, it is recognized to be an imperfect gold standard. If no single gold standard test exists, use of a composite reference standard (CRS) can improve estimation of diagnostic accuracy. Numerous studies have used a CRS to evaluate new typhoid diagnostics; however, there is no consensus on an appropriate CRS. In order to evaluate existing tests for use as a reference test or inclusion in a CRS, we performed a systematic review of the typhoid literature to include all index/reference test combinations observed. We described the landscape of comparisons performed, showed results of a meta-analysis on the accuracy of the more common combinations, and evaluated sources of variability based on study quality. This wide-ranging meta-analysis suggests that no single test has sufficiently good performance but some existing diagnostics may be useful as part of a CRS. Additionally, based on findings from the meta-analysis and a constructed numerical example demonstrating the use of CRS, we proposed necessary criteria and potential components of a typhoid CRS to guide future recommendations. Agreement and adoption by all investigators of a standardized CRS is requisite, and would improve comparison of new diagnostics across independent studies, leading to the identification of a better reference test and improved confidence in prevalence estimates. PMID:26566275

  11. A spectral mimetic least-squares method for the Stokes equations with no-slip boundary condition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerritsma, Marc; Bochev, Pavel

    Formulation of locally conservative least-squares finite element methods (LSFEMs) for the Stokes equations with the no-slip boundary condition has been a long-standing problem. Existing LSFEMs that yield exactly divergence free velocities require non-standard boundary conditions (Bochev and Gunzburger, 2009 [3]), while methods that admit the no-slip condition satisfy the incompressibility equation only approximately (Bochev and Gunzburger, 2009 [4, Chapter 7]). Here we address this problem by proving a new non-standard stability bound for the velocity–vorticity–pressure Stokes system augmented with a no-slip boundary condition. This bound gives rise to a norm-equivalent least-squares functional in which the velocity can be approximated by div-conforming finite element spaces, thereby enabling locally conservative approximations of this variable. We also provide a practical realization of the new LSFEM using high-order spectral mimetic finite element spaces (Kreeft et al., 2011) and report several numerical tests, which confirm its mimetic properties.

  12. Unified Computational Methods for Regression Analysis of Zero-Inflated and Bound-Inflated Data

    PubMed Central

    Yang, Yan; Simpson, Douglas

    2010-01-01

    Bounded data with excess observations at the boundary are common in many areas of application. Various individual cases of inflated mixture models have been studied in the literature for bound-inflated data, yet the computational methods have been developed separately for each type of model. In this article we use a common framework for computing these models, and expand the range of models for both discrete and semi-continuous data with point inflation at the lower boundary. The quasi-Newton and EM algorithms are adapted and compared for estimation of model parameters. The numerical Hessian and generalized Louis method are investigated as means for computing standard errors after optimization. Correlated data are included in this framework via generalized estimating equations. The estimation of parameters and effectiveness of standard errors are demonstrated through simulation and in the analysis of data from an ultrasound bioeffect study. The unified approach enables reliable computation for a wide class of inflated mixture models and comparison of competing models. PMID:20228950
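The numerical-Hessian route to standard errors after optimization can be sketched on a simple normal-likelihood example. The model and data here are illustrative, not the inflated-mixture models of the paper:

```python
import numpy as np

def numerical_hessian(f, theta, h=1e-5):
    """Central finite-difference Hessian of a scalar function f at theta."""
    k = len(theta)
    E = np.eye(k) * h
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            H[i, j] = (f(theta + E[i] + E[j]) - f(theta + E[i] - E[j])
                       - f(theta - E[i] + E[j]) + f(theta - E[i] - E[j])) / (4 * h**2)
    return H

# usage: standard errors of the MLE for a normal mean and log-scale
rng = np.random.default_rng(2)
y = rng.normal(loc=3.0, scale=2.0, size=500)

def negloglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return 0.5 * np.sum(((y - mu) / sigma)**2) + len(y) * log_sigma

theta_hat = np.array([y.mean(), np.log(y.std())])   # closed-form MLE
se = np.sqrt(np.diag(np.linalg.inv(numerical_hessian(negloglik, theta_hat))))
# se[0] approximates sigma_hat / sqrt(n), the usual standard error of a mean
```

Inverting the Hessian of the negative log-likelihood at the optimum gives the observed-information covariance estimate, the same quantity compared against the generalized Louis method in the article.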

  13. A spectral mimetic least-squares method for the Stokes equations with no-slip boundary condition

    DOE PAGES

    Gerritsma, Marc; Bochev, Pavel

    2016-03-22

    Formulation of locally conservative least-squares finite element methods (LSFEMs) for the Stokes equations with the no-slip boundary condition has been a long-standing problem. Existing LSFEMs that yield exactly divergence free velocities require non-standard boundary conditions (Bochev and Gunzburger, 2009 [3]), while methods that admit the no-slip condition satisfy the incompressibility equation only approximately (Bochev and Gunzburger, 2009 [4, Chapter 7]). Here we address this problem by proving a new non-standard stability bound for the velocity–vorticity–pressure Stokes system augmented with a no-slip boundary condition. This bound gives rise to a norm-equivalent least-squares functional in which the velocity can be approximated by div-conforming finite element spaces, thereby enabling locally conservative approximations of this variable. We also provide a practical realization of the new LSFEM using high-order spectral mimetic finite element spaces (Kreeft et al., 2011) and report several numerical tests, which confirm its mimetic properties.

  14. Prediction of Unsteady Flows in Turbomachinery Using the Linearized Euler Equations on Deforming Grids

    NASA Technical Reports Server (NTRS)

    Clark, William S.; Hall, Kenneth C.

    1994-01-01

    A linearized Euler solver for calculating unsteady flows in turbomachinery blade rows due to both incident gusts and blade motion is presented. The model accounts for blade loading, blade geometry, shock motion, and wake motion. Assuming that the unsteadiness in the flow is small relative to the nonlinear mean solution, the unsteady Euler equations can be linearized about the mean flow. This yields a set of linear variable coefficient equations that describe the small amplitude harmonic motion of the fluid. These linear equations are then discretized on a computational grid and solved using standard numerical techniques. For transonic flows, however, one must use a linear discretization which is a conservative linearization of the non-linear discretized Euler equations to ensure that shock impulse loads are accurately captured. Other important features of this analysis include a continuously deforming grid which eliminates extrapolation errors and hence, increases accuracy, and a new numerically exact, nonreflecting far-field boundary condition treatment based on an eigenanalysis of the discretized equations. Computational results are presented which demonstrate the computational accuracy and efficiency of the method and demonstrate the effectiveness of the deforming grid, far-field nonreflecting boundary conditions, and shock capturing techniques. A comparison of the present unsteady flow predictions to other numerical, semi-analytical, and experimental methods shows excellent agreement. In addition, the linearized Euler method presented requires one or two orders-of-magnitude less computational time than traditional time marching techniques making the present method a viable design tool for aeroelastic analyses.

  15. Fast large scale structure perturbation theory using one-dimensional fast Fourier transforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmittfull, Marcel; Vlah, Zvonimir; McDonald, Patrick

    The usual fluid equations describing the large-scale evolution of mass density in the universe can be written as local in the density, velocity divergence, and velocity potential fields. As a result, the perturbative expansion in small density fluctuations, usually written in terms of convolutions in Fourier space, can be written as a series of products of these fields evaluated at the same location in configuration space. Based on this, we establish a new method to numerically evaluate the 1-loop power spectrum (i.e., Fourier transform of the 2-point correlation function) with one-dimensional fast Fourier transforms. This is exact and a few orders of magnitude faster than previously used numerical approaches. Numerical results of the new method are in excellent agreement with the standard quadrature integration method. This fast model evaluation can in principle be extended to higher loop order where existing codes become painfully slow. Our approach follows by writing higher order corrections to the 2-point correlation function as, e.g., the correlation between two second-order fields or the correlation between a linear and a third-order field. These are then decomposed into products of correlations of linear fields and derivatives of linear fields. In conclusion, the method can also be viewed as evaluating three-dimensional Fourier space convolutions using products in configuration space, which may also be useful in other contexts where similar integrals appear.

  16. Fast large scale structure perturbation theory using one-dimensional fast Fourier transforms

    DOE PAGES

    Schmittfull, Marcel; Vlah, Zvonimir; McDonald, Patrick

    2016-05-01

    The usual fluid equations describing the large-scale evolution of mass density in the universe can be written as local in the density, velocity divergence, and velocity potential fields. As a result, the perturbative expansion in small density fluctuations, usually written in terms of convolutions in Fourier space, can be written as a series of products of these fields evaluated at the same location in configuration space. Based on this, we establish a new method to numerically evaluate the 1-loop power spectrum (i.e., Fourier transform of the 2-point correlation function) with one-dimensional fast Fourier transforms. This is exact and a few orders of magnitude faster than previously used numerical approaches. Numerical results of the new method are in excellent agreement with the standard quadrature integration method. This fast model evaluation can in principle be extended to higher loop order where existing codes become painfully slow. Our approach follows by writing higher order corrections to the 2-point correlation function as, e.g., the correlation between two second-order fields or the correlation between a linear and a third-order field. These are then decomposed into products of correlations of linear fields and derivatives of linear fields. In conclusion, the method can also be viewed as evaluating three-dimensional Fourier space convolutions using products in configuration space, which may also be useful in other contexts where similar integrals appear.
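The underlying identity, that a convolution in one domain is a pointwise product in the other, can be checked directly with one-dimensional FFTs. This is a toy demonstration of the convolution theorem only, not the actual 1-loop power-spectrum kernels:

```python
import numpy as np

# A mode-coupling integral of the form I(k) = sum_q f(q) g(k - q) can be
# evaluated with two forward FFTs, one pointwise product, and one inverse
# FFT, replacing a direct O(N^2) sum by O(N log N) work.
rng = np.random.default_rng(3)
n = 128
f = rng.standard_normal(n)
g = rng.standard_normal(n)

# direct circular convolution, O(N^2)
direct = np.array([sum(f[m] * g[(k - m) % n] for m in range(n))
                   for k in range(n)])

# FFT-based evaluation, O(N log N); result is real up to roundoff
fast = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))
```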

  17. Iterative discrete ordinates solution of the equation for surface-reflected radiance

    NASA Astrophysics Data System (ADS)

    Radkevich, Alexander

    2017-11-01

    This paper presents a new method of numerical solution of the integral equation for the radiance reflected from an anisotropic surface. The equation relates the radiance at the surface level with the BRDF and solutions of the standard radiative transfer problems for a slab with no reflection on its surfaces. It is also shown that the kernel of the equation satisfies the condition for the existence of a unique solution and for the convergence of the successive approximations to that solution. The developed method features two basic steps: discretization on a 2D quadrature, and solving the resulting system of algebraic equations with the successive over-relaxation method based on the Gauss-Seidel iterative process. Presented numerical examples show good agreement between the surface-reflected radiance obtained with DISORT and the proposed method. Analysis of the contributions of the direct and diffuse (but not yet reflected) parts of the downward radiance to the total solution is performed. Together, they represent a very good initial guess for the iterative process. This fact ensures fast convergence. Numerical evidence is given that the fastest convergence occurs with a relaxation parameter of 1 (no relaxation). An integral equation for the BRDF is derived as an inversion of the original equation. The potential of this new equation for BRDF retrievals is analyzed. The approach is found not to be viable, as the BRDF equation appears to be an ill-posed problem and requires knowledge of the surface-reflected radiance on the entire domain of both Sun and viewing zenith angles.
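The solver stage, successive over-relaxation built on the Gauss-Seidel sweep, can be sketched as follows. The test system below is a generic diagonally dominant one standing in for the discretized integral equation; with the relaxation parameter at 1 the method reduces to plain Gauss-Seidel, the fastest case reported above:

```python
import numpy as np

def sor_solve(A, b, omega=1.0, tol=1e-10, max_iter=500):
    """Successive over-relaxation based on the Gauss-Seidel sweep;
    omega = 1 gives plain Gauss-Seidel."""
    n = len(b)
    x = np.zeros(n)
    for it in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # already-updated entries x[:i], not-yet-updated entries x_old[i+1:]
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            return x, it + 1
    return x, max_iter

# usage: identity plus a small positive kernel gives a strictly
# diagonally dominant matrix, for which Gauss-Seidel converges
rng = np.random.default_rng(4)
K = 0.1 * rng.random((50, 50))
A = np.eye(50) + K / 50
b = rng.random(50)
x, iters = sor_solve(A, b)
```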

  18. Application of ISO standard 27048: dose assessment for the monitoring of workers for internal radiation exposure.

    PubMed

    Henrichs, K

    2011-03-01

    Besides ongoing developments in the dosimetry of incorporated radionuclides, there are various efforts to improve the monitoring of workers for potential or real intakes of radionuclides. The disillusioning experience with numerous intercomparison projects identified substantial differences between national regulations, concepts, applied programmes and methods, and dose assessment procedures. Measured activities were not directly comparable because of significant differences between measuring frequencies and methods, and results of case studies for dose assessments revealed differences of orders of magnitude. Besides the general common interest in reliable monitoring results, the cross-border activities of workers (e.g. nuclear power plant services) at the very least require consistent approaches and comparable results. The International Organization for Standardization therefore initiated projects to standardise programmes for the monitoring of workers, the requirements for measuring laboratories and the processes for the quantitative evaluation of monitoring results in terms of assessed internal doses. The strength of the concepts applied by the international working group lies in a unified approach defining the requirements, databases and processes. This paper is intended to give a short introduction to the standardization project, followed by a more detailed description of the dose assessment standard, which will be published in the very near future.

  19. Efficient Low Dissipative High Order Schemes for Multiscale MHD Flows

    NASA Technical Reports Server (NTRS)

    Sjoegreen, Bjoern; Yee, Helen C.; Mansour, Nagi (Technical Monitor)

    2002-01-01

    Accurate numerical simulations of complex multiscale compressible viscous flows, especially high speed turbulence combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes for the compressible Euler and Navier-Stokes equations has been developed and verified by the authors and collaborators. These schemes are suitable for the problems in question. Basically, the scheme consists of sixth-order or higher non-dissipative spatial difference operators as the base scheme. To control the amount of numerical dissipation, multiresolution wavelets are used as sensors to adaptively limit the amount and to aid the selection and/or blending of the appropriate types of numerical dissipation to be used. Magnetohydrodynamics (MHD) waves play a key role in drag reduction in highly maneuverable high speed combat aircraft, in space weather forecasting, and in the understanding of the dynamics of the evolution of our solar system and the main sequence stars. Although there exist a few well-studied second and third-order high-resolution shock-capturing schemes for the MHD in the literature, these schemes are too diffusive and not practical for turbulence/combustion MHD flows. On the other hand, extension of higher than third-order high-resolution schemes to the MHD system of equations is not straightforward. Unlike the hydrodynamic equations, the inviscid MHD system is non-strictly hyperbolic with non-convex fluxes. The wave structures and shock types are different from their hydrodynamic counterparts. Many of the non-traditional hydrodynamic shocks are not fully understood. 
Consequently, reliable and highly accurate numerical schemes for multiscale MHD equations pose a great challenge to algorithm development. In addition, controlling the numerical error of the divergence free condition of the magnetic fields for high order methods has been a stumbling block. Lower order methods are not practical for the astrophysical problems in question. We propose to extend our hydrodynamics schemes to the MHD equations with several desired properties over commonly used MHD schemes.

  20. A Review of Natural Joint Systems and Numerical Investigation of Bio-Inspired GFRP-to-Steel Joints

    PubMed Central

    Avgoulas, Evangelos I.; Sutcliffe, Michael P. F.

    2016-01-01

    There are a great variety of joint types used in nature which can inspire engineering joints. In order to design such biomimetic joints, it is at first important to understand how biological joints work. A comprehensive literature review, considering natural joints from a mechanical point of view, was undertaken. This was used to develop a taxonomy based on the different methods/functions that nature successfully uses to attach dissimilar tissues. One of the key methods that nature uses to join dissimilar materials is a transitional zone of stiffness at the insertion site. This method was used to propose bio-inspired solutions with a transitional zone of stiffness at the joint site for several glass fibre reinforced plastic (GFRP) to steel adhesively bonded joint configurations. The transition zone was used to reduce the material stiffness mismatch of the joint parts. A numerical finite element model was used to identify the optimum variation in material stiffness that minimises potential failure of the joint. The best bio-inspired joints showed a 118% increase of joint strength compared to the standard joints. PMID:28773688

  1. Efficient Numerical Diagonalization of Hermitian 3 × 3 Matrices

    NASA Astrophysics Data System (ADS)

    Kopp, Joachim

    A very common problem in science is the numerical diagonalization of symmetric or hermitian 3 × 3 matrices. Since standard "black box" packages may be too inefficient if the number of matrices is large, we study several alternatives. We consider optimized implementations of the Jacobi, QL, and Cuppen algorithms and compare them with an analytical method relying on Cardano's formula for the eigenvalues and on vector cross products for the eigenvectors. Jacobi is the most accurate, but also the slowest method, while QL and Cuppen are good general purpose algorithms. The analytical algorithm outperforms the others by more than a factor of 2, but becomes inaccurate or may even fail completely if the matrix entries differ greatly in magnitude. This can mostly be circumvented by using a hybrid method, which falls back to QL if conditions are such that the analytical calculation might become too inaccurate. For all algorithms, we give an overview of the underlying mathematical ideas, and present detailed benchmark results. C and Fortran implementations of our code are available for download.
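    The eigenvalue part of the analytical method can be sketched with the standard trigonometric form of Cardano's solution of the characteristic cubic for a real symmetric 3 × 3 matrix (the eigenvector computation, the hybrid fallback, and the paper's C/Fortran tuning are omitted):

```python
import math

def symmetric_3x3_eigenvalues(A):
    # Trigonometric (Cardano) solution of the characteristic cubic of a
    # real symmetric 3x3 matrix; returns the eigenvalues in ascending order.
    a11, a12, a13 = A[0]
    a22, a23 = A[1][1], A[1][2]
    a33 = A[2][2]
    p1 = a12**2 + a13**2 + a23**2
    q = (a11 + a22 + a33) / 3.0
    if p1 == 0.0:                      # matrix is already diagonal
        return sorted([a11, a22, a33])
    p2 = (a11 - q)**2 + (a22 - q)**2 + (a33 - q)**2 + 2.0 * p1
    p = math.sqrt(p2 / 6.0)
    # B = (A - q I) / p ; r = det(B) / 2, clamped against round-off
    b11, b22, b33 = (a11 - q) / p, (a22 - q) / p, (a33 - q) / p
    b12, b13, b23 = a12 / p, a13 / p, a23 / p
    det_b = (b11 * (b22 * b33 - b23 * b23)
             - b12 * (b12 * b33 - b23 * b13)
             + b13 * (b12 * b23 - b22 * b13))
    r = max(-1.0, min(1.0, det_b / 2.0))
    phi = math.acos(r) / 3.0
    e1 = q + 2.0 * p * math.cos(phi)                       # largest
    e3 = q + 2.0 * p * math.cos(phi + 2.0 * math.pi / 3.0)  # smallest
    e2 = 3.0 * q - e1 - e3                                 # trace identity
    return sorted([e1, e2, e3])

# Demo on a tridiagonal Toeplitz matrix with known spectrum 2 -/+ sqrt(2), 2.
vals = symmetric_3x3_eigenvalues([[2.0, 1.0, 0.0],
                                  [1.0, 2.0, 1.0],
                                  [0.0, 1.0, 2.0]])
```

    As the abstract warns, this closed form loses accuracy when the matrix entries differ greatly in magnitude, which is what motivates the hybrid fallback to QL.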

  2. Consistent lattice Boltzmann methods for incompressible axisymmetric flows

    NASA Astrophysics Data System (ADS)

    Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Yin, Linmao; Zhao, Ya; Chew, Jia Wei

    2016-08-01

    In this work, consistent lattice Boltzmann (LB) methods for incompressible axisymmetric flows are developed based on two efficient axisymmetric LB models available in the literature. In accord with their respective original models, the proposed axisymmetric models evolve within the framework of the standard LB method and the source terms contain no gradient calculations. Moreover, the incompressibility conditions are realized with the Hermite expansion, thus the compressibility errors arising in the existing models are expected to be reduced by the proposed incompressible models. In addition, an extra relaxation parameter is added to the Bhatnagar-Gross-Krook collision operator to suppress the effect of the ghost variable and thus the numerical stability of the present models is significantly improved. Theoretical analyses, based on the Chapman-Enskog expansion and the equivalent moment system, are performed to derive the macroscopic equations from the LB models and the resulting truncation terms (i.e., the compressibility errors) are investigated. In addition, numerical validations are carried out based on four well-acknowledged benchmark tests and the accuracy and applicability of the proposed incompressible axisymmetric LB models are verified.
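    As a concrete illustration of the standard-LB framework these models build on (though not of the axisymmetric or Hermite-expansion machinery itself), here is a minimal D1Q3 lattice Boltzmann solver with a single-relaxation-time BGK collision operator for pure diffusion; the grid size, relaxation time, and initial condition are arbitrary choices:

```python
import numpy as np

# D1Q3 lattice Boltzmann scheme for 1-D diffusion with BGK collisions.
# Diffusivity D = c_s^2 (tau - 1/2) with lattice sound speed c_s^2 = 1/3.
L, tau, steps = 64, 1.0, 500
w = np.array([2.0 / 3.0, 1.0 / 6.0, 1.0 / 6.0])   # lattice weights
c = np.array([0, 1, -1])                          # lattice velocities
x = np.arange(L)
rho = 1.0 + 0.1 * np.sin(2.0 * np.pi * x / L)     # initial density wave
f = w[:, None] * rho[None, :]                     # start at equilibrium

for _ in range(steps):
    feq = w[:, None] * rho[None, :]               # zero-velocity equilibrium
    f += -(f - feq) / tau                         # BGK collision
    for i in range(3):                            # streaming, periodic
        f[i] = np.roll(f[i], c[i])
    rho = f.sum(axis=0)                           # zeroth moment

amplitude = 0.5 * (rho.max() - rho.min())
D = (1.0 / 3.0) * (tau - 0.5)
expected = 0.1 * np.exp(-D * (2.0 * np.pi / L) ** 2 * steps)
```

    The decayed sine amplitude matches the analytic diffusion solution with D = c_s²(τ − ½) to well under a percent, the kind of consistency check that the paper's Chapman-Enskog analysis and benchmarks formalize.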

  3. A boundary integral equation method using auxiliary interior surface approach for acoustic radiation and scattering in two dimensions.

    PubMed

    Yang, S A

    2002-10-01

    This paper presents an effective solution method for predicting acoustic radiation and scattering fields in two dimensions. The difficulty of the fictitious characteristic frequency is overcome by incorporating into the body surface an auxiliary interior surface that satisfies a certain boundary condition. This process gives rise to a set of uniquely solvable boundary integral equations. Distributing monopoles with unknown strengths over the body and interior surfaces yields the simple source formulation. The modified boundary integral equations are further transformed to ordinary ones that contain nonsingular kernels only. This implementation allows direct application of standard quadrature formulas over the entire integration domain; that is, the collocation points are exactly the positions at which the integration points are located. Selecting the interior surface is an easy task. Moreover, only a few corresponding interior nodal points are sufficient for the computation. Numerical examples comprise acoustic radiation and scattering by acoustically hard elliptic and rectangular cylinders. Comparisons with analytical solutions are made. Numerical results demonstrate the efficiency and accuracy of the current solution method.

  4. A Review of Natural Joint Systems and Numerical Investigation of Bio-Inspired GFRP-to-Steel Joints.

    PubMed

    Avgoulas, Evangelos I; Sutcliffe, Michael P F

    2016-07-12

    There are a great variety of joint types used in nature which can inspire engineering joints. In order to design such biomimetic joints, it is at first important to understand how biological joints work. A comprehensive literature review, considering natural joints from a mechanical point of view, was undertaken. This was used to develop a taxonomy based on the different methods/functions that nature successfully uses to attach dissimilar tissues. One of the key methods that nature uses to join dissimilar materials is a transitional zone of stiffness at the insertion site. This method was used to propose bio-inspired solutions with a transitional zone of stiffness at the joint site for several glass fibre reinforced plastic (GFRP) to steel adhesively bonded joint configurations. The transition zone was used to reduce the material stiffness mismatch of the joint parts. A numerical finite element model was used to identify the optimum variation in material stiffness that minimises potential failure of the joint. The best bio-inspired joints showed a 118% increase of joint strength compared to the standard joints.

  5. Quantitative analysis of time-resolved microwave conductivity data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reid, Obadiah G.; Moore, David T.; Li, Zhen

    Flash-photolysis time-resolved microwave conductivity (fp-TRMC) is a versatile, highly sensitive technique for studying the complex photoconductivity of solution, solid, and gas-phase samples. The purpose of this paper is to provide a standard reference work for experimentalists interested in using microwave conductivity methods to study functional electronic materials, describing how to conduct and calibrate these experiments in order to obtain quantitative results. The main focus of the paper is on calculating the calibration factor, K, which is used to connect the measured change in microwave power absorption to the conductance of the sample. We describe the standard analytical formulae that have been used in the past, and compare them to numerical simulations. This comparison shows that the most widely used analytical analysis of fp-TRMC data systematically under-estimates the transient conductivity by ~60%. We suggest a more accurate semi-empirical way of calibrating these experiments. However, we emphasize that the full numerical calculation is necessary to quantify both transient and steady-state conductance for arbitrary sample properties and geometry.

  6. Quantitative analysis of time-resolved microwave conductivity data

    DOE PAGES

    Reid, Obadiah G.; Moore, David T.; Li, Zhen; ...

    2017-11-10

    Flash-photolysis time-resolved microwave conductivity (fp-TRMC) is a versatile, highly sensitive technique for studying the complex photoconductivity of solution, solid, and gas-phase samples. The purpose of this paper is to provide a standard reference work for experimentalists interested in using microwave conductivity methods to study functional electronic materials, describing how to conduct and calibrate these experiments in order to obtain quantitative results. The main focus of the paper is on calculating the calibration factor, K, which is used to connect the measured change in microwave power absorption to the conductance of the sample. We describe the standard analytical formulae that have been used in the past, and compare them to numerical simulations. This comparison shows that the most widely used analytical analysis of fp-TRMC data systematically under-estimates the transient conductivity by ~60%. We suggest a more accurate semi-empirical way of calibrating these experiments. However, we emphasize that the full numerical calculation is necessary to quantify both transient and steady-state conductance for arbitrary sample properties and geometry.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solis, S. E.; Centro de Investigacion e Instrumentacion e Imagenologia Medica, Universidad Autonoma Metropolitana Iztapalapa, Mexico, DF 09340; Hernandez, J. A.

    Arrays of antennas have been widely accepted for magnetic resonance imaging applications due to their high signal-to-noise ratio (SNR) over large volumes of interest. A new surface coil based on the magnetron tube and called slotted surface coil, has been recently introduced by our group. This coil design experimentally demonstrated a significant improvement over the circular-shaped coil when used in the receive-only mode. The slotted coils formed a two-sheet structure with a 90 deg. separation and each coil had 6 circular slots. Numerical simulations were performed using the finite element method for this coil design to study the behaviour of the array magnetic field. Then, we developed a two-coil array for brain magnetic resonance imaging to be operated at the resonant frequency of 170 MHz in the transceiver mode. Phantom images were acquired with our coil array and standard pulse sequences on a research-dedicated 4 Tesla scanner. Numerical simulations demonstrated that electromagnetic interaction between the coil elements is negligible, and that the magnetic field showed a good uniformity. In vitro images showed the feasibility of this coil array for standard pulses for high field magnetic resonance imaging.

  8. Groundwater-surface water interactions across scales in a boreal landscape investigated using a numerical modelling approach

    NASA Astrophysics Data System (ADS)

    Jutebring Sterte, Elin; Johansson, Emma; Sjöberg, Ylva; Huseby Karlsen, Reinert; Laudon, Hjalmar

    2018-05-01

    Groundwater and surface-water interactions are regulated by catchment characteristics and complex inter- and intra-annual variations in climatic conditions that are not yet fully understood. Our objective was to investigate the influence of catchment characteristics and freeze-thaw processes on surface and groundwater interactions in a boreal landscape, the Krycklan catchment in Sweden. We used a numerical modelling approach and sub-catchment evaluation method to identify and evaluate fundamental catchment characteristics and processes. The model reproduced observed stream discharge patterns of the 14 sub-catchments and the dynamics of the 15 groundwater wells with an average accumulated discharge error of 1% (15% standard deviation) and an average groundwater-level mean error of 0.1 m (0.23 m standard deviation). We show how peatland characteristics dampen the effect of intense rain, and how soil freeze-thaw processes regulate surface and groundwater partitioning during snowmelt. With these results, we demonstrate the importance of defining, understanding and quantifying the role of landscape heterogeneity and sub-catchment characteristics for accurately representing catchment hydrological functioning.

  9. Low-sensitivity H ∞ filter design for linear delta operator systems with sampling time jitter

    NASA Astrophysics Data System (ADS)

    Guo, Xiang-Gui; Yang, Guang-Hong

    2012-04-01

    This article is concerned with the problem of designing H ∞ filters for a class of linear discrete-time systems with low sensitivity to sampling time jitter via the delta operator approach. A delta-domain model is used to avoid the inherent numerical ill-conditioning resulting from the use of the standard shift-domain model at high sampling rates. Based on the projection lemma in combination with the descriptor system approach often used to solve problems related to delay, a novel bounded real lemma with three slack variables for delta operator systems is presented. A sensitivity approach based on this novel lemma is proposed to mitigate the effects of sampling time jitter on system performance. Then, the problem of designing a low-sensitivity filter can be reduced to a convex optimisation problem. An important consideration in the filter design is the optimal trade-off between the standard H ∞ criterion and the sensitivity of the transfer function with respect to sampling time jitter. Finally, a numerical example demonstrating the validity of the proposed design method is given.
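    The numerical advantage of the delta operator can be seen in a small illustration, separate from the filter design itself: for a continuous-time pole a sampled with period T, the shift-domain pole e^{aT} crowds toward 1 as T shrinks, while the delta-domain pole (e^{aT} − 1)/T stays near a. The pole value and sampling periods below are arbitrary choices.

```python
import math

a = -2.0                                  # continuous-time pole (arbitrary)
poles = {}
for T in (1e-1, 1e-3, 1e-6):              # increasingly fast sampling
    z_pole = math.exp(a * T)              # shift-domain pole: -> 1 as T -> 0
    delta_pole = (z_pole - 1.0) / T       # delta-domain pole: -> a as T -> 0
    poles[T] = (z_pole, delta_pole)
```

    At T = 10⁻⁶ the shift pole sits within 2·10⁻⁶ of 1, so distinct dynamics become numerically hard to distinguish in shift form, whereas the delta parameter remains close to −2.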

  10. Numerical evaluation of discontinuous and nonconforming finite element methods in nonlinear solid mechanics

    NASA Astrophysics Data System (ADS)

    Bayat, Hamid Reza; Krämer, Julian; Wunderlich, Linus; Wulfinghoff, Stephan; Reese, Stefanie; Wohlmuth, Barbara; Wieners, Christian

    2018-03-01

    This work presents a systematic study of discontinuous and nonconforming finite element methods for linear elasticity, finite elasticity, and small strain plasticity. In particular, we consider new hybrid methods with additional degrees of freedom on the skeleton of the mesh and allowing for a local elimination of the element-wise degrees of freedom. We show that this process leads to a well-posed approximation scheme. The quality of the new methods with respect to locking and anisotropy is compared with standard and in addition locking-free conforming methods as well as established (non-) symmetric discontinuous Galerkin methods with interior penalty. For several benchmark configurations, we show that all methods converge asymptotically for fine meshes and that in many cases the hybrid methods are more accurate for a fixed size of the discrete system.

  11. Numerical calculation of thermo-mechanical problems at large strains based on complex step derivative approximation of tangent stiffness matrices

    NASA Astrophysics Data System (ADS)

    Balzani, Daniel; Gandhi, Ashutosh; Tanaka, Masato; Schröder, Jörg

    2015-05-01

    In this paper a robust approximation scheme for the numerical calculation of tangent stiffness matrices is presented in the context of nonlinear thermo-mechanical finite element problems and its performance is analyzed. The scheme extends the approach proposed in Kim et al. (Comput Methods Appl Mech Eng 200:403-413, 2011) and Tanaka et al. (Comput Methods Appl Mech Eng 269:454-470, 2014) and is based on applying the complex-step-derivative approximation to the linearizations of the weak forms of the balance of linear momentum and the balance of energy. By incorporating consistent perturbations along the imaginary axis to the displacement as well as thermal degrees of freedom, we demonstrate that numerical tangent stiffness matrices can be obtained with accuracy up to computer precision leading to quadratically converging schemes. The main advantage of this approach is that contrary to the classical forward difference scheme no round-off errors due to floating-point arithmetic exist within the calculation of the tangent stiffness. This enables arbitrarily small perturbation values and therefore leads to robust schemes even when choosing small values. An efficient algorithmic treatment is presented which enables a straightforward implementation of the method in any standard finite-element program. By means of thermo-elastic and thermo-elastoplastic boundary value problems at finite strains the performance of the proposed approach is analyzed.
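    The core of the approach is the complex-step derivative formula f′(x) ≈ Im f(x + ih)/h, which involves no difference of nearby numbers and therefore tolerates arbitrarily small h. A scalar sketch (the paper applies the same idea to the linearized weak forms, not to this toy function):

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    # Complex-step approximation: f'(x) ~ Im(f(x + ih)) / h.
    # No subtractive cancellation occurs, so round-off does not grow
    # as h shrinks and accuracy reaches machine precision.
    return np.imag(f(x + 1j * h)) / h

f = lambda x: np.exp(x) * np.sin(x)        # toy function (assumption)
x0 = 0.7
exact = np.exp(x0) * (np.sin(x0) + np.cos(x0))
cs = complex_step_derivative(f, x0)
fd = (f(x0 + 1e-8) - f(x0)) / 1e-8         # forward difference for contrast
```

    The complex-step value agrees with the exact derivative to machine precision, while the forward difference is limited to roughly half the available digits by cancellation, which is exactly the robustness argument made above for the tangent matrices.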

  12. A High-Resolution Godunov Method for Compressible Multi-Material Flow on Overlapping Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banks, J W; Schwendeman, D W; Kapila, A K

    2006-02-13

    A numerical method is described for inviscid, compressible, multi-material flow in two space dimensions. The flow is governed by the multi-material Euler equations with a general mixture equation of state. Composite overlapping grids are used to handle complex flow geometry and block-structured adaptive mesh refinement (AMR) is used to locally increase grid resolution near shocks and material interfaces. The discretization of the governing equations is based on a high-resolution Godunov method, but includes an energy correction designed to suppress numerical errors that develop near a material interface for standard, conservative shock-capturing schemes. The energy correction is constructed based on a uniform pressure-velocity flow and is significant only near the captured interface. A variety of two-material flows are presented to verify the accuracy of the numerical approach and to illustrate its use. These flows assume an equation of state for the mixture based on Jones-Wilkins-Lee (JWL) forms for the components. This equation of state includes a mixture of ideal gases as a special case. Flow problems considered include unsteady one-dimensional shock-interface collision, steady interaction of a planar interface and an oblique shock, planar shock interaction with a collection of gas-filled cylindrical inhomogeneities, and the impulsive motion of the two-component mixture in a rigid cylindrical vessel.

  13. Two-dimensional numerical simulation of flow around three-stranded rope

    NASA Astrophysics Data System (ADS)

    Wang, Xinxin; Wan, Rong; Huang, Liuyi; Zhao, Fenfang; Sun, Peng

    2016-08-01

    Three-stranded rope is widely used in fishing gear and mooring systems. Results of numerical simulation are presented for flow around a three-stranded rope in uniform flow. The simulation was carried out to study the hydrodynamic characteristics of the pressure and velocity fields of steady incompressible laminar and turbulent wakes behind a three-stranded rope. A three-cylinder configuration and a single circular cylinder configuration are used to model the three-stranded rope in the two-dimensional simulation. The governing equations, the Navier-Stokes equations, are solved using a two-dimensional finite volume method. The turbulent flow is simulated using the standard κ-ɛ model and the shear-stress transport (SST) κ-ω model. The drag of the three-cylinder model and single cylinder model is calculated for different Reynolds numbers by using the control volume analysis method. The pressure coefficient is also calculated for the turbulent model and laminar model based on the control surface method. From the comparison of the drag coefficient and the pressure of the single cylinder and three-cylinder models, it is found that the drag coefficients of the three-cylinder model are generally 1.3-1.5 times those of the single circular cylinder for different Reynolds numbers. Compared with water tank test data, the numerical results of the three-cylinder model are closer to the experimental results than those of the single cylinder model.

  14. Valx: A System for Extracting and Structuring Numeric Lab Test Comparison Statements from Text.

    PubMed

    Hao, Tianyong; Liu, Hongfang; Weng, Chunhua

    2016-05-17

    To develop an automated method for extracting and structuring numeric lab test comparison statements from text and evaluate the method using clinical trial eligibility criteria text. Leveraging semantic knowledge from the Unified Medical Language System (UMLS) and domain knowledge acquired from the Internet, Valx takes seven steps to extract and normalize numeric lab test expressions: 1) text preprocessing, 2) numeric, unit, and comparison operator extraction, 3) variable identification using hybrid knowledge, 4) variable - numeric association, 5) context-based association filtering, 6) measurement unit normalization, and 7) heuristic rule-based comparison statements verification. Our reference standard was the consensus-based annotation among three raters for all comparison statements for two variables, i.e., HbA1c and glucose, identified from all of Type 1 and Type 2 diabetes trials in ClinicalTrials.gov. The precision, recall, and F-measure for structuring HbA1c comparison statements were 99.6%, 98.1%, 98.8% for Type 1 diabetes trials, and 98.8%, 96.9%, 97.8% for Type 2 diabetes trials, respectively. The precision, recall, and F-measure for structuring glucose comparison statements were 97.3%, 94.8%, 96.1% for Type 1 diabetes trials, and 92.3%, 92.3%, 92.3% for Type 2 diabetes trials, respectively. Valx is effective at extracting and structuring free-text lab test comparison statements in clinical trial summaries. Future studies are warranted to test its generalizability beyond eligibility criteria text. The open-source Valx enables its further evaluation and continued improvement among the collaborative scientific community.
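    Steps 2-4 of the pipeline can be caricatured with a toy extractor; the regular expression and the two-variable lexicon below are hypothetical simplifications for illustration, not Valx's actual hybrid-knowledge rules:

```python
import re

# Toy sketch of numeric/unit/operator extraction and variable-numeric
# association. The pattern and variable list are illustrative assumptions.
PATTERN = re.compile(
    r"(?P<var>HbA1c|glucose)\s*"           # variable lexicon (hypothetical)
    r"(?P<op><=|>=|<|>|=)\s*"              # comparison operator
    r"(?P<value>\d+(?:\.\d+)?)\s*"         # numeric value
    r"(?P<unit>%|mmol/L|mg/dL)?",          # optional measurement unit
    re.IGNORECASE,
)

def extract_comparisons(text):
    return [(m["var"], m["op"], float(m["value"]), m["unit"])
            for m in PATTERN.finditer(text)]

criteria = "Inclusion: HbA1c <= 10.5 % and fasting glucose > 126 mg/dL"
```

    Real criteria text still requires the remaining steps (context-based filtering, unit normalization, and rule-based verification) to avoid false associations.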

  15. Maximizing power output from continuous-wave single-frequency fiber amplifiers.

    PubMed

    Ward, Benjamin G

    2015-02-15

    This Letter reports on a method of simultaneously maximizing the power output from highly saturated cladding-pumped continuous-wave single-frequency fiber amplifiers, taking into account the stimulated Brillouin scattering and transverse modal instability thresholds. This results in a design figure of merit depending on the fundamental mode overlap with the doping profile, the peak Brillouin gain coefficient, and the peak mode coupling gain coefficient. This figure of merit is then numerically analyzed for three candidate fiber designs including standard, segmented acoustically tailored, and micro-segmented acoustically tailored photonic-crystal fibers. It is found that each of the latter two fibers should enable a 50% higher output power than standard photonic crystal fiber.

  16. A direct and fast method to monitor lipid oxidation progress in model fatty acid methyl esters by high-performance size-exclusion chromatography.

    PubMed

    Márquez-Ruiz, G; Holgado, F; García-Martínez, M C; Dobarganes, M C

    2007-09-21

    A new method based on high-performance size-exclusion chromatography (HPSEC) is proposed to quantitate primary and secondary oxidation compounds in model fatty acid methyl esters (FAMEs). The method consists of simply injecting an aliquot of the sample into the HPSEC system, without preliminary isolation procedures or addition of an internal standard. Four groups of compounds can be quantified, namely, unoxidised FAME, oxidised FAME monomers including hydroperoxides, FAME dimers and FAME polymers. Results showed high repeatability and sensitivity, and substantial advantages versus determination of residual substrate by gas-liquid chromatography. Applicability of the method is shown through selected data from numerous oxidation experiments on pure FAME, mainly methyl linoleate, at ambient and moderate temperatures.

  17. Molecular testing for clinical diagnosis and epidemiological investigations of intestinal parasitic infections.

    PubMed

    Verweij, Jaco J; Stensvold, C Rune

    2014-04-01

    Over the past few decades, nucleic acid-based methods have been developed for the diagnosis of intestinal parasitic infections. Advantages of nucleic acid-based methods are numerous; typically, these include increased sensitivity and specificity and simpler standardization of diagnostic procedures. DNA samples can also be stored and used for genetic characterization and molecular typing, providing a valuable tool for surveys and surveillance studies. A variety of technologies have been applied, and some specific and general pitfalls and limitations have been identified. This review provides an overview of the multitude of methods that have been reported for the detection of intestinal parasites and offers some guidance in applying these methods in the clinical laboratory and in epidemiological studies.

  18. Molecular Testing for Clinical Diagnosis and Epidemiological Investigations of Intestinal Parasitic Infections

    PubMed Central

    Stensvold, C. Rune

    2014-01-01

    SUMMARY Over the past few decades, nucleic acid-based methods have been developed for the diagnosis of intestinal parasitic infections. Advantages of nucleic acid-based methods are numerous; typically, these include increased sensitivity and specificity and simpler standardization of diagnostic procedures. DNA samples can also be stored and used for genetic characterization and molecular typing, providing a valuable tool for surveys and surveillance studies. A variety of technologies have been applied, and some specific and general pitfalls and limitations have been identified. This review provides an overview of the multitude of methods that have been reported for the detection of intestinal parasites and offers some guidance in applying these methods in the clinical laboratory and in epidemiological studies. PMID:24696439

  19. First-Principles Lattice Dynamics Method for Strongly Anharmonic Crystals

    NASA Astrophysics Data System (ADS)

    Tadano, Terumasa; Tsuneyuki, Shinji

    2018-04-01

    We review our recent development of a first-principles lattice dynamics method that can treat anharmonic effects nonperturbatively. The method is based on the self-consistent phonon theory, and temperature-dependent phonon frequencies can be calculated efficiently by incorporating recent numerical techniques to estimate anharmonic force constants. The validity of our approach is demonstrated through applications to cubic strontium titanate, where overall good agreement with experimental data is obtained for phonon frequencies and lattice thermal conductivity. We also show the feasibility of highly accurate calculations based on a hybrid exchange-correlation functional within the present framework. Our method provides a new way of studying lattice dynamics in severely anharmonic materials where the standard harmonic approximation and the perturbative approach break down.

  20. A surrogate accelerated multicanonical Monte Carlo method for uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Wu, Keyi; Li, Jinglai

    2016-09-01

    In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter y. The performance parameter y is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function (PDF) of y. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithms, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitudes of speedup over the standard Monte Carlo methods.
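    The benefit of biasing samples toward rare regions, which MMC automates adaptively, can be seen in a fixed-proposal importance sampling sketch; the target N(0,1), the threshold, and the proposal N(4,1) are illustrative choices, not the MMC algorithm itself:

```python
import math
import random

# Estimate the rare-tail probability p = P(Y > 4) for Y ~ N(0, 1).
random.seed(0)
N = 100_000
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))     # ~3.17e-5

# Plain Monte Carlo: almost no samples land in the tail.
plain = sum(random.gauss(0.0, 1.0) > 4.0 for _ in range(N)) / N

# Importance sampling from the shifted proposal N(4, 1) with
# likelihood-ratio weights w(x) = phi(x) / phi(x - 4) = exp(-4x + 8).
acc = 0.0
for _ in range(N):
    x = random.gauss(4.0, 1.0)
    if x > 4.0:
        acc += math.exp(-4.0 * x + 8.0)
is_est = acc / N
```

    With the same 10⁵ samples, plain Monte Carlo sees only a handful of tail hits, while the weighted estimate lands within a few percent of the exact value; MMC extends this idea by learning the biasing distribution adaptively across the whole range of y.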

  1. The structured menstrual history: developing a tool to facilitate diagnosis and aid in symptom management.

    PubMed

    Matteson, Kristen A; Munro, Malcolm G; Fraser, Ian S

    2011-09-01

    Abnormal uterine bleeding (AUB) is a prevalent symptom that encompasses abnormalities in menstrual regularity, duration, frequency and/or volume, and it is encountered frequently by both primary care physicians and obstetrician-gynecologists. Research on AUB has used numerous methods to measure bleeding and assess symptoms, but the lack of universally accepted outcome measures hinders the quality of research and the ability of clinical investigators to collaborate in multicenter trials. Similarly, clinical care for women reporting heavy, prolonged, or irregular menstrual bleeding is not optimized because standard ways of evaluating symptoms and change in symptoms over time do not exist. This article (1) describes the current methods of evaluating women with AUB, both in research and clinical care, and (2) offers suggestions for the development of a standardized structured menstrual history for use in both research and clinical care. © Thieme Medical Publishers.

  2. An investigation of dynamic-analysis methods for variable-geometry structures

    NASA Technical Reports Server (NTRS)

    Austin, F.

    1980-01-01

    Selected space structure configurations were reviewed in order to define dynamic analysis problems associated with variable geometry. The dynamics of a beam being constructed from a flexible base and the relocation of the completed beam by rotating the remote manipulator system about the shoulder joint were selected. Equations of motion were formulated in physical coordinates for both of these problems, and FORTRAN programs were developed to generate solutions by numerically integrating the equations. These solutions served as a standard of comparison to gauge the accuracy of approximate solution techniques that were developed and studied. Good control was achieved in both problems. Unstable control system coupling with the system flexibility did not occur. An approximate method was developed for each problem to enable the analyst to investigate variable geometry effects during a short time span using standard fixed geometry programs such as NASTRAN. The average angle and average length techniques are discussed.

  3. Model of Numerical Spatial Classification for Sustainable Agriculture in Badung Regency and Denpasar City, Indonesia

    NASA Astrophysics Data System (ADS)

    Trigunasih, N. M.; Lanya, I.; Subadiyasa, N. N.; Hutauruk, J.

    2018-02-01

    The increasing number and activity of the population, as people seek to meet their daily needs, greatly affect the utilization of land resources. Land needs for population activities continue to grow, while the availability of land is limited; land use therefore changes, and the resulting problems are land degradation and the conversion of agricultural land to non-agricultural uses. The objectives of this research are: (1) to determine the parameters of a spatial numerical classification for sustainable food agriculture in Badung Regency and Denpasar City; (2) to project the food balance of Badung Regency and Denpasar City in 2020, 2030, 2040, and 2050; (3) to specify the function of the spatial numerical classification in constructing a zonation model of sustainable agricultural land in Badung Regency and Denpasar City; and (4) to determine an appropriate model for protecting sustainable agricultural land in space and time in Badung and Denpasar regencies. The quantitative methods used in this research include: survey, soil analysis, spatial data development, geoprocessing analysis (spatial overlay and proximity analysis), interpolation of raster digital elevation model data, and visualization (cartography). Qualitative methods consisted of literature studies and interviews. A total of 11 parameters were observed for Badung Regency and 9 for Denpasar City. The numerical classification parameter analysis used the standard deviation and mean of the population data, and the relationship between rice fields and the projected food balance was modelled. The results showed that the number of numerical classification parameters differs between the rural area (Badung) and the urban area (Denpasar), with fewer parameters in the urban area.
Based on the numerical classification weighting and scores, the parameter analysis yielded population distributions characterized by their standard deviation and mean. The numerical classification produced five models, divided into three zones (sustainable, buffer, and converted areas) in Denpasar and Badung. The population curves of the parameter analysis were normal in Denpasar but abnormal in Badung Regency; modelling was therefore carried out over the whole region in Denpasar, but district by district in Badung. The modelled relationship between land and the projected food balance was expressed in terms of sustainable land area in Badung (rural), whereas in Denpasar (urban) it was expressed through the linkage to green open space in the Denpasar regional land-use plan for 2011-2031.

  4. New generation all-silica based optical elements for high power laser systems

    NASA Astrophysics Data System (ADS)

    Tolenis, T.; Grinevičiūtė, L.; Melninkaitis, A.; Selskis, A.; Buzelis, R.; Mažulė, L.; Drazdys, R.

    2017-08-01

    Laser resistance of optical elements is one of the major topics in photonics. Various routes have been taken to improve optical coatings, including, but not limited to, materials engineering and optimisation of the electric field distribution in multilayers. Decades of research have shown that high band-gap materials, such as silica, are highly resistant to laser light. Unfortunately, only anti-reflection coatings made entirely of silica have been demonstrated to date. We present a novel route in materials engineering, capable of producing high-reflection optical elements using only SiO2 and the GLancing Angle Deposition (GLAD) method. The technique involves depositing a columnar structure and tailoring the refractive index of the silica throughout the coating thickness. Numerous analyses indicate the superior properties of GLAD coatings compared with standard methods of Bragg mirror production. Several groups of optical components are presented, including anti-reflection coatings and Bragg mirrors. Structural and optical characterisation of the method has been performed and compared with standard methods. All results indicate the feasibility of a new generation of coatings for high power laser systems.

  5. CBR anisotropy from primordial gravitational waves in inflationary cosmologies

    NASA Astrophysics Data System (ADS)

    Allen, Bruce; Koranda, Scott

    1994-09-01

    We examine stochastic temperature fluctuations of the cosmic background radiation (CBR) arising via the Sachs-Wolfe effect from gravitational wave perturbations produced in the early Universe. These temperature fluctuations are described by an angular correlation function C(γ). A new (more concise and general) derivation of C(γ) is given, and evaluated for inflationary-universe cosmologies. This yields standard results for angles γ greater than a few degrees, but new results for smaller angles, because we do not make standard long-wavelength approximations to the gravitational wave mode functions. The function C(γ) may be expanded in a series of Legendre polynomials; we use numerical methods to compare the coefficients of the resulting expansion in our exact calculation with standard (approximate) results. We also report some progress towards finding a closed form expression for C(γ).
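
    As an illustrative aside (not the authors' code), the operation behind the abstract's expansion of C(γ) in Legendre polynomials, projecting a function on [-1, 1] onto the P_l basis, can be sketched with Gauss-Legendre quadrature:

```python
import numpy as np

def legendre_coeffs(f, lmax, nquad=200):
    """Coefficients c_l with f(x) ~ sum_l c_l P_l(x) on [-1, 1].

    Uses Gauss-Legendre quadrature and the orthogonality relation
    int_{-1}^{1} P_l(x) P_m(x) dx = 2/(2l+1) * delta_lm.
    """
    x, w = np.polynomial.legendre.leggauss(nquad)
    fx = f(x)
    coeffs = []
    for l in range(lmax + 1):
        Pl = np.polynomial.legendre.Legendre.basis(l)(x)
        coeffs.append((2 * l + 1) / 2.0 * np.sum(w * fx * Pl))
    return np.array(coeffs)

# Sanity check on a toy function: x**2 = (1/3) P_0(x) + (2/3) P_2(x) exactly.
c = legendre_coeffs(lambda x: x**2, 4)
```

For an angular correlation function one would substitute x = cos γ before projecting.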

  6. Simple method for the generation of multiple homogeneous field volumes inside the bore of superconducting magnets

    PubMed Central

    Chou, Ching-Yu; Ferrage, Fabien; Aubert, Guy; Sakellariou, Dimitris

    2015-01-01

    Standard Magnetic Resonance magnets produce a single homogeneous field volume, where the analysis is performed. Nonetheless, several modern applications could benefit from the generation of multiple homogeneous field volumes along the axis and inside the bore of the magnet. In this communication, we propose a straightforward method using a combination of ring structures of permanent magnets in order to cancel the gradient of the stray field in a series of distinct volumes. These concepts were demonstrated numerically on an experimentally measured magnetic field profile. We discuss advantages and limitations of our method and present the key steps required for an experimental validation. PMID:26182891

  7. Transient loads analysis for space flight applications

    NASA Technical Reports Server (NTRS)

    Thampi, S. K.; Vidyasagar, N. S.; Ganesan, N.

    1992-01-01

    A significant part of the flight readiness verification process involves transient analysis of the coupled Shuttle-payload system to determine the low frequency transient loads. This paper describes a methodology for transient loads analysis and its implementation for the Spacelab Life Sciences Mission. The analysis is carried out using two major software tools - NASTRAN and an external FORTRAN code called EZTRAN. This approach is adopted to overcome some of the limitations of NASTRAN's standard transient analysis capabilities. The method uses Data Recovery Matrices (DRM) to improve computational efficiency. The mode acceleration method is fully implemented in the DRM formulation to recover accurate displacements, stresses, and forces. The advantages of the method are demonstrated through a numerical example.

  8. The method of fundamental solutions for computing acoustic interior transmission eigenvalues

    NASA Astrophysics Data System (ADS)

    Kleefeld, Andreas; Pieronek, Lukas

    2018-03-01

    We analyze the method of fundamental solutions (MFS) in two different versions with focus on the computation of approximate acoustic interior transmission eigenvalues in 2D for homogeneous media. Our approach is mesh- and integration free, but suffers in general from the ill-conditioning effects of the discretized eigenoperator, which we could then successfully balance using an approved stabilization scheme. Our numerical examples cover many of the common scattering objects and prove to be very competitive in accuracy with the standard methods for PDE-related eigenvalue problems. We finally give an approximation analysis for our framework and provide error estimates, which bound interior transmission eigenvalue deviations in terms of some generalized MFS output.
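
    The core of the MFS is easy to sketch. The toy below is a hedged illustration only, a plain Laplace Dirichlet problem on the unit disk rather than the paper's eigenvalue setting: boundary data are collocated by fundamental solutions centred on an exterior circle, and the ill-conditioning the authors mention shows up in the collocation matrix, absorbed here by a least-squares solve.

```python
import numpy as np

def mfs_laplace_disk(g, n_col=64, n_src=64, R=2.0):
    """Method of fundamental solutions for the Laplace Dirichlet problem
    on the unit disk: match boundary data g at n_col points on the unit
    circle with free-space fundamental solutions -log|x - y|/(2*pi)
    centred on a circle of radius R outside the domain."""
    tc = 2 * np.pi * np.arange(n_col) / n_col
    ts = 2 * np.pi * np.arange(n_src) / n_src
    col = np.column_stack([np.cos(tc), np.sin(tc)])      # collocation points
    src = R * np.column_stack([np.cos(ts), np.sin(ts)])  # source points
    A = -np.log(np.linalg.norm(col[:, None] - src[None], axis=2)) / (2 * np.pi)
    c, *_ = np.linalg.lstsq(A, g(col[:, 0], col[:, 1]), rcond=None)

    def u(x, y):
        d = np.hypot(x - src[:, 0], y - src[:, 1])
        return np.sum(c * (-np.log(d)) / (2 * np.pi))
    return u

# Exact harmonic test function: u(x, y) = x**2 - y**2.
u = mfs_laplace_disk(lambda x, y: x**2 - y**2)
```

The mesh- and integration-free character of the method is visible here: only point evaluations of the fundamental solution are needed.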

  9. Numerical modeling of exciton-polariton Bose-Einstein condensate in a microcavity

    NASA Astrophysics Data System (ADS)

    Voronych, Oksana; Buraczewski, Adam; Matuszewski, Michał; Stobińska, Magdalena

    2017-06-01

    A novel, optimized numerical method for modeling an exciton-polariton superfluid in a semiconductor microcavity is proposed. Exciton-polaritons are spin-carrying quasiparticles formed from photons strongly coupled to excitons. They possess unique properties, interesting from the point of view of fundamental research as well as numerous potential applications. However, their numerical modeling is challenging due to the structure of the nonlinear differential equations describing their evolution. In this paper, we propose to solve the equations with a modified Runge-Kutta method of 4th order, further optimized for efficient computations. The algorithms were implemented in the form of C++ programs fitted for parallel environments and utilizing vector instructions. The programs form the EPCGP suite, which has been used for theoretical investigation of exciton-polaritons. Catalogue identifier: AFBQ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AFBQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: BSD-3 No. of lines in distributed program, including test data, etc.: 2157 No. of bytes in distributed program, including test data, etc.: 498994 Distribution format: tar.gz Programming language: C++ with OpenMP extensions (main numerical program), Python (helper scripts). Computer: Modern PC (tested on AMD and Intel processors), HP BL2x220. Operating system: Unix/Linux and Windows. Has the code been vectorized or parallelized?: Yes (OpenMP) RAM: 200 MB for single run Classification: 7, 7.7. Nature of problem: An exciton-polariton superfluid is a novel, interesting physical system allowing investigation of high-temperature Bose-Einstein condensation of exciton-polaritons, quasiparticles carrying spin. They have attracted much attention due to their unique properties and potential applications in polariton-based optoelectronic integrated circuits. 
This is an out-of-equilibrium quantum system confined within a semiconductor microcavity. It is described by a set of nonlinear differential equations similar in spirit to the Gross-Pitaevskii (GP) equation, but their unique properties do not allow standard GP solving frameworks to be utilized. Finding an accurate and efficient numerical algorithm as well as development of optimized numerical software is necessary for effective theoretical investigation of exciton-polaritons. Solution method: A Runge-Kutta method of 4th order was employed to solve the set of differential equations describing exciton-polariton superfluids. The method was fitted for the exciton-polariton equations and further optimized. The C++ programs utilize OpenMP extensions and vector operations in order to fully utilize the computer hardware. Running time: 6h for 100 ps evolution, depending on the values of parameters
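
    The classical 4th-order Runge-Kutta step that the EPCGP suite builds on is standard; as a minimal self-contained sketch (in Python rather than the suite's C++, and on a toy nonlinear ODE rather than the polariton equations):

```python
def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Toy nonlinear test problem: y' = -y**2, y(0) = 1, exact y(t) = 1/(1 + t).
t, y, h = 0.0, 1.0, 0.01
for _ in range(100):                       # integrate up to t = 1
    y = rk4_step(lambda t, y: -y * y, t, y, h)
    t += h
```

For the actual polariton equations, the right-hand side is complex-valued and spatially coupled, but the stepping structure is the same.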

  10. Computer analysis of multicircuit shells of revolution by the field method

    NASA Technical Reports Server (NTRS)

    Cohen, G. A.

    1975-01-01

    The field method, presented previously for the solution of even-order linear boundary value problems defined on one-dimensional open branch domains, is extended to boundary value problems defined on one-dimensional domains containing circuits. This method converts the boundary value problem into two successive numerically stable initial value problems, which may be solved by standard forward integration techniques. In addition, a new method for the treatment of singular boundary conditions is presented. This method, which amounts to a partial interchange of the roles of force and displacement variables, is problem independent with respect to both accuracy and speed of execution. This method was implemented in a computer program to calculate the static response of ring stiffened orthotropic multicircuit shells of revolution to asymmetric loads. Solutions are presented for sample problems which illustrate the accuracy and efficiency of the method.

  11. Indicators of Arctic Sea Ice Bistability in Climate Model Simulations and Observations

    DTIC Science & Technology

    2014-09-30

    ultimately developed a novel mathematical method to solve the system of equations involving the addition of a numerical "ghost" layer, as described in the... balance models (EBMs) and (ii) seasonally-varying single-column models (SCMs). As described in Approach item #1, we developed an idealized model that... includes both latitudinal and seasonal variations (Fig. 1). The model reduces to a standard EBM or SCM as limiting cases in the parameter space, thus

  12. Use of the routing procedure to study dye and gas transport in the West Fork Trinity River, Texas

    USGS Publications Warehouse

    Jobson, Harvey E.; Rathbun, R.E.

    1984-01-01

    Rhodamine-WT dye, ethylene, and propane were injected at three sites along a 21.6-kilometer reach of the West Fork Trinity River below Fort Worth, Texas. Complete dye concentration versus time curves and peak gas concentrations were measured at three cross sections below each injection. The peak dye concentrations were located and samples were collected at about three-hour intervals for as many as six additional cross sections. These data were analyzed to determine the longitudinal dispersion coefficients as well as the gas desorption coefficients using both standard techniques and a numerical routing procedure. The routing procedure, using a Lagrangian transport model to minimize numerical dispersion, provided better estimates of the dispersion coefficient than did the method of moments. At a steady flow of about 0.76 m3/s, the dispersion coefficient varied from about 0.7 m2/s in a reach contained within a single deep pool to about 2.0 m2/s in a reach containing riffles and small pools. The bulk desorption coefficients computed using the routing procedure and the standard peak method were essentially the same. The liquid film coefficient could also be obtained using the routing procedure. Both the bulk desorption coefficient and the liquid film coefficient were much smaller in the pooled reach than in the reaches containing riffles.

  13. What can Numerical Computation do for the History of Science? (Study of an Orbit Drawn by Newton on a Letter to Hooke)

    NASA Astrophysics Data System (ADS)

    Stuchi, Teresa; Cardozo Dias, P.

    2013-05-01

    In a letter to Robert Hooke, Isaac Newton drew the orbit of a mass moving under a constant attracting central force. How he drew the orbit may indicate how and when he developed dynamic categories. Some historians claim that Newton used a method contrived by Hooke; others that he used some method of curvature. We prove geometrically that Hooke’s method is a second-order symplectic area-preserving algorithm, and that the method of curvature is a first-order algorithm without special features; then we integrate the Hamiltonian equations. Integration by the method of curvature can also be done by exploring geometric properties of curves. We compare three methods: Hooke’s method, the method of curvature and a first-order method. A fourth-order algorithm sets a standard of comparison. We analyze which of these methods best explains Newton’s drawing.

  14. What can numerical computation do for the history of science? (a study of an orbit drawn by Newton in a letter to Hooke)

    NASA Astrophysics Data System (ADS)

    Cardozo Dias, Penha Maria; Stuchi, T. J.

    2013-11-01

    In a letter to Robert Hooke, Isaac Newton drew the orbit of a mass moving under a constant attracting central force. The drawing of the orbit may indicate how and when Newton developed dynamic categories. Some historians claim that Newton used a method contrived by Hooke; others that he used some method of curvature. We prove that Hooke’s method is a second-order symplectic area-preserving algorithm, and the method of curvature is a first-order algorithm without special features; then we integrate the Hamiltonian equations. Integration by the method of curvature can also be done, exploring the geometric properties of curves. We compare three methods: Hooke’s method, the method of curvature and a first-order method. A fourth-order algorithm sets a standard of comparison. We analyze which of these methods best explains Newton’s drawing.
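
    The symplectic character the authors attribute to Hooke's construction can be sketched numerically. The fragment below is not Newton's or Hooke's method itself; it is a first-order "kick-then-drift" (symplectic Euler) step for a constant-magnitude attracting central force, which shares the hallmark of Hooke's second-order impulse construction: the energy error stays bounded over long integrations instead of drifting.

```python
import numpy as np

def kick_drift(r, v, h, f0=1.0):
    """One symplectic-Euler step for the constant-magnitude central
    force F = -f0 * r/|r| (potential U = f0*|r|): kick the velocity
    with the current force, then drift with the updated velocity."""
    a = -f0 * r / np.linalg.norm(r)
    v = v + h * a          # kick
    r = r + h * v          # drift
    return r, v

r = np.array([1.0, 0.0])
v = np.array([0.0, 0.8])
h = 1e-3
E0 = 0.5 * v @ v + np.linalg.norm(r)   # total energy with U = |r|
for _ in range(20000):                  # several orbital periods
    r, v = kick_drift(r, v, h)
E = 0.5 * v @ v + np.linalg.norm(r)
# For a symplectic scheme the energy error remains small and bounded.
```

A non-symplectic first-order scheme (drifting with the old velocity) shows a secular energy drift under the same test.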

  15. Numerical modeling of interface displacement in heterogeneously wetting porous media

    NASA Astrophysics Data System (ADS)

    Hiller, T.; Brinkmann, M.; Herminghaus, S.

    2013-12-01

    We use the mesoscopic particle method stochastic rotation dynamics (SRD) to simulate immiscible multi-phase flow on the pore and sub-pore scale in three dimensions. As an extension to the standard SRD method, we present an approach to implementing complex wettability on heterogeneous surfaces. We use 3D SRD to simulate immiscible two-phase flow through a model porous medium (a disordered packing of spherical beads) where the substrate exhibits different spatial wetting patterns. The simulations are designed to resemble experimental measurements of capillary pressure saturation. We show that the correlation length of the wetting patterns influences the temporal evolution of the interface and thus percolation, residual saturation and the work dissipated during the fluid displacement. Our numerical results are in qualitatively good agreement with the experimental data. Besides modeling flow in porous media, our SRD implementation allows us to address various questions of interfacial dynamics, e.g. the formation of capillary bridges between spherical beads or droplets in microfluidic applications, to name only a few.

  16. On polynomial preconditioning for indefinite Hermitian matrices

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.

    1989-01-01

    The minimal residual method combined with polynomial preconditioning is studied for solving large linear systems (Ax = b) with indefinite Hermitian coefficient matrices (A). The standard approach for choosing the polynomial preconditioners leads to preconditioned systems which are positive definite. Here, a different strategy is studied which leaves the preconditioned coefficient matrix indefinite. More precisely, the polynomial preconditioner is designed to cluster the positive, resp. negative, eigenvalues of A around 1, resp. around some negative constant. In particular, it is shown that such indefinite polynomial preconditioners can be obtained as the optimal solutions of a certain two-parameter family of Chebyshev approximation problems. Some basic results are established for these approximation problems and a Remez-type algorithm is sketched for their numerical solution. The problem of selecting the parameters such that the resulting indefinite polynomial preconditioner optimally speeds up the convergence of the minimal residual method is also addressed. An approach is proposed based on the concept of asymptotic convergence factors. Finally, some numerical examples of indefinite polynomial preconditioners are given.
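
    The clustering idea can be made concrete with a crude stand-in for the paper's optimal Chebyshev construction: fit an odd polynomial q(λ) = λ·p(λ²) (so p is the preconditioner polynomial) to ±1 on a model indefinite spectrum by ordinary least squares, and observe that the "preconditioned" eigenvalues q(λ) are far more tightly clustered than λ itself.

```python
import numpy as np

# Model indefinite Hermitian spectrum: eigenvalues in [-2, -0.5] and [0.5, 2].
lam = np.concatenate([np.linspace(-2, -0.5, 50), np.linspace(0.5, 2, 50)])

# Fit an odd polynomial q(l) = c1*l + c2*l**3 + c3*l**5, i.e. q(l) = l*p(l**2)
# with preconditioner p, so that q maps the positive eigenvalues near +1 and
# the negative ones near -1 (least squares here, not the optimal construction).
B = np.column_stack([lam, lam**3, lam**5])
c, *_ = np.linalg.lstsq(B, np.sign(lam), rcond=None)
q = B @ c

pos, neg = q[lam > 0], q[lam < 0]
spread_before = lam[lam > 0].max() - lam[lam > 0].min()   # spread of raw spectrum
spread_after = pos.max() - pos.min()                       # spread after q(.)
```

Note that q keeps the preconditioned spectrum indefinite (positives stay positive, negatives stay negative), which is exactly the strategy the abstract contrasts with the standard positive definite choice.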

  17. Chemotherapy Extravasation Management: 21-Year Experience.

    PubMed

    Onesti, Maria Giuseppina; Carella, Sara; Fioramonti, Paolo; Scuderi, Nicolò

    2017-11-01

    Chemotherapy extravasation may result in serious damage to patients, with irreversible local injuries and disability. Evidence-based standardization of extravasation management is lacking, and many institutions do not practice adequate procedures to prevent the most severe damage. Our aim was to explore the prevention and treatment of extravasation injuries, proposing a standard therapeutic protocol together with a review of the literature. From January 1994 to December 2015, 545 cases were reviewed (age range, 5-87 years; 282 men and 263 women). Our therapeutic protocol consisted of local infiltration of saline solution and topical occlusive applications of corticosteroids. The infiltrations were administered 3 to 6 times a week depending on damage severity. Our protocol allowed us to prevent ulceration in 373 cases. Only 27 patients required surgery (escharectomy, skin graft, regional and free flap). Numerous treatments have been proposed in the literature. The antidotes have been discussed controversially and are not considered standard methods of treatment, especially when polychemotherapy is administered and identification of the responsible drug is not possible. We propose the use of saline solution injection to rapidly dilute the drug, thus reducing its local toxic effects. This method is easy to use and always reproducible, even when the drug is not known or when it is administered in combination with other drugs. It can be performed in an ambulatory regimen and, overall, it represents a standard method.

  18. A GENERAL MASS-CONSERVATIVE NUMERICAL SOLUTION FOR THE UNSATURATED FLOW EQUATION

    EPA Science Inventory

    Numerical approximations based on different forms of the governing partial differential equation can lead to significantly different results for unsaturated flow problems. Numerical solution based on the standard h-based form of Richards equation generally yields poor results, ch...

  19. Experimental testing and numerical simulation on natural composite for aerospace applications

    NASA Astrophysics Data System (ADS)

    Kumar, G. Raj; Vijayanandh, R.; Kumar, M. Senthil; Kumar, S. Sathish

    2018-05-01

    Nowadays polymers are commonly used in various applications, which makes it difficult to avoid their usage even though they cause environmental problems. Natural fibers are the best alternative for overcoming polymer-based environmental issues. Natural fibers play an important role in developing high-performing, fully biodegradable green composites, which will be a key material for solving environmental problems in the future. This paper deals with the analysis of the properties of a natural composite in which banana fiber is combined with epoxy resin, giving special characteristics for aerospace applications. The objective of this paper is to investigate the failure modes and strength of the natural composite using experimental and numerical methods. Test specimens of the natural composite were fabricated as per the ASTM standard and subjected to tensile and compression tests on a Tinius Olsen UTM in order to determine mechanical and physical properties. The reference model was designed in CATIA, and numerical simulation was then carried out in Ansys Workbench 16.2 for the given boundary conditions.

  20. Numerical and experimental research on pentagonal cross-section of the averaging Pitot tube

    NASA Astrophysics Data System (ADS)

    Zhang, Jili; Li, Wei; Liang, Ruobing; Zhao, Tianyi; Liu, Yacheng; Liu, Mingsheng

    2017-07-01

    Averaging Pitot tubes have been widely used in many fields because of their simple structure and stable performance. This paper introduces a new shape of the cross-section of an averaging Pitot tube. Firstly, the structure of the averaging Pitot tube and the distribution of pressure taps are given. Then, a mathematical model of the airflow around it is formulated. After that, a series of numerical simulations are carried out to optimize the geometry of the tube. The distribution of the streamline and pressures around the tube are given. To test its performance, a test platform was constructed in accordance with the relevant national standards and is described in this paper. Curves are provided, linking the values of flow coefficient with the values of Reynolds number. With a maximum deviation of only  ±3%, the results of the flow coefficient obtained from the numerical simulations were in agreement with those obtained from experimental methods. The proposed tube has a stable flow coefficient and favorable metrological characteristics.

  1. Three Dimensional Constraint Effects on the Estimated (Delta)CTOD during the Numerical Simulation of Different Fatigue Threshold Testing Techniques

    NASA Technical Reports Server (NTRS)

    Seshadri, Banavara R.; Smith, Stephen W.

    2007-01-01

    Variation in constraint through the thickness of a specimen affects the cyclic crack-tip-opening displacement (DELTA CTOD). DELTA CTOD is a valuable measure of crack growth behavior, indicating closure development, constraint variations and load history effects. Fatigue loading with a continual load reduction was used to simulate the load history associated with fatigue crack growth threshold measurements. The constraint effect on the estimated DELTA CTOD is studied by carrying out three-dimensional elastic-plastic finite element simulations. The analysis involves numerical simulation of different standard fatigue threshold test schemes to determine how each test scheme affects DELTA CTOD. The American Society for Testing and Materials (ASTM) prescribes standard load reduction procedures for threshold testing using either the constant stress ratio (R) or constant maximum stress intensity (K(sub max)) methods. Different specimen types defined in the standard, namely the compact tension, C(T), and middle cracked tension, M(T), specimens, were used in this simulation. The threshold simulations were conducted with different initial K(sub max) values to study their effect on estimated DELTA CTOD. During each simulation, the DELTA CTOD was estimated at every load increment during the load reduction procedure. Previous numerical simulation results indicate that the constant R load reduction method generates a plastic wake resulting in remote crack closure during unloading. Upon reloading, this remote contact location was observed to remain in contact well after the crack tip was fully open. The final region to open is located at the point at which the load reduction was initiated and at the free surface of the specimen. However, simulations carried out using the constant K(sub max) load reduction procedure did not indicate remote crack closure. 
Previous analysis results using various starting K(sub max) values and different load reduction rates have indicated DELTA CTOD is independent of specimen size. A study of the effect of specimen thickness and geometry on the measured DELTA CTOD for various load reduction procedures and its implication in the estimation of fatigue crack growth threshold values is discussed.

  2. Quantile regression via vector generalized additive models.

    PubMed

    Yee, Thomas W

    2004-07-30

    One of the most popular methods for quantile regression is the LMS method of Cole and Green. The method naturally falls within a penalized likelihood framework, and consequently allows for considerable flexibility because all three parameters may be modelled by cubic smoothing splines. The model is also very understandable: for a given value of the covariate, the LMS method applies a Box-Cox transformation to the response in order to transform it to standard normality; to obtain the quantiles, an inverse Box-Cox transformation is applied to the quantiles of the standard normal distribution. The purposes of this article are three-fold. Firstly, LMS quantile regression is presented within the framework of the class of vector generalized additive models. This confers a number of advantages, such as a unifying theory and estimation process. Secondly, a new LMS method based on the Yeo-Johnson transformation is proposed, which has the advantage that the response is not restricted to be positive. Lastly, this paper describes a software implementation of three LMS quantile regression methods in the S language. This includes the LMS-Yeo-Johnson method, which is estimated efficiently by a new numerical integration scheme. The LMS-Yeo-Johnson method is illustrated by way of a large cross-sectional data set from a New Zealand working population. Copyright 2004 John Wiley & Sons, Ltd.
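
    The inverse Box-Cox step is compact enough to sketch. This is the standard Cole-Green LMS quantile formula at a single covariate value, with illustrative parameter values (not taken from the paper):

```python
import math
from statistics import NormalDist

def lms_quantile(p, L, M, S):
    """Quantile implied by LMS (Cole-Green) parameters at one covariate
    value: apply the inverse Box-Cox transform to the standard-normal
    quantile z_p, i.e. Q(p) = M * (1 + L*S*z_p)**(1/L)."""
    z = NormalDist().inv_cdf(p)
    if abs(L) < 1e-12:                  # L -> 0 limit (lognormal case)
        return M * math.exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)

# The median is recovered for any L, since z_0.5 = 0.
q50 = lms_quantile(0.5, L=0.7, M=22.0, S=0.12)   # -> M = 22.0
q90 = lms_quantile(0.9, L=0.7, M=22.0, S=0.12)
```

In the regression setting L, M and S are themselves smooth functions of the covariate; here they are treated as fixed numbers.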

  3. An optimized knife-edge method for on-orbit MTF estimation of optical sensors using powell parameter fitting

    NASA Astrophysics Data System (ADS)

    Han, Lu; Gao, Kun; Gong, Chen; Zhu, Zhenyu; Guo, Yue

    2017-08-01

    On-orbit Modulation Transfer Function (MTF) is an important indicator for evaluating the performance of the optical remote sensors on a satellite. There are many methods to estimate MTF, such as the pinhole method, the slit method and so on. Among them, the knife-edge method is quite efficient, easy to use and recommended in the ISO12233 standard for whole-frequency MTF curve acquisition. However, the accuracy of the algorithm is significantly affected by the Edge Spread Function (ESF) fitting accuracy, which limits its range of application. In this paper, an optimized knife-edge method using the Powell algorithm is proposed to improve the ESF fitting precision. The Fermi function is the most popular ESF fitting model, yet it is vulnerable to the initial values of its parameters. Considering its simplicity and fast convergence, the Powell algorithm is applied to fit accurate parameters adaptively, with insensitivity to the initial parameters. Numerical simulation results reveal the accuracy and robustness of the optimized algorithm under different SNR, edge direction and leaning angle conditions. Experimental results using images from the camera on the ZY-3 satellite show that this method is more accurate than the standard knife-edge method of ISO12233 in MTF estimation.
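
    The ESF → LSF → MTF pipeline underlying any knife-edge method can be sketched on a synthetic edge. The sketch below is illustrative, not the paper's Powell-fitted algorithm: it differentiates a noiseless Fermi (logistic) ESF and Fourier-transforms the result, for which the analytic MTF is known in closed form.

```python
import numpy as np

def mtf_from_esf(esf, dx):
    """Knife-edge MTF estimate: differentiate the edge spread function
    (ESF) to get the line spread function (LSF), then take the
    magnitude of its Fourier transform, normalised to 1 at f = 0."""
    lsf = np.gradient(esf, dx)
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=dx)
    return freqs, mtf / mtf[0]

# Synthetic edge modelled by a Fermi (logistic) ESF with scale s; the
# analytic MTF of this edge is (2*pi**2*s*f) / sinh(2*pi**2*s*f).
s, dx = 0.5, 0.01
x = np.arange(-20.0, 20.0, dx)
esf = 1.0 / (1.0 + np.exp(-x / s))
freqs, mtf = mtf_from_esf(esf, dx)
```

With noisy real imagery, direct differentiation amplifies noise, which is exactly why a fitted ESF model (the Fermi function in the paper) is used instead.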

  4. Hydroelastic slamming response in the evolution of a flip-through event during shallow-liquid sloshing

    NASA Astrophysics Data System (ADS)

    Lugni, C.; Bardazzi, A.; Faltinsen, O. M.; Graziani, G.

    2014-03-01

    The evolution of a flip-through event [6] upon a vertical, deformable wall during shallow-water sloshing in a 2D tank is analyzed, with specific focus on the role of hydroelasticity. An aluminium plate, whose dimensions are Froude-scaled in order to reproduce the first wet natural frequency associated with the typical structural panel of a Mark III containment system, is used. (The Mark III containment system is a membrane-type tank used in Liquefied Natural Gas (LNG) carriers to contain the LNG. A typical structural panel is composed of two metallic membranes and two independent thermal insulation layers. The first membrane contains the LNG; the second one ensures redundancy in case of leakage.) Such a system is clamped to a fully rigid vertical wall of the tank at the vertical ends while being kept free on its lateral sides. Hence, in a 2D flow approximation, the system can be suitably modelled as a double-clamped Euler beam. The hydroelastic effects are assessed by cross-analyzing the experimental data based on the images recorded by a fast camera, the strain measurements along the deformable panel, and the pressure measurements on the rigid wall below the elastic plate. The same experiments are also carried out by substituting the deformable plate with a fully stiff panel. The pressure transducers are mounted at the same positions as the strain gauges used for the deformable plate. The comparison between the results of the rigid and elastic cases allows us to better define the role of hydroelasticity. The analysis has identified three different regimes characterizing the hydroelastic evolution: a quasi-static deformation of the beam (regime I) precedes a strongly hydroelastic behavior (regime II), for which the added mass effects are relevant; finally, the free-vibration phase (regime III) occurs. 
A hybrid method, combining numerical modelling and experimental data from the tests with the fully rigid plate, is proposed to examine the hydroelastic effects. Within this approach, the measurements provide the experimental loads acting on the rigid plate, while the numerical solution enables a more detailed analysis by giving additional information not available from the experimental tests. In more detail, an Euler beam equation is used to numerically model the plate, with the added-mass contribution estimated in time. In this way the resulting hybrid method accounts for the variation of the added mass associated with the instantaneous wetted length of the beam, estimated from the experimental images. Moreover, the forcing hydrodynamic load is prescribed by using the experimental pressure distribution measured in the rigid case. The experimental data for the elastic beam are compared with the numerical results of the hybrid model and with those of the standard methods used at the design stage. The comparison against the experimental data shows an overall satisfactory prediction by the hybrid model. The maximum peak pressure predicted by the standard methods agrees with the result of the hybrid model only when the added mass effect is considered. However, the standard methods are not able to properly estimate the temporal evolution of the plate deformation.

  5. Interaction Entropy: A New Paradigm for Highly Efficient and Reliable Computation of Protein-Ligand Binding Free Energy.

    PubMed

    Duan, Lili; Liu, Xiao; Zhang, John Z H

    2016-05-04

    Efficient and reliable calculation of protein-ligand binding free energy is a grand challenge in computational biology and is of critical importance in drug design and many other molecular recognition problems. The main challenge lies in the calculation of entropic contribution to protein-ligand binding or interaction systems. In this report, we present a new interaction entropy method which is theoretically rigorous, computationally efficient, and numerically reliable for calculating entropic contribution to free energy in protein-ligand binding and other interaction processes. Drastically different from the widely employed but extremely expensive normal mode method for calculating entropy change in protein-ligand binding, the new method calculates the entropic component (interaction entropy or -TΔS) of the binding free energy directly from molecular dynamics simulation without any extra computational cost. Extensive study of over a dozen randomly selected protein-ligand binding systems demonstrated that this interaction entropy method is both computationally efficient and numerically reliable and is vastly superior to the standard normal mode approach. This interaction entropy paradigm introduces a novel and intuitive conceptual understanding of the entropic effect in protein-ligand binding and other general interaction systems as well as a practical method for highly efficient calculation of this effect.
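
    The central quantity is simple enough to sketch from a trajectory of interaction energies. Below is a minimal sketch of the published interaction-entropy formula, -TΔS = kT ln⟨exp(βΔE_int)⟩ with ΔE_int the fluctuation about the mean, applied to synthetic Gaussian "frames" (not real MD data) where the exact answer is known:

```python
import numpy as np

def interaction_entropy(e_int, kT):
    """Interaction-entropy estimate of -T*dS from a series of
    interaction energies e_int (same units as kT):
        -T*dS = kT * ln < exp(beta * (E - <E>)) >
    averaged over the frames."""
    beta = 1.0 / kT
    de = e_int - e_int.mean()
    return kT * np.log(np.mean(np.exp(beta * de)))

# Synthetic check: for Gaussian fluctuations of standard deviation sig,
# the exact value is sig**2 / (2 * kT).
rng = np.random.default_rng(0)
kT, sig = 0.593, 1.0                # kT in kcal/mol near 298 K
e = rng.normal(0.0, sig, 200000)
tds = interaction_entropy(e, kT)
```

The "no extra computational cost" claim in the abstract follows from the fact that e_int is already produced frame by frame during the simulation.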

  6. Lightning Charge Retrievals: Dimensional Reduction, LDAR Constraints, and a First Comparison w/ LIS Satellite Data

    NASA Technical Reports Server (NTRS)

    Koshak, William; Krider, E. Philip; Murray, Natalie; Boccippio, Dennis

    2007-01-01

A "dimensional reduction" (DR) method is introduced for analyzing lightning field changes whereby the number of unknowns in a discrete two-charge model is reduced from the standard eight to just four. The four unknowns are found by performing a numerical minimization of a chi-squared goodness-of-fit function. At each step of the minimization, an Overdetermined Fixed Matrix (OFM) method is used to immediately retrieve the best "residual source". In this way, all eight parameters are found, yet a numerical search over only four parameters is required. The inversion method is applied to the understanding of lightning charge retrievals. The accuracy of the DR method has been assessed by comparing retrievals with data provided by the Lightning Detection And Ranging (LDAR) instrument. Because lightning effectively deposits charge within thundercloud charge centers and because LDAR traces the geometrical development of the lightning channel with high precision, the LDAR data provide an ideal constraint for finding the best model charge solutions. In particular, LDAR data can be used to help determine both the horizontal and vertical positions of the model charges, thereby eliminating dipole ambiguities. The results of the LDAR-constrained charge retrieval method have been compared to the locations of optical pulses/flash locations detected by the Lightning Imaging Sensor (LIS).
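    The dimensional-reduction idea can be illustrated with a toy problem (not the paper's geometry): search numerically over only the nonlinear parameters (a charge position), and at each step recover the linear parameter (the charge magnitude) by an overdetermined least-squares fit. All station positions, the field-change kernel, and the "true" values below are assumptions for the demo:

```python
import numpy as np

def basis(xs, x0, z0):
    # Vertical field change at ground stations xs from a unit point charge at (x0, z0)
    return z0 / ((xs - x0) ** 2 + z0 ** 2) ** 1.5

xs = np.linspace(-10.0, 10.0, 21)           # station positions (km)
true_x0, true_z0, true_q = 2.0, 6.0, -15.0  # assumed "truth" for the demo
data = true_q * basis(xs, true_x0, true_z0)

best = None
for x0 in np.linspace(-5, 5, 41):           # numerical search: position only
    for z0 in np.linspace(3, 9, 31):
        A = basis(xs, x0, z0)[:, None]
        q, res, *_ = np.linalg.lstsq(A, data, rcond=None)  # linear part solved exactly
        chi2 = np.sum((A @ q - data) ** 2)
        if best is None or chi2 < best[0]:
            best = (chi2, x0, z0, q[0])

chi2, x0, z0, q = best
print(f"recovered x0={x0:.2f}, z0={z0:.2f}, q={q:.2f}")
```

    The grid search covers two parameters, yet all three are recovered, mirroring how the OFM step lets the DR method search four parameters while determining eight.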

  7. Solving the transient water age distribution problem in environmental flow systems

    NASA Astrophysics Data System (ADS)

    Cornaton, F. J.

    2011-12-01

The temporal evolution of groundwater age and its frequency distributions can display important changes as flow regimes vary due to natural changes in climate and hydrologic conditions and/or to human-induced pressures on the resource to satisfy water demand. Since groundwater age is nowadays frequently used to investigate reservoir properties and recharge conditions, special attention needs to be paid to the way this property is characterized, whether by isotopic methods, multiple-tracer techniques, or mathematical modelling. Steady-state age frequency distributions can be modelled using standard numerical techniques, since the general balance equation describing age transport under steady-state flow conditions is exactly equivalent to a standard advection-dispersion equation. The time-dependent problem, however, is described by an extended transport operator that incorporates an additional coordinate for water age. The consequence is that numerical solutions can hardly be achieved, especially for real 3-D applications over the large time periods of interest. The absence of any robust method has thus left the quantitative hydrogeology community dodging the issue of transience. Novel algorithms for solving the age distribution problem under time-varying flow regimes are presented and, for some specific configurations, extended to the problem of generalized component exposure time. The solution strategy is based on combining the Laplace transform technique, applied to the age (or exposure time) coordinate, with standard time-marching schemes. The method is well suited for groundwater problems with possible density dependency of fluid flow (e.g. coupled flow and heat/salt concentration problems), but is also relevant to the homogeneous (compressible) flow problem.
The approach is validated using 1-D analytical solutions and exercised on some demonstration problems that are relevant to topical issues in groundwater age, including analysis of transfer times in the vadose zone, aquifer-aquitard interactions and the induction of transient age distributions when a well pump is started.
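    The abstract's starting point, that the steady-state mean-age equation is a standard advection-dispersion problem, is easy to demonstrate. The sketch below solves u a' - D a'' = 1 (unit "ageing" source) by finite differences on an assumed 1-D domain; with a zero-age inflow boundary and a purely advective ageing condition at the outlet, the solution is a(x) = x/u, which the standard solver recovers:

```python
import numpy as np

# Steady-state mean groundwater age a(x): u a' - D a'' = 1 (parameters assumed)
u, D, L, N = 1.0, 0.5, 10.0, 200
h = L / N
x = np.linspace(0.0, L, N + 1)

A = np.zeros((N + 1, N + 1))
b = np.ones(N + 1)
for i in range(1, N):                          # central differences in the interior
    A[i, i - 1] = -u / (2 * h) - D / h ** 2
    A[i, i] = 2 * D / h ** 2
    A[i, i + 1] = u / (2 * h) - D / h ** 2
A[0, 0], b[0] = 1.0, 0.0                       # inflow: age zero
A[N, N], A[N, N - 1], b[N] = 1 / h, -1 / h, 1 / u  # outlet: purely advective ageing

a = np.linalg.solve(A, b)
print(f"max |a - x/u| = {np.abs(a - x / u).max():.2e}")
```

    The transient problem treated in the paper is much harder precisely because age becomes an extra coordinate rather than a single field like a(x).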

  8. Transformation-cost time-series method for analyzing irregularly sampled data

    NASA Astrophysics Data System (ADS)

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G. Baris; Kurths, Jürgen

    2015-06-01

Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequent analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degrading the quality of the data set. Instead of using interpolation, we consider time-series segments and determine how close they are to each other by computing the cost needed to transform one segment into the following one. Using a limited set of operations—with associated costs—to transform the time-series segments, we determine a new time series: our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples such as the logistic map and the Rössler oscillator. The numerical data allow us to test the stability of our method against noise and for different irregular samplings. In addition, we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo, which is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.
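    A deliberately simplified sketch of the idea (not the authors' exact cost model): slide a window over an irregularly sampled series, score how costly it is to transform each segment into the next (shifting matched points, creating or deleting the rest), and collect those scores into a regularly sampled cost series. The greedy in-order matching and the unit cost `lam` are assumptions of this toy version:

```python
import numpy as np

def segment_cost(seg_a, seg_b, lam=1.0):
    """Cost to transform segment a into segment b (hypothetical simplified model).

    Each segment is a list of (time, value) pairs. Points are matched greedily in
    temporal order; unmatched points are created/deleted at fixed cost lam.
    """
    n = min(len(seg_a), len(seg_b))
    cost = lam * abs(len(seg_a) - len(seg_b))      # creations / deletions
    for (ta, xa), (tb, xb) in zip(seg_a[:n], seg_b[:n]):
        cost += abs(ta - tb) + abs(xa - xb)        # time and amplitude shifts
    return cost

rng = np.random.default_rng(1)
t = np.cumsum(rng.exponential(0.5, 200))           # irregular sampling times
x = np.sin(t) + 0.1 * rng.normal(size=t.size)

width = 5.0
edges = np.arange(t[0], t[-1], width)
segments = [
    [(ti - lo, xi) for ti, xi in zip(t, x) if lo <= ti < lo + width]
    for lo in edges
]
# The regularly sampled transformation-cost series, ready for standard methods
cost_series = [segment_cost(a, b) for a, b in zip(segments, segments[1:])]
print(len(cost_series), f"{np.mean(cost_series):.2f}")
```

    The key property is visible even in this toy: the output series has one value per window, regardless of how unevenly the raw points fall.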

  9. Transformation-cost time-series method for analyzing irregularly sampled data.

    PubMed

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G Baris; Kurths, Jürgen

    2015-06-01

Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequent analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degrading the quality of the data set. Instead of using interpolation, we consider time-series segments and determine how close they are to each other by computing the cost needed to transform one segment into the following one. Using a limited set of operations—with associated costs—to transform the time-series segments, we determine a new time series: our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples such as the logistic map and the Rössler oscillator. The numerical data allow us to test the stability of our method against noise and for different irregular samplings. In addition, we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo, which is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.

  10. Numerical investigation of airflow in an idealised human extra-thoracic airway: a comparison study

    PubMed Central

    Chen, Jie; Gutmark, Ephraim

    2013-01-01

The large eddy simulation (LES) technique is employed to numerically investigate the airflow through an idealised human extra-thoracic airway under different breathing conditions: 10 l/min, 30 l/min, and 120 l/min. The computational results are compared with single and cross hot-wire measurements, and with the time-averaged flow field computed by standard k-ω and k-ω-SST Reynolds-averaged Navier-Stokes (RANS) models and the Lattice-Boltzmann method (LBM). The LES results are also compared with the root-mean-square (RMS) flow field computed by the Reynolds stress model (RSM) and the LBM. LES generally gives a better prediction of the time-averaged flow field than the RANS models and the LBM. LES also provides a better estimate of the RMS flow field than both the RSM and the LBM. PMID:23619907

  11. Databases and coordinated research projects at the IAEA on atomic processes in plasmas

    NASA Astrophysics Data System (ADS)

    Braams, Bastiaan J.; Chung, Hyun-Kyung

    2012-05-01

    The Atomic and Molecular Data Unit at the IAEA works with a network of national data centres to encourage and coordinate production and dissemination of fundamental data for atomic, molecular and plasma-material interaction (A+M/PMI) processes that are relevant to the realization of fusion energy. The Unit maintains numerical and bibliographical databases and has started a Wiki-style knowledge base. The Unit also contributes to A+M database interface standards and provides a search engine that offers a common interface to multiple numerical A+M/PMI databases. Coordinated Research Projects (CRPs) bring together fusion energy researchers and atomic, molecular and surface physicists for joint work towards the development of new data and new methods. The databases and current CRPs on A+M/PMI processes are briefly described here.

  12. Some issues and subtleties in numerical simulation of X-ray FEL's

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fawley, William M.

Part of the overall design effort for x-ray FELs such as the LCLS and TESLA projects has involved extensive use of particle simulation codes to predict their output performance and underlying sensitivity to various input parameters (e.g. electron beam emittance). This paper discusses some of the numerical issues that must be addressed by simulation codes in this regime. We first give a brief overview of the standard approximations and simulation methods adopted by time-dependent (i.e. polychromatic) codes such as GINGER, GENESIS, and FAST3D, including the effects of temporal discretization and the resultant limited spectral bandpass, and then discuss the accuracies and inaccuracies of these codes in predicting incoherent spontaneous emission (i.e. the extremely low gain regime).

  13. Validation by numerical simulation of the behaviour of protective structures of machinery cabins subjected to standardized shocks

    NASA Astrophysics Data System (ADS)

    Dumitrache, P.; Goanţă, A. M.

    2017-08-01

The ability of a cabin to protect the operator when the machine rolls over, or when the cab is struck by falling objects, is one of the most important performance criteria that machines and mobile equipment must satisfy. The experimental method provides the most accurate information on the behaviour of protective structures, but generates high costs due to the experimental installations and to structures that may be destroyed during testing. In these circumstances, numerical simulation of the actual problem (a mechanical shock applied to a load-bearing structure) is a perfectly viable alternative, given that current hardware and software performance provides the support needed to obtain results with an acceptable level of accuracy. In this context, the paper proposes using FEA platforms for virtual testing of actual cabin strength structures, based on finite element models derived from 3D models generated in CAD environments. In addition to the economic advantage mentioned above, and although results obtained by finite element simulation are affected by a number of simplifying assumptions, adequate modelling of the phenomenon can successfully support the design of structures that meet the safety performance criteria imposed by current standards. The first section of the paper presents the general context of the safety performance requirements imposed by current standards on cabin strength structures. The following section is dedicated to the peculiarities of finite element modelling of structures subjected to shock loading. The final section presents a case study and future objectives.

  14. Numerical study on the Welander oscillatory natural circulation problem using high-order numerical methods

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Kim, Seung Jun

    2016-11-16

In this study, the classical Welander oscillatory natural circulation problem is investigated using high-order numerical methods. As originally studied by Welander, the fluid motion in a differentially heated fluid loop can exhibit stable, weakly unstable, and strongly unstable modes, and a theoretical stability map was derived in the original stability analysis. Numerical results obtained in this paper show very good agreement with Welander's theoretical derivations. For stable cases, numerical results from both the high-order and low-order numerical methods agree well with the analytically derived non-dimensional flow rate, with the high-order methods giving much smaller numerical errors than the low-order methods. For stability analysis, the high-order numerical methods predict the stability map correctly, while the low-order numerical methods fail to do so: all theoretically unstable cases are predicted by the low-order methods to be stable. The results obtained in this paper are strong evidence of the benefits of using high-order numerical methods over low-order ones when simulating natural circulation phenomena, which have gained increasing interest in many future nuclear reactor designs.
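    The high-order versus low-order contrast can be seen on a much simpler oscillatory problem than the Welander loop. The sketch below (an assumption-laden toy, not the paper's solver) integrates a harmonic oscillator with explicit Euler (first order) and classical RK4 (fourth order) at the same step size; Euler artificially amplifies the oscillation while RK4 tracks it closely:

```python
import numpy as np

def f(y):
    return np.array([y[1], -y[0]])     # simple harmonic oscillator y'' = -y

def euler_step(y, h):
    return y + h * f(y)

def rk4_step(y, h):
    k1 = f(y); k2 = f(y + h / 2 * k1); k3 = f(y + h / 2 * k2); k4 = f(y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

h, T = 0.05, 20.0
ye = yr = np.array([1.0, 0.0])
for _ in range(int(round(T / h))):
    ye, yr = euler_step(ye, h), rk4_step(yr, h)

exact = np.array([np.cos(T), -np.sin(T)])
euler_err = np.abs(ye - exact).max()
rk4_err = np.abs(yr - exact).max()
print(f"Euler error {euler_err:.2e}, RK4 error {rk4_err:.2e}")
```

    The spurious growth of the Euler solution is the one-dimensional analogue of a low-order scheme misclassifying an unstable circulation mode as stable (or vice versa).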

  15. Assessment of cleaning and disinfection in Salmonella-contaminated poultry layer houses using qualitative and semi-quantitative culture techniques.

    PubMed

    Wales, Andrew; Breslin, Mark; Davies, Robert

    2006-09-10

Salmonella infection of laying flocks in the UK is predominantly a problem of the persistent contamination of layer houses and associated wildlife vectors by Salmonella Enteritidis. Methods for its control and elimination include effective cleaning and disinfection of layer houses between flocks, and it is important to be able to measure the success of such decontamination. A method for the environmental detection and semi-quantitative enumeration of salmonellae was used and compared with a standard qualitative method in 12 Salmonella-contaminated caged layer houses before and after cleaning and disinfection. The quantitative technique proved to have sensitivity comparable to the standard method, and additionally provided insights into the numerical Salmonella challenge that replacement flocks would encounter. Elimination of S. Enteritidis was not achieved in any of the premises examined: substantial reductions in the prevalence and numbers of salmonellae were demonstrated in some houses, whilst in others an increase in contamination was observed after cleaning and disinfection. Particular problems with feeders and wildlife vectors were highlighted. The use of a quantitative method assisted the identification of problem areas, such as those with a high initial bacterial load or those experiencing only a modest reduction in bacterial count following decontamination.

  16. A new unconditionally stable and consistent quasi-analytical in-stream water quality solution scheme for CSTR-based water quality simulators

    NASA Astrophysics Data System (ADS)

    Woldegiorgis, Befekadu Taddesse; van Griensven, Ann; Pereira, Fernando; Bauwens, Willy

    2017-06-01

    Most common numerical solutions used in CSTR-based in-stream water quality simulators are susceptible to instabilities and/or solution inconsistencies. Usually, they cope with instability problems by adopting computationally expensive small time steps. However, some simulators use fixed computation time steps and hence do not have the flexibility to do so. This paper presents a novel quasi-analytical solution for CSTR-based water quality simulators of an unsteady system. The robustness of the new method is compared with the commonly used fourth-order Runge-Kutta methods, the Euler method and three versions of the SWAT model (SWAT2012, SWAT-TCEQ, and ESWAT). The performance of each method is tested for different hypothetical experiments. Besides the hypothetical data, a real case study is used for comparison. The growth factors we derived as stability measures for the different methods and the R-factor—considered as a consistency measure—turned out to be very useful for determining the most robust method. The new method outperformed all the numerical methods used in the hypothetical comparisons. The application for the Zenne River (Belgium) shows that the new method provides stable and consistent BOD simulations whereas the SWAT2012 model is shown to be unstable for the standard daily computation time step. The new method unconditionally simulates robust solutions. Therefore, it is a reliable scheme for CSTR-based water quality simulators that use first-order reaction formulations.
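    The stability issue the abstract describes is easy to reproduce on a linear CSTR. The sketch below (a generic illustration, not the paper's exact scheme) advances dC/dt = (Q/V)(Cin - C) - kC with the exact exponential solution, which is stable for any step size, while explicit Euler diverges once λ·Δt exceeds 2. All rate constants are assumed demo values:

```python
import math

# Linear CSTR:  dC/dt = (Q/V) * (Cin - C) - k * C,  lam = Q/V + k
Q_over_V, k, Cin = 2.0, 1.5, 10.0       # assumed rates (1/day) and inflow conc.
lam = Q_over_V + k
Css = Q_over_V * Cin / lam              # steady-state concentration

def exact_step(C, dt):
    # Quasi-analytical update: exact for piecewise-constant inputs, any dt
    return Css + (C - Css) * math.exp(-lam * dt)

def euler_step(C, dt):
    return C + dt * (Q_over_V * (Cin - C) - k * C)

dt, C_exact, C_euler = 1.0, 0.0, 0.0    # daily step: lam * dt = 3.5 > 2
for _ in range(30):
    C_exact, C_euler = exact_step(C_exact, dt), euler_step(C_euler, dt)

print(f"exact-update C = {C_exact:.3f} (steady state {Css:.3f}), Euler C = {C_euler:.1f}")
```

    This is the situation faced by simulators with fixed computation time steps: they cannot shrink Δt to restore stability, so an unconditionally stable update rule is the natural remedy.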

  17. Stochastic study of solute transport in a nonstationary medium.

    PubMed

    Hu, Bill X

    2006-01-01

A Lagrangian stochastic approach is applied to develop a method of moments for solute transport in a physically and chemically nonstationary medium. Stochastic governing equations for the mean solute flux and the solute covariance are analytically obtained to first-order accuracy in the log-conductivity and/or chemical sorption variances and solved numerically using the finite-difference method. The developed method, the numerical method of moments (NMM), is used to predict radionuclide solute transport processes in the saturated zone below the Yucca Mountain project area. The mean, variance, and upper bound of the radionuclide mass flux through a control plane 5 km downstream of the footprint of the repository are calculated. According to their chemical sorption capacities, the various radionuclides are grouped as nonreactive, weakly sorbing, and strongly sorbing chemicals. The NMM is used to study their transport processes and the factors that influence them. To verify the method of moments, a Monte Carlo simulation is conducted for nonreactive chemical transport. Results indicate that the two methods are consistent, but the NMM is computationally more efficient than the Monte Carlo method. This study adds to the ongoing debate in the literature on the effect of heterogeneity on solute transport prediction, especially on prediction uncertainty, by showing that the standard deviation of the solute flux is larger than the mean solute flux even when the heterogeneity of the hydraulic conductivity within each geological layer is mild. This study provides a method that may become an efficient calculation tool for many environmental projects.

  18. On the computation and updating of the modified Cholesky decomposition of a covariance matrix

    NASA Technical Reports Server (NTRS)

    Vanrooy, D. L.

    1976-01-01

Methods for obtaining and updating the modified Cholesky decomposition (MCD) of a covariance matrix, for the particular case when one is given only the original data, are described. These methods are: the standard method of forming the covariance matrix K and then solving for the MCD factors L and D (where K = LDL^T); a method based on Householder reflections; and lastly, a method employing the composite-t algorithm. For many cases in the analysis of remotely sensed data, the composite-t method is the superior method despite being the slowest, since (1) the relative amount of time spent computing MCDs is often quite small, (2) its stability properties are the best of the three, and (3) it affords an efficient and numerically stable procedure for updating the MCD. The properties of these methods are discussed and FORTRAN programs implementing the algorithms are listed.
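    The "standard method" of the abstract can be sketched directly: form K from the data, then factor K = LDL^T with L unit lower triangular and D diagonal. This is the textbook LDL^T recurrence, shown here in Python rather than the paper's FORTRAN, on synthetic data:

```python
import numpy as np

def modified_cholesky(K):
    # K = L * D * L^T with L unit lower triangular, d the diagonal of D
    n = K.shape[0]
    L, d = np.eye(n), np.zeros(n)
    for j in range(n):
        d[j] = K[j, j] - np.sum(L[j, :j] ** 2 * d[:j])
        for i in range(j + 1, n):
            L[i, j] = (K[i, j] - np.sum(L[i, :j] * L[j, :j] * d[:j])) / d[j]
    return L, d

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))           # 100 observations, 4 variables
K = np.cov(X, rowvar=False)             # the "standard method": form K first

L, d = modified_cholesky(K)
err = np.abs(L @ np.diag(d) @ L.T - K).max()
print(f"reconstruction error {err:.2e}")
```

    The Householder and composite-t alternatives discussed in the record avoid forming K explicitly, which is where their superior numerical stability comes from.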

  19. A novel approach to evaluation of pest insect abundance in the presence of noise.

    PubMed

    Embleton, Nina; Petrovskaya, Natalia

    2014-03-01

    Evaluation of pest abundance is an important task of integrated pest management. It has recently been shown that evaluation of pest population size from discrete sampling data can be done by using the ideas of numerical integration. Numerical integration of the pest population density function is a computational technique that readily gives us an estimate of the pest population size, where the accuracy of the estimate depends on the number of traps installed in the agricultural field to collect the data. However, in a standard mathematical problem of numerical integration, it is assumed that the data are precise, so that the random error is zero when the data are collected. This assumption does not hold in ecological applications. An inherent random error is often present in field measurements, and therefore it may strongly affect the accuracy of evaluation. In our paper, we offer a novel approach to evaluate the pest insect population size under the assumption that the data about the pest population include a random error. The evaluation is not based on statistical methods but is done using a spatially discrete method of numerical integration where the data obtained by trapping as in pest insect monitoring are converted to values of the population density. It will be discussed in the paper how the accuracy of evaluation differs from the case where the same evaluation method is employed to handle precise data. We also consider how the accuracy of the pest insect abundance evaluation can be affected by noise when the data available from trapping are sparse. In particular, we show that, contrary to intuitive expectations, noise does not have any considerable impact on the accuracy of evaluation when the number of traps is small as is conventional in ecological applications.
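    A minimal numerical illustration of the setup (with an assumed density function and noise level, not the paper's field data): trap counts give the density at a few locations, possibly corrupted by relative measurement noise, and the population size is estimated by the trapezoid rule:

```python
import numpy as np

rng = np.random.default_rng(42)
density = lambda x: 200 * np.exp(-((x - 0.4) ** 2) / 0.05)   # hypothetical density
trap = lambda y, x: float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))  # trapezoid rule

xf = np.linspace(0, 1, 10001)
true_size = trap(density(xf), xf)       # fine-grid reference value

n_traps = 9                             # a small trap grid, as in field practice
x = np.linspace(0, 1, n_traps)
clean = density(x)
noisy = clean * (1 + 0.2 * rng.normal(size=n_traps))         # 20% relative noise

est_clean = trap(clean, x)
est_noisy = trap(noisy, x)
print(f"true {true_size:.0f}, clean estimate {est_clean:.0f}, noisy estimate {est_noisy:.0f}")
```

    With few traps the integration error from the coarse grid already dominates, which is consistent with the paper's finding that noise adds little further degradation in the sparse-data regime.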

  20. On a framework for generating PoD curves assisted by numerical simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Subair, S. Mohamed; Agrawal, Shweta; Balasubramaniam, Krishnan

    2015-03-31

    The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry where inspection qualification is very important. The conventional experimental means of generating PoD curves though, can be expensive, requiring large data sets (covering defects and test conditions), and equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here we develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process including codes, standards, distribution of defect parameters and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be more complex or sparse to justify this. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.

  1. On a framework for generating PoD curves assisted by numerical simulations

    NASA Astrophysics Data System (ADS)

    Subair, S. Mohamed; Agrawal, Shweta; Balasubramaniam, Krishnan; Rajagopal, Prabhu; Kumar, Anish; Rao, Purnachandra B.; Tamanna, Jayakumar

    2015-03-01

    The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry where inspection qualification is very important. The conventional experimental means of generating PoD curves though, can be expensive, requiring large data sets (covering defects and test conditions), and equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here we develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process including codes, standards, distribution of defect parameters and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be more complex or sparse to justify this. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.

  2. Simulation of quasi-static hydraulic fracture propagation in porous media with XFEM

    NASA Astrophysics Data System (ADS)

    Juan-Lien Ramirez, Alina; Neuweiler, Insa; Löhnert, Stefan

    2015-04-01

Hydraulic fracturing is the injection of a fracking fluid at high pressure into the underground. Its goal is to create and expand fracture networks to increase the rock permeability. It is a technique used, for example, in oil and gas recovery and in geothermal energy extraction, since higher rock permeability improves production. Many physical processes take place during fracking: rock deformation, fluid flow within the fractures, and flow into and through the porous rock. All these processes are strongly coupled, which makes their numerical simulation rather challenging. We present a 2D numerical model that simulates the hydraulic propagation of an embedded fracture quasi-statically in a poroelastic, fully saturated material. Fluid flow within the porous rock is described by Darcy's law and the flow within the fracture is approximated by a parallel plate model. Additionally, the effect of leak-off is taken into consideration. The solid component of the porous medium is assumed to be linear elastic, and the propagation criteria are given by the energy release rate and the stress intensity factors [1]. The numerical method used for the spatial discretization is the eXtended Finite Element Method (XFEM) [2]. It is based on the standard Finite Element Method, but introduces additional degrees of freedom and enrichment functions to describe discontinuities locally in a system. Through them, the geometry of the discontinuity (e.g. a fracture) becomes independent of the mesh, allowing it to move freely through the domain without a mesh-adaptation step. With this numerical model we are able to simulate hydraulic fracture propagation with different initial fracture geometries and material parameters. Results from these simulations will also be presented. References [1] D. Gross and T. Seelig. Fracture Mechanics with an Introduction to Micromechanics. Springer, 2nd edition, (2011) [2] T. Belytschko and T. Black. 
Elastic crack growth in finite elements with minimal remeshing. Int. J. Numer. Meth. Engng. 45, 601-620, (1999)

  3. Discontinuous Galerkin Method with Numerical Roe Flux for Spherical Shallow Water Equations

    NASA Astrophysics Data System (ADS)

    Yi, T.; Choi, S.; Kang, S.

    2013-12-01

In developing the dynamic core of a numerical weather prediction model with the discontinuous Galerkin method, the numerical flux at the boundaries of grid elements plays a vital role, since it preserves the local conservation properties and has a significant impact on the accuracy and stability of the numerical solutions. For these reasons, we developed a numerical Roe flux based on an approximate Riemann problem for the spherical shallow water equations in Cartesian coordinates [1] to investigate its stability and accuracy. To compare its performance with a counterpart flux, we used the Lax-Friedrichs flux, which has been used in many dynamic cores such as NUMA [1], CAM-DG [2] and MCore [3] because of its simplicity. The Lax-Friedrichs flux is implemented as a flux difference between the left and right states plus the maximum characteristic wave speed across the element boundaries. It has been shown that the Lax-Friedrichs flux with the finite volume method is more dissipative and less stable than other numerical fluxes such as HLLC, AUSM+ and Roe. The Roe flux implemented in this study is based on the decomposition of the flux difference over the element boundaries, where the nonlinear equations are linearized. It is rarely used in dynamic cores due to its complexity and thus its computational expense. To compare the stability and accuracy of the Roe flux with the Lax-Friedrichs flux, two- and three-dimensional test cases are performed on a plane and a cubed sphere, respectively, with various numbers of elements and polynomial orders. For the two-dimensional case, a Gaussian bell is simulated on the plane with two different numbers of elements at fixed polynomial orders. In the three-dimensional cases on the cubed sphere, we performed the test cases of a zonal flow over an isolated mountain and a Rossby-Haurwitz wave, whose initial conditions are the same as those of Williamson [4]. 
This study shows that the Roe flux with the discontinuous Galerkin method is less dissipative and has stronger numerical stability than the Lax-Friedrichs flux. References: 1. 2002, Giraldo, F.X., Hesthaven, J.S. and Warburton, T., "Nodal High-Order Discontinuous Galerkin Methods for the Spherical Shallow Water Equations," Journal of Computational Physics, Vol. 181, pp. 499-525. 2. 2005, Nair, R.D., Thomas, S.J. and Loft, R.D., "A Discontinuous Galerkin Transport Scheme on the Cubed Sphere," Monthly Weather Review, Vol. 133, pp. 814-828. 3. 2010, Ullrich, P.A., Jablonowski, C. and van Leer, B., "High-Order Finite-Volume Methods for the Shallow-Water Equations on the Sphere," Journal of Computational Physics, Vol. 229, pp. 6104-6134. 4. 1992, Williamson, D.L., Drake, J.B., Hack, J., Jacob, R. and Swarztrauber, P.N., "A Standard Test Set for Numerical Approximations to the Shallow Water Equations in Spherical Geometry," Journal of Computational Physics, Vol. 102, pp. 211-224.
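    The dissipation contrast between the two fluxes can be reproduced in a far simpler setting than a DG shallow-water core. The finite-volume toy below advects a Gaussian profile once around a periodic domain; for linear advection the Roe flux reduces to simple upwinding, while the classical Lax-Friedrichs flux adds extra dissipation and visibly flattens the peak (all grid and CFL choices are assumptions of the demo):

```python
import numpy as np

# Linear advection u_t + a u_x = 0 on a periodic domain, finite volume form
a, N, cfl = 1.0, 200, 0.5
x = (np.arange(N) + 0.5) / N
dx = 1.0 / N
dt = cfl * dx / a
u0 = np.exp(-200 * (x - 0.5) ** 2)

def step(u, flux):
    ul, ur = u, np.roll(u, -1)                 # left/right states at each interface
    F = flux(ul, ur)
    return u - dt / dx * (F - np.roll(F, 1))

roe = lambda ul, ur: a * ul                    # upwind (a > 0): the Roe flux here
lf = lambda ul, ur: 0.5 * (a * ul + a * ur) - 0.5 * (dx / dt) * (ur - ul)

u_roe, u_lf = u0.copy(), u0.copy()
for _ in range(int(round(1.0 / dt))):          # one full period
    u_roe, u_lf = step(u_roe, roe), step(u_lf, lf)

print(f"peak after one period: Roe/upwind {u_roe.max():.3f}, Lax-Friedrichs {u_lf.max():.3f}")
```

    In a DG scheme the same qualitative ranking holds at the element interfaces, which is what drives the accuracy difference the study reports.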

  4. An adiabatic linearized path integral approach for quantum time-correlation functions II: a cumulant expansion method for improving convergence.

    PubMed

    Causo, Maria Serena; Ciccotti, Giovanni; Bonella, Sara; Vuilleumier, Rodolphe

    2006-08-17

    Linearized mixed quantum-classical simulations are a promising approach for calculating time-correlation functions. At the moment, however, they suffer from some numerical problems that may compromise their efficiency and reliability in applications to realistic condensed-phase systems. In this paper, we present a method that improves upon the convergence properties of the standard algorithm for linearized calculations by implementing a cumulant expansion of the relevant averages. The effectiveness of the new approach is tested by applying it to the challenging computation of the diffusion of an excess electron in a metal-molten salt solution.
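    The convergence trick can be shown schematically (this is not the paper's full linearized dynamics, just the statistical idea): for a fluctuating exponent W, the direct estimator of ⟨exp(W)⟩ converges slowly because it is dominated by rare large samples, while the second-order cumulant estimate exp(⟨W⟩ + Var(W)/2), exact for Gaussian W, uses only low moments and is far less noisy:

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma, n = 0.0, 2.0, 2000
exact = np.exp(mu + sigma ** 2 / 2)          # exact <exp(W)> for Gaussian W

W = rng.normal(mu, sigma, size=n)
direct = np.exp(W).mean()                    # direct exponential average: noisy
cumulant = np.exp(W.mean() + W.var() / 2)    # second-order cumulant estimate

print(f"exact {exact:.2f}, direct {direct:.2f}, cumulant {cumulant:.2f}")
```

    Rerunning with different seeds shows the direct estimate scattering widely around the exact value while the cumulant estimate stays close, which is the convergence improvement the abstract claims for the linearized averages.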

  5. Numerical time evolution of ETH spin chains by means of matrix product density operators

    NASA Astrophysics Data System (ADS)

    White, Christopher; Zaletel, Michael; Mong, Roger; Refael, Gil

    We introduce a method for approximating density operators of 1D systems that, when combined with a standard framework for time evolution (TEBD), makes possible simulation of the dynamics of strongly thermalizing systems to arbitrary times. We demonstrate that the method works on both near-equilibrium initial states (Gibbs states with spatially varying temperatures) and far-from-equilibrium initial states, including quenches across phase transitions and pure states. This work was supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1144469 and by the Caltech IQIM, an NSF Physics Frontiers Center with support of the Gordon and Betty Moore Foundation.

  6. An Effective Hybrid Firefly Algorithm with Harmony Search for Global Numerical Optimization

    PubMed Central

    Guo, Lihong; Wang, Gai-Ge; Wang, Heqi; Wang, Dinan

    2013-01-01

    A hybrid metaheuristic approach combining harmony search (HS) and the firefly algorithm (FA), namely HS/FA, is proposed for function optimization. In HS/FA, the exploration of HS and the exploitation of FA are fully exerted, so HS/FA has a faster convergence speed than HS and FA. Also, a top-fireflies scheme is introduced to reduce running time, and HS is utilized to mutate between fireflies when updating them. The HS/FA method is verified on various benchmarks. The experiments show that HS/FA performs better than the standard FA and eight other optimization methods. PMID:24348137
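    A minimal sketch of the standard FA position update that HS/FA builds on (a firefly moving toward a brighter neighbor); the parameter names β0, γ, α are the conventional ones and this simplified form is an assumption for illustration, not the paper's hybrid algorithm:

```python
import numpy as np

def firefly_move(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
    """Move firefly i toward a brighter firefly j (standard FA rule).
    Attractiveness beta decays with squared distance r^2; a small random
    perturbation of scale alpha maintains exploration."""
    rng = rng or np.random.default_rng(0)
    r2 = np.sum((x_i - x_j) ** 2)
    beta = beta0 * np.exp(-gamma * r2)  # attractiveness at distance r
    return x_i + beta * (x_j - x_i) + alpha * (rng.random(x_i.size) - 0.5)
```

    With γ = 0 and α = 0 the move lands exactly on the brighter firefly; with γ > 0 the attraction weakens with distance, which is what the top-fireflies scheme exploits to cut running time.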

  7. A new exact and more powerful unconditional test of no treatment effect from binary matched pairs.

    PubMed

    Lloyd, Chris J

    2008-09-01

    We consider the problem of testing for a difference in the probability of success from matched binary pairs. Starting with three standard inexact tests, the nuisance parameter is first estimated and the residual dependence is then eliminated by maximization, producing what I call an E+M P-value. The E+M P-value based on McNemar's statistic is shown numerically to dominate previous suggestions, including the partially maximized P-values described in Berger and Sidik (2003, Statistical Methods in Medical Research 12, 91-108). The latter method, however, may have computational advantages for large samples.
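    For orientation, the classical exact *conditional* McNemar test (condition on the discordant count; binomial null) can be sketched as below; note this is the textbook baseline, not the unconditional E+M procedure the paper proposes:

```python
from math import comb

def mcnemar_exact_conditional(b, c):
    """Two-sided exact conditional McNemar p-value.

    Condition on the number of discordant pairs n = b + c; under the null
    each discordant pair is of the first type with probability 1/2, so the
    smaller discordant count follows Binomial(n, 1/2)."""
    n = b + c
    tail = sum(comb(n, k) for k in range(0, min(b, c) + 1)) / 2.0 ** n
    return min(1.0, 2.0 * tail)
```

    The E+M approach instead treats the nuisance probability of discordance as unknown, estimates it, and then maximizes the residual dependence, which is why it can be strictly more powerful than this conditional test.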

  8. A finite-volume Eulerian-Lagrangian Localized Adjoint Method for solution of the advection-dispersion equation

    USGS Publications Warehouse

    Healy, R.W.; Russell, T.F.

    1993-01-01

    A new mass-conservative method for solution of the one-dimensional advection-dispersion equation is derived and discussed. Test results demonstrate that the finite-volume Eulerian-Lagrangian localized adjoint method (FVELLAM) outperforms standard finite-difference methods, in terms of accuracy and efficiency, for solute transport problems that are dominated by advection. For dispersion-dominated problems, the performance of the method is similar to that of standard methods. Like previous ELLAM formulations, FVELLAM systematically conserves mass globally with all types of boundary conditions. FVELLAM differs from other ELLAM approaches in that integrated finite differences, instead of finite elements, are used to approximate the governing equation. This approach, in conjunction with a forward tracking scheme, greatly facilitates mass conservation. The mass storage integral is numerically evaluated at the current time level, and quadrature points are then tracked forward in time to the next level. Forward tracking permits straightforward treatment of inflow boundaries, thus avoiding the inherent problem in backtracking, as used by most characteristic methods, of characteristic lines intersecting inflow boundaries. FVELLAM extends previous ELLAM results by obtaining mass conservation locally on Lagrangian space-time elements. Details of the integration, tracking, and boundary algorithms are presented. Test results are given for problems in Cartesian and radial coordinates.
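    The forward-tracking idea can be illustrated in its simplest setting, pure 1D periodic advection with no dispersion; this is a sketch of characteristic tracking only (with the grid and interpolation chosen for illustration), not the FVELLAM integration itself:

```python
import numpy as np

def advect_forward_tracking(x, u, velocity, dt):
    """Characteristic-tracking step for pure 1D advection c_t + v c_x = 0
    on a uniform periodic grid: nodal values are carried forward along the
    characteristics x -> x + v*dt, then interpolated back onto the grid."""
    L = x[-1] - x[0] + (x[1] - x[0])            # periodic domain length
    x_new = (x + velocity * dt - x[0]) % L + x[0]
    order = np.argsort(x_new)                    # restore monotone abscissae
    return np.interp(x, x_new[order], u[order], period=L)
```

    Shifting by exactly one grid spacing reproduces a circular shift of the nodal values, and a constant field is preserved, which is the 1D analogue of the mass-conservation property emphasized above.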

  9. Problems and methods of calculating the Legendre functions of arbitrary degree and order

    NASA Astrophysics Data System (ADS)

    Novikova, Elena; Dmitrenko, Alexander

    2016-12-01

    The known standard recursion methods for computing the fully normalized associated Legendre functions do not give the necessary precision under the IEEE 754-2008 floating-point standard, which creates problems of underflow and overflow. Analysis of the computation of the Legendre functions shows that the underflow problem is not dangerous by itself. The main problem, which generates gross errors in the calculations, is the effect of "absolute zero": once it appears in a forward column recursion, "absolute zero" converts to zero every value it multiplies, regardless of whether a zero result of the multiplication is genuine or not. Three methods of calculating the Legendre functions that remove the effect of "absolute zero" from the calculations are discussed here. These methods are also of interest because they have almost no limit on the maximum degree of the Legendre functions. It is shown that the numerical accuracy of the three methods is the same, but the CPU time of calculating the Legendre functions with the Fukushima method is minimal; the Fukushima method is therefore the best. Its main advantage is computational speed, an important factor when computing as many as 2,401,336 Legendre functions for EGM2008.
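    Both effects can be demonstrated with the sectoral part of the forward column recursion. The scaled variant below mimics the idea behind Fukushima's X-number arithmetic (a separate exponent carried alongside the mantissa) and is an illustrative assumption, not his implementation:

```python
import math

def sectoral_plain(m_max, u):
    """Forward sectoral recursion P_mm = u*sqrt((2m+1)/(2m))*P_{m-1,m-1}
    in plain double precision; for small u and large m the value
    underflows to an exact 0.0, the 'absolute zero' of the text."""
    p = 1.0
    for m in range(1, m_max + 1):
        p *= u * math.sqrt((2 * m + 1) / (2 * m))
    return p

def sectoral_scaled(m_max, u):
    """Same recursion carrying a separate power-of-ten exponent, so the
    mantissa never underflows; returns (mantissa, exp10) with the true
    value equal to mantissa * 10**exp10."""
    p, e = 1.0, 0
    for m in range(1, m_max + 1):
        p *= u * math.sqrt((2 * m + 1) / (2 * m))
        while 0 < abs(p) < 1e-280:   # renormalize before underflow
            p *= 1e280
            e -= 280
    return p, e
```

    Once the plain recursion hits 0.0, every subsequent multiplication stays zero regardless of the true value, which is exactly how "absolute zero" propagates through a column; the scaled recursion retains the value at any degree.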

  10. Projection of angular momentum via linear algebra

    DOE PAGES

    Johnson, Calvin W.; O'Mara, Kevin D.

    2017-12-01

    Projection of many-body states with good angular momentum from an initial state is usually accomplished by a three-dimensional integral. Here, we show how projection can instead be done by solving a straightforward system of linear equations. We demonstrate the method and give sample applications to 48Cr and 60Fe in the pf shell. This new projection scheme, which is competitive against the standard numerical quadrature, should also be applicable to other quantum numbers such as isospin and particle number.
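    A toy sketch of the idea (not the authors' implementation): in a basis where J² is known, the weights of the good-J components of a state satisfy a small linear (Vandermonde) system in the moments ⟨(J²)^k⟩, so projection reduces to a linear solve. The three-sector setup below is a hypothetical example:

```python
import numpy as np

# Hypothetical toy setup: J^2 diagonal in a basis containing the candidate
# angular momenta j = 0, 1, 2 with eigenvalues lam_j = j(j+1) and the
# usual 2j+1 degenerate m-states per sector.
js = np.array([0, 1, 2])
lams = js * (js + 1)                                  # 0, 2, 6
diag = np.concatenate([[l] * (2 * j + 1) for j, l in zip(js, lams)])
J2 = np.diag(diag.astype(float))

rng = np.random.default_rng(1)
psi = rng.normal(size=J2.shape[0])
psi /= np.linalg.norm(psi)

# Moments t_k = <psi|(J^2)^k|psi>, k = 0..2
t = np.array([psi @ np.linalg.matrix_power(J2, k) @ psi for k in range(3)])

# Solve sum_j lam_j**k * c_j = t_k: a Vandermonde system whose solution
# c_j is the fraction of |psi|^2 carried by angular momentum j.
A = np.vander(lams.astype(float), 3, increasing=True).T   # A[k, j] = lam_j**k
c = np.linalg.solve(A, t)
```

    The linear solve replaces the three-dimensional integral over Euler angles; the actual scheme works with rotated overlaps of the many-body state rather than this diagonal toy model.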

  11. Projection of angular momentum via linear algebra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Calvin W.; O'Mara, Kevin D.

    Projection of many-body states with good angular momentum from an initial state is usually accomplished by a three-dimensional integral. Here, we show how projection can instead be done by solving a straightforward system of linear equations. We demonstrate the method and give sample applications to 48Cr and 60Fe in the pf shell. This new projection scheme, which is competitive against the standard numerical quadrature, should also be applicable to other quantum numbers such as isospin and particle number.

  12. Captain upgrade CRM training: A new focus for enhanced flight operations

    NASA Technical Reports Server (NTRS)

    Taggart, William R.

    1993-01-01

    Crew Resource Management (CRM) research has resulted in numerous applied payoffs in flight training and the standardization of air carrier flight operations. This paper describes one example of how basic research into human factors and crew performance was used to create a specific training intervention for upgrading new captains at a major United States air carrier. The basis for the training is examined, along with some of the specific training methods used and several unexpected results.

  13. A Localized Ensemble Kalman Smoother

    NASA Technical Reports Server (NTRS)

    Butala, Mark D.

    2012-01-01

    Numerous geophysical inverse problems prove difficult because the available measurements are only indirectly related to the underlying unknown dynamic state, and the physics governing the system may involve imperfect models or unobserved parameters. Data assimilation addresses these difficulties by combining the measurements and physical knowledge. The main challenge in such problems usually involves their high dimensionality, for which standard statistical methods prove computationally intractable. This paper develops, and addresses the theoretical convergence of, a new high-dimensional Monte Carlo approach called the localized ensemble Kalman smoother.
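    As background for the ensemble approach, a stochastic ensemble Kalman *analysis* step (without the localization and smoothing extensions developed in the paper) can be sketched as follows; the variable names and the perturbed-observation variant are assumptions for illustration:

```python
import numpy as np

def enkf_update(ensemble, H, y, R, rng=None):
    """Stochastic ensemble Kalman analysis step.

    ensemble: (n_state, n_members) forecast ensemble
    H:        (n_obs, n_state) linear observation operator
    y:        (n_obs,) observation vector
    R:        (n_obs, n_obs) observation-error covariance
    Each member is updated with a perturbed observation, using the
    ensemble sample covariance in place of the full covariance."""
    rng = rng or np.random.default_rng(0)
    n, m = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = X @ X.T / (m - 1)                          # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    y_pert = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), R, size=m).T             # perturbed observations
    return ensemble + K @ (y_pert - H @ ensemble)
```

    The high-dimensional difficulty mentioned above enters through the sample covariance P, whose rank is limited by the ensemble size; localization tapers its spurious long-range entries.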

  14. Extremely Fast Numerical Integration of Ocean Surface Wave Dynamics: Building Blocks for a Higher Order Method

    DTIC Science & Technology

    2006-09-30

    …equation known as the Kadomtsev-Petviashvili (KP) equation: (η_t + c_0 η_x + α η η_x + β η_xxx)_x + γ η_yy = 0, where γ = c_0/2. The KdV equation… …using the spectral formulation of the Kadomtsev-Petviashvili equation, a standard equation for nonlinear, shallow-water wave dynamics that is a… …Kadomtsev-Petviashvili and nonlinear Schroedinger equations, and higher-order corrections have been developed as prerequisites to coding the Boussinesq and Euler…
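    The building block of such a spectral formulation is the FFT-based derivative on a periodic domain; a minimal sketch (assuming a uniform periodic grid, not the report's full KP solver) is:

```python
import numpy as np

def spectral_dx(eta, L):
    """Pseudospectral x-derivative on a periodic domain of length L:
    transform, multiply by i*k, transform back. This is the elementary
    operation from which spectral KP/KdV right-hand sides are built."""
    n = eta.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(eta)))
```

    For band-limited data the derivative is exact to round-off, which is what makes spectral integration of the KP equation so fast compared with low-order finite differences.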

  15. Automatic Scaling of Digisonde Ionograms Computer Program and Numerical Analysis Documentation,

    DTIC Science & Technology

    1983-02-01

    …and Huang, 1983). This method is ideally suited for autoscaled results, as discussed in Reference 1. The results of ARTIST are outputted to a standard… …ARTIST, the autoscaling routine, has been tested on some 8000 ionograms from Goose Bay, Labrador, for the months of January, April, July, and September of 1980… …Automatic scaling of Digisonde ionograms by the ARTIST system is discussed, and reference is made to ARTIST's success in scaling over 8000…

  16. Discretization chaos - Feedback control and transition to chaos

    NASA Technical Reports Server (NTRS)

    Grantham, Walter J.; Athalye, Amit M.

    1990-01-01

    Problems in the design of feedback controllers for chaotic dynamical systems are considered theoretically, focusing on two cases where chaos arises only when a nonchaotic continuous-time system is discretized into a simpler discrete-time system (exponential discretization and pseudo-Euler integration applied to Lotka-Volterra competition and prey-predator systems). Numerical simulation results are presented in extensive graphs and discussed in detail. It is concluded that care must be taken in applying standard dynamical-systems methods to control systems that may be discontinuous or nondifferentiable.
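    The pseudo-Euler phenomenon is easy to reproduce on a scalar example: explicit Euler applied to the nonchaotic logistic ODE x' = x(1 − x) yields a logistic-type map that becomes chaotic for large steps. This scalar sketch is an assumption for illustration (the paper itself uses Lotka-Volterra systems):

```python
def euler_logistic(x0, h, n_steps):
    """Explicit (pseudo-)Euler applied to the logistic ODE x' = x(1 - x),
    whose continuous solutions converge monotonically to x = 1.
    The discretization x_{n+1} = x_n + h*x_n*(1 - x_n) is conjugate to
    the logistic map and turns chaotic once h is large enough."""
    x = x0
    for _ in range(n_steps):
        x = x + h * x * (1.0 - x)
    return x
```

    With h = 0.01 the iteration converges to the equilibrium x = 1 exactly as the ODE does; with h = 2.7 it wanders over a bounded, non-repeating orbit, i.e. chaos created purely by the discretization.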

  17. Projection of angular momentum via linear algebra

    NASA Astrophysics Data System (ADS)

    Johnson, Calvin W.; O'Mara, Kevin D.

    2017-12-01

    Projection of many-body states with good angular momentum from an initial state is usually accomplished by a three-dimensional integral. We show how projection can instead be done by solving a straightforward system of linear equations. We demonstrate the method and give sample applications to 48Cr and 60Fe in the pf shell. This new projection scheme, which is competitive against the standard numerical quadrature, should also be applicable to other quantum numbers such as isospin and particle number.

  18. Multi-Dimensional Asymptotically Stable 4th Order Accurate Schemes for the Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Abarbanel, Saul; Ditkowski, Adi

    1996-01-01

    An algorithm is presented which solves the multi-dimensional diffusion equation on complex shapes to 4th-order accuracy and is asymptotically stable in time. This bounded-error result is achieved by constructing, on a rectangular grid, a differentiation matrix whose symmetric part is negative definite. The differentiation matrix accounts for the Dirichlet boundary condition by imposing penalty-like terms. Numerical examples in 2-D show that the method is effective even where standard schemes, stable by traditional definitions, fail.
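    For reference, the interior stencil of a standard 4th-order approximation to the diffusion operator u_xx (not the authors' penalty-based differentiation matrix) looks like this:

```python
import numpy as np

def d2_4th_order(u, h):
    """Interior 4th-order central approximation to u_xx on a uniform grid:
    (-u[i-2] + 16*u[i-1] - 30*u[i] + 16*u[i+1] - u[i+2]) / (12*h**2).
    Returns values at the interior points i = 2..n-3."""
    return (-u[:-4] + 16 * u[1:-3] - 30 * u[2:-2]
            + 16 * u[3:-1] - u[4:]) / (12.0 * h * h)
```

    The hard part the paper addresses is not this interior stencil but closing it near boundaries so that the resulting differentiation matrix stays negative semidefinite, which is what guarantees asymptotic stability in time.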

  19. Computational Aspects of the h, p and h-p Versions of the Finite Element Method.

    DTIC Science & Technology

    1987-03-01

    …Then we will study the dependence of the error ||e|| on the computational… …University, June 23-26, 1987; paper presented at the First World Congress on Computational… [23] Szabó, B.A.: PROBE: Theoretical Manual, NOETIC Tech… …ment agencies such as the National Bureau of Standards. To be an international center of study and research for foreign students in numerical…

  20. Management of giant omphaloceles: A systematic review of methods of staged surgical vs. nonoperative delayed closure.

    PubMed

    Bauman, Brent; Stephens, Daniel; Gershone, Hannah; Bongiorno, Connie; Osterholm, Erin; Acton, Robert; Hess, Donavon; Saltzman, Daniel; Segura, Bradley

    2016-10-01

    Despite the numerous methods of closure for giant omphaloceles, uncertainty persists regarding the most effective option. Our purpose was to review the literature to clarify the current methods being used and to determine superiority of either staged surgical procedures or nonoperative delayed closure in order to recommend a standard of care for the management of the giant omphalocele. Our initial database search resulted in 378 articles. After de-duplication and review, we requested 32 articles relevant to our topic that partially met our inclusion criteria. We found that 14 articles met our criteria; these 14 studies were included in our analysis. 10 studies met the inclusion criteria for nonoperative delayed closure, and 4 studies met the inclusion criteria for staged surgical management. Numerous methods for managing giant omphaloceles have been described. Many studies use topical therapy secondarily to failed surgical management. Primary nonoperative delayed management had a cumulative mortality of 21.8% vs. 23.4% in the staged surgical group. Time to initiation of full enteric feedings was lower in the nonoperative delayed group, at 14.6 days vs. 23.5 days. Despite advances in medical and surgical therapies, giant omphaloceles are still associated with a high mortality rate and numerous morbidities. In our analysis, we found that nonoperative delayed management with silver therapy was associated with lower mortality and shorter duration to full enteric feeding. We recommend that nonoperative delayed management be utilized as the primary therapy for the newborn with a giant omphalocele. Copyright © 2016. Published by Elsevier Inc.

  1. Comparative analysis of storage conditions and homogenization methods for tick and flea species for identification by MALDI-TOF MS.

    PubMed

    Nebbak, A; El Hamzaoui, B; Berenger, J-M; Bitam, I; Raoult, D; Almeras, L; Parola, P

    2017-12-01

    Ticks and fleas are vectors for numerous human and animal pathogens. Controlling them, which is important in combating such diseases, requires accurate identification, to distinguish between vector and non-vector species. Recently, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) was applied to the rapid identification of arthropods. The growth of this promising tool, however, requires guidelines to be established. To this end, standardization protocols were applied to species of Rhipicephalus sanguineus (Ixodida: Ixodidae) Latreille and Ctenocephalides felis felis (Siphonaptera: Pulicidae) Bouché, including the automation of sample homogenization using two homogenizer devices, and varied sample preservation modes for a period of 1-6 months. The MS spectra were then compared with those obtained from manual pestle grinding, the standard homogenization method. Both automated methods generated intense, reproducible MS spectra from fresh specimens. Frozen storage methods appeared to represent the best preservation mode, for up to 6 months, while storage in ethanol is also possible, with some caveats for tick specimens. Carnoy's buffer, however, was shown to be less compatible with MS analysis for the purpose of identifying ticks or fleas. These standard protocols for MALDI-TOF MS arthropod identification should be complemented by additional MS spectrum quality controls, to generalize their use in monitoring arthropods of medical interest. © 2017 The Royal Entomological Society.

  2. A novel transmitter IQ imbalance and phase noise suppression method utilizing pilots in PDM CO-OFDM system

    NASA Astrophysics Data System (ADS)

    Zhang, Haoyuan; Ma, Xiurong; Li, Pengru

    2018-04-01

    In this paper, we develop a novel pilot structure to suppress transmitter in-phase and quadrature (Tx IQ) imbalance, phase noise and channel distortion for polarization division multiplexed (PDM) coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. Compared with the conventional approach, our method not only significantly improves the system's tolerance of IQ imbalance and phase noise, but also provides a higher transmission speed. Numerical simulations of a PDM CO-OFDM system are used to validate the theoretical analysis under the following simulation conditions: an amplitude mismatch of 3 dB, a phase mismatch of 15°, a transmission bit rate of 100 Gb/s and 560 km of standard single-mode fiber. Moreover, the proposed method is 63% less computationally complex than the compared method.

  3. Standards and Methodologies for Characterizing Radiobiological Impact of High-Z Nanoparticles

    PubMed Central

    Subiel, Anna; Ashmore, Reece; Schettino, Giuseppe

    2016-01-01

    Research on the application of high-Z nanoparticles (NPs) in cancer treatment and diagnosis has recently been the subject of growing interest, with much promise being shown with regards to a potential transition into clinical practice. In spite of numerous publications related to the development and application of nanoparticles for use with ionizing radiation, the literature is lacking coherent and systematic experimental approaches to fully evaluate the radiobiological effectiveness of NPs, validate mechanistic models and allow direct comparison of the studies undertaken by various research groups. The lack of standards and established methodology is commonly recognised as a major obstacle for the transition of innovative research ideas into clinical practice. This review provides a comprehensive overview of radiobiological techniques and quantification methods used in in vitro studies on high-Z nanoparticles and aims to provide recommendations for future standardization for NP-mediated radiation research. PMID:27446499

  4. Radio frequency electromagnetic field compliance assessment of multi-band and MIMO equipped radio base stations.

    PubMed

    Thors, Björn; Thielens, Arno; Fridén, Jonas; Colombi, Davide; Törnevik, Christer; Vermeeren, Günter; Martens, Luc; Joseph, Wout

    2014-05-01

    In this paper, different methods for practical numerical radio frequency exposure compliance assessments of radio base station products were investigated. Both multi-band base station antennas and antennas designed for multiple input multiple output (MIMO) transmission schemes were considered. For the multi-band case, various standardized assessment methods were evaluated in terms of the resulting compliance distance with respect to the reference levels and basic restrictions of the International Commission on Non-Ionizing Radiation Protection. Both single-frequency and multiple-frequency (cumulative) compliance distances were determined using numerical simulations for a mobile communication base station antenna transmitting in four frequency bands between 800 and 2600 MHz. The assessments were conducted in terms of root-mean-squared electromagnetic fields, whole-body averaged specific absorption rate (SAR) and peak 10 g averaged SAR. In general, assessments based on peak field strengths were found to be less computationally intensive, but to lead to larger compliance distances than spatial averaging of electromagnetic fields used in combination with localized SAR assessments. For adult exposure, the results indicated that even shorter compliance distances were obtained by using assessments based on localized and whole-body SAR. Numerical simulations using base station products employing MIMO transmission schemes were performed as well and were in agreement with reference measurements. The applicability of various field combination methods for correlated exposure was investigated, and best-estimate methods were proposed. Our results showed that field combining methods generally considered conservative could be used to efficiently assess compliance boundary dimensions of single- and dual-polarized multicolumn base station antennas with only minor increases in compliance distances. © 2014 Wiley Periodicals, Inc.
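    For intuition only, a far-field, free-space estimate of a field-strength compliance distance follows from E(d) = sqrt(30·P·G)/d; this back-of-envelope formula ignores near-field effects, spatial averaging and SAR, all of which the study assesses properly, and the parameter values below are hypothetical:

```python
import math

def compliance_distance(power_w, gain_lin, e_limit_v_per_m):
    """Far-field estimate of the distance at which the RMS E-field of an
    antenna with input power P (W) and linear gain G drops to a given
    reference level E_lim (V/m): solve E(d) = sqrt(30*P*G)/d = E_lim."""
    return math.sqrt(30.0 * power_w * gain_lin) / e_limit_v_per_m
```

    Because the field falls off as 1/d, doubling the radiated power increases the estimated compliance distance only by a factor of sqrt(2), which is why cumulative multi-band assessments do not simply add distances band by band.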

  5. Streaming Potential Modeling to Understand the Identification of Hydraulically Active Fractures and Fracture-Matrix Fluid Interactions Using the Self-Potential Method

    NASA Astrophysics Data System (ADS)

    Jougnot, D.; Roubinet, D.; Linde, N.; Irving, J.

    2016-12-01

    Quantifying fluid flow in fractured media is a critical challenge in a wide variety of research fields and applications. To this end, geophysics offers a variety of tools that can provide important information on subsurface physical properties in a noninvasive manner. Most geophysical techniques infer fluid flow by data or model differencing in time or space (i.e., they are not directly sensitive to flow occurring at the time of the measurements). An exception is the self-potential (SP) method. When water flows in the subsurface, an excess of charge in the pore water that counterbalances electric charges at the mineral-pore water interface gives rise to a streaming current and an associated streaming potential. The latter can be measured with the SP technique, meaning that the method is directly sensitive to fluid flow. Whereas numerous field experiments suggest that the SP method may allow for the detection of hydraulically active fractures, suitable tools for numerically modeling streaming potentials in fractured media do not exist. Here, we present a highly efficient two-dimensional discrete-dual-porosity approach for solving the fluid-flow and associated self-potential problems in fractured domains. Our approach is specifically designed for complex fracture networks that cannot be investigated using standard numerical methods due to computational limitations. We then simulate SP signals associated with pumping conditions for a number of examples to show that (i) accounting for matrix fluid flow is essential for accurate SP modeling and (ii) the sensitivity of SP to hydraulically active fractures is intimately linked with fracture-matrix fluid interactions. This implies that fractures associated with strong SP amplitudes are likely to be hydraulically conductive, attracting fluid flow from the surrounding matrix.

  6. Cantilever spring constant calibration using laser Doppler vibrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohler, Benjamin

    2007-06-15

    Uncertainty in cantilever spring constants is a critical issue in atomic force microscopy (AFM) force measurements. Though numerous methods exist for calibrating cantilever spring constants, the accuracy of these methods can be limited both by the physical models themselves and by uncertainties in their experimental implementation. Here we report the results from two of the most common calibration methods, the thermal tune method and the Sader method. These were implemented on a standard AFM system as well as using laser Doppler vibrometry (LDV). Using LDV eliminates some uncertainties associated with optical lever detection on an AFM; it also offers considerably higher signal-to-noise deflection measurements. We find that AFM and LDV result in similar uncertainty in the calibrated spring constants, about 5%, using either the thermal tune or Sader method, provided that certain limitations of the methods and instrumentation are observed.
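    The core of the thermal tune method is the equipartition theorem; a minimal sketch (omitting the mode-shape and optical-lever correction factors that real implementations apply) is:

```python
KB = 1.380649e-23  # Boltzmann constant (J/K)

def thermal_tune_k(mean_square_deflection_m2, temperature_k=295.0):
    """Equipartition estimate of a cantilever spring constant:
    (1/2)*k*<x^2> = (1/2)*kB*T  =>  k = kB*T / <x^2>,
    where <x^2> is the thermally driven mean-square deflection (m^2).
    Practical thermal-tune implementations multiply this by mode-shape
    and detection-sensitivity corrections omitted here."""
    return KB * temperature_k / mean_square_deflection_m2
```

    For a soft 0.1 N/m lever at room temperature the RMS thermal deflection is only about 0.2 nm, which is why the higher signal-to-noise of LDV deflection measurements matters for this calibration.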

  7. Determination of sex in South African Blacks by discriminant function analysis of mandibular linear dimensions: A preliminary investigation using the Zulu local population.

    PubMed

    Franklin, Daniel; O'Higgins, Paul; Oxnard, Charles E; Dadour, Ian

    2006-12-01

    The determination of sex is a critical component of forensic anthropological investigation. The literature attests to numerous metrical standards, each utilizing different skeletal elements, for sex determination in South African Blacks. Metrical standards are popular because they provide a high degree of expected accuracy and are less error-prone than subjective nonmetric visual techniques. We note, however, that there appear to be no established metric mandible discriminant function standards for sex determination in this population. We report here on a preliminary investigation designed to evaluate whether the mandible is a practical element for sex determination in South African Blacks. The sample analyzed comprises 40 nonpathological Zulu individuals drawn from the R.A. Dart Collection. Ten linear measurements, obtained from mathematically transformed three-dimensional landmark data, are analyzed using basic univariate statistics and discriminant function analyses. Seven of the 10 measurements examined are found to be sexually dimorphic; the dimensions of the ramus are most dimorphic. The sex classification accuracy of the discriminant functions ranged from 72.5 to 87.5% for the univariate method, 92.5% for the stepwise method, and 57.5 to 95% for the direct method. We conclude that the mandible is an extremely useful element for sex determination in this population.

  8. Cutibacterium acnes molecular typing: time to standardize the method.

    PubMed

    Dagnelie, M-A; Khammari, A; Dréno, B; Corvec, S

    2018-03-12

    The Gram-positive, anaerobic/aerotolerant bacterium Cutibacterium acnes is a commensal of healthy human skin; it is subdivided into six main phylogenetic groups or phylotypes: IA1, IA2, IB, IC, II and III. To decipher how far specific subgroups of C. acnes are involved in disease physiopathology, different molecular typing methods have been developed to identify these subgroups: i.e. phylotypes, clonal complexes, and types defined by single-locus sequence typing (SLST). However, as several molecular typing methods have been developed over the last decade, it has become a difficult task to compare the results from one article to another. Based on the scientific literature, the aim of this narrative review is to propose a standardized method to perform molecular typing of C. acnes, according to the degree of resolution needed (phylotypes, clonal complexes, or SLST types). We discuss the existing different typing methods from a critical point of view, emphasizing their advantages and drawbacks, and we identify the most frequently used methods. We propose a consensus algorithm according to the needed phylogeny resolution level. We first propose to use multiplex PCR for phylotype identification, MLST9 for clonal complex determination, and SLST for phylogeny investigation including numerous isolates. There is an obvious need to create a consensus about molecular typing methods for C. acnes. This standardization will facilitate the comparison of results between one article and another, and also the interpretation of clinical data. Copyright © 2018 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.

  9. Reusable design: A proposed approach to Public Health Informatics system design

    PubMed Central

    2011-01-01

    Background Since it was first defined in 1995, Public Health Informatics (PHI) has become a recognized discipline, with a research agenda, defined domain-specific competencies and a specialized corpus of technical knowledge. Information systems form a cornerstone of PHI research and implementation, representing significant progress for the nascent field. However, PHI does not advocate or incorporate standard, domain-appropriate design methods for implementing public health information systems. Reusable design is generalized design advice that can be reused in a range of similar contexts. We propose that PHI create and reuse information design knowledge by taking a systems approach that incorporates design methods from the disciplines of Human-Computer Interaction, Interaction Design and other related disciplines. Discussion Although PHI operates in a domain with unique characteristics, many design problems in public health correspond to classic design problems, suggesting that existing design methods and solution approaches are applicable to the design of public health information systems. Among the numerous methodological frameworks used in other disciplines, we identify scenario-based design and participatory design as two widely-employed methodologies that are appropriate for adoption as PHI standards. We make the case that these methods show promise to create reusable design knowledge in PHI. Summary We propose the formalization of a set of standard design methods within PHI that can be used to pursue a strategy of design knowledge creation and reuse for cost-effective, interoperable public health information systems. We suggest that all public health informaticians should be able to use these design methods and the methods should be incorporated into PHI training. PMID:21333000

  10. Hidden Attractors in Dynamical Systems. From Hidden Oscillations in Hilbert-Kolmogorov, Aizerman, and Kalman Problems to Hidden Chaotic Attractor in Chua Circuits

    NASA Astrophysics Data System (ADS)

    Leonov, G. A.; Kuznetsov, N. V.

    From a computational point of view, attractors in nonlinear dynamical systems can be regarded as self-excited or hidden. Self-excited attractors can be localized numerically by a standard computational procedure in which, after a transient process, a trajectory starting from a point on an unstable manifold in a neighborhood of an equilibrium reaches a state of oscillation, so one can easily identify the attractor. In contrast, the basin of attraction of a hidden attractor does not intersect small neighborhoods of equilibria. Classical attractors are self-excited and can therefore be obtained numerically by the standard computational procedure; for the localization of hidden attractors it is necessary to develop special procedures, since there are no similar transient processes leading to such attractors. The problem of investigating hidden oscillations first arose in the second part of Hilbert's 16th problem (1900). The first nontrivial results were obtained in Bautin's works on constructing nested limit cycles in quadratic systems, which showed the necessity of studying hidden oscillations for solving this problem. Later, the problem of analyzing hidden oscillations arose from engineering problems in automatic control. In the 1950s-60s, investigations of the widely known conjectures of Markus-Yamabe, Aizerman, and Kalman on absolute stability led to the finding of hidden oscillations in automatic control systems with a unique stable stationary point. In 1961, Gubar revealed a gap in Kapranov's work on phase-locked loops (PLLs) and showed the possibility of the existence of hidden oscillations in PLLs. At the end of the last century, difficulties in analyzing hidden oscillations arose in simulations of drilling systems and aircraft control systems (anti-windup), which caused crashes.
Further investigations of hidden oscillations were greatly encouraged by the present authors' discovery, in 2010, of the first chaotic hidden attractor in Chua's circuit. This survey is dedicated to efficient analytical-numerical methods for the study of hidden oscillations. Here, an attempt is made to reflect the current trends in the synthesis of analytical and numerical methods.

  11. 3D early embryogenesis image filtering by nonlinear partial differential equations.

    PubMed

    Krivá, Z; Mikula, K; Peyriéras, N; Rizzi, B; Sarti, A; Stasová, O

    2010-08-01

    We present nonlinear diffusion equations, numerical schemes to solve them, and their application to filtering 3D images obtained from laser scanning microscopy (LSM) of living zebrafish embryos, with the goal of identifying the optimal filtering method and its parameters. In large-scale applications dealing with the analysis of 3D+time embryogenesis images, an important objective is the correct detection of the number and position of cell nuclei, yielding the spatio-temporal cell lineage tree of embryogenesis. Filtering is the first and necessary step of the image analysis chain and must lead to correct results, removing the noise, sharpening the nuclei edges, and correcting the acquisition errors related to spuriously connected subregions. In this paper we study such properties for the regularized Perona-Malik model and for the generalized mean curvature flow equations in the level-set formulation. A comparison with other nonlinear diffusion filters, such as tensor anisotropic diffusion and Beltrami flow, is also included. All numerical schemes are based on the same discretization principles, i.e. the finite volume method in space and a semi-implicit scheme in time, for solving nonlinear partial differential equations. These numerical schemes are unconditionally stable, fast, and naturally parallelizable. The filtering results are evaluated and compared first using the mean Hausdorff distance between a gold standard and different isosurfaces of the original and filtered data. Then, the number of isosurface connected components in a region of interest (ROI) detected in the original data and after filtering is compared with the corresponding correct number of nuclei in the gold standard. Such analysis proves the robustness and reliability of edge-preserving nonlinear diffusion filtering for this type of data and leads to the optimal filtering parameters for the studied models and numerical schemes.
Further comparisons address the ability to split very close objects that are artificially connected due to acquisition errors intrinsic to the physics of LSM. In all studied aspects, the nonlinear diffusion filter called geodesic mean curvature flow (GMCF) performed best. Copyright 2010 Elsevier B.V. All rights reserved.
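    For orientation, a minimal explicit sketch of Perona-Malik-type diffusion is given below. It deliberately simplifies what the paper uses (the paper's schemes are finite-volume in space and semi-implicit in time, and the model is regularized); here an unregularized explicit step with periodic boundaries illustrates the edge-preserving behavior on a noisy step image:

```python
import numpy as np

def perona_malik_step(u, dt=0.2, K=0.1):
    """One explicit step of (unregularized) Perona-Malik diffusion on a 2D image.
    Diffusivity g = 1 / (1 + (d/K)^2) slows diffusion across strong edges.
    Periodic boundaries via np.roll, for brevity only."""
    dn = np.roll(u, 1, axis=0) - u
    ds = np.roll(u, -1, axis=0) - u
    de = np.roll(u, -1, axis=1) - u
    dw = np.roll(u, 1, axis=1) - u
    g = lambda d: 1.0 / (1.0 + (d / K) ** 2)
    return u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[:, 32:] = 1.0                                  # a sharp vertical edge
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
filtered = noisy.copy()
for _ in range(20):
    filtered = perona_malik_step(filtered)
# Noise in flat regions is smoothed, while the unit-height edge survives
# because the diffusivity across it is ~1/(1 + (1/K)^2), i.e. tiny.
```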

  12. Fourth order difference methods for hyperbolic IBVP's

    NASA Technical Reports Server (NTRS)

    Gustafsson, Bertil; Olsson, Pelle

    1994-01-01

    Fourth order difference approximations of initial-boundary value problems for hyperbolic partial differential equations are considered. We use the method of lines approach with both explicit and compact implicit difference operators in space. The explicit operator satisfies an energy estimate leading to strict stability. For the implicit operator we develop boundary conditions and give a complete proof of strong stability using the Laplace transform technique. We also present numerical experiments for the linear advection equation and Burgers' equation with discontinuities in the solution or in its derivative. The first equation is used for modeling contact discontinuities in fluid dynamics, the second one for modeling shocks and rarefaction waves. The time discretization is done with a third order Runge-Kutta TVD method. For solutions with discontinuities in the solution itself we add a filter based on second order viscosity. In the case of the nonlinear Burgers' equation we use a flux splitting technique that results in an energy estimate for certain difference approximations, in which case an entropy condition is also fulfilled. In particular we demonstrate that the unsplit conservative form produces a non-physical shock instead of the physically correct rarefaction wave. In the numerical experiments we compare our fourth order methods with a standard second order one and with a third order TVD method. The results show that the fourth order methods are the only ones that give good results for all the considered test problems.
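    The accuracy gap between second- and fourth-order central differences (before boundaries, filters, and time discretization enter) can be seen on a smooth periodic function; this small check is illustrative and unrelated to the paper's specific operators:

```python
import numpy as np

N = 64
x = 2 * np.pi * np.arange(N) / N
h = x[1] - x[0]
u = np.sin(x)
exact = np.cos(x)

def shift(v, k):
    """Periodic grid shift by k points."""
    return np.roll(v, -k)

# Standard 2nd-order and 4th-order central difference approximations of u_x.
d2 = (shift(u, 1) - shift(u, -1)) / (2 * h)
d4 = (8 * (shift(u, 1) - shift(u, -1)) - (shift(u, 2) - shift(u, -2))) / (12 * h)

err2 = np.abs(d2 - exact).max()   # ~ h^2 / 6
err4 = np.abs(d4 - exact).max()   # ~ h^4 / 30
```

    On this grid the fourth-order error is two to three orders of magnitude smaller, the advantage that motivates fourth-order operators for smooth regions of the solution.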

  13. Summary of tracking and identification methods

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Yang, Chun; Kadar, Ivan

    2014-06-01

    Over the last two decades, many solutions have arisen to combine target tracking estimation with classification methods. Target tracking includes developments from linear to non-linear and Gaussian to non-Gaussian processing. Pattern recognition includes detection, classification, recognition, and identification methods. Integrating tracking and pattern recognition has resulted in numerous approaches, which this paper seeks to organize. We discuss the terminology so as to have a common framework for various standards such as the NATO STANAG 4162 - Identification Data Combining Process. In a use case, we provide a comparative example highlighting that location information, together with additional mission objectives from geographical, human, social, cultural, and behavioral modeling, is needed to determine identification, since classification alone does not allow determining identification or intent.
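    As background for the tracking side, the linear-Gaussian baseline that the surveyed non-linear/non-Gaussian methods generalize is the Kalman filter. A minimal 1D constant-velocity sketch follows; the process and measurement noise covariances here are arbitrary illustrative values, not from any surveyed system:

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 0.01 * np.eye(2)                    # process noise covariance (assumed)
R = np.array([[0.25]])                  # measurement noise covariance (assumed)

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(1)
truth = np.array([0.0, 1.0])            # starts at 0, moves 1 unit per step
x, P = np.zeros(2), 10.0 * np.eye(2)
for _ in range(50):
    truth = F @ truth
    z = H @ truth + 0.5 * rng.standard_normal(1)   # noisy position measurement
    x, P = kalman_step(x, P, z)
```

    Classification features would be fused alongside this state estimate in the integrated tracking-and-identification architectures the paper organizes.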

  14. Validation of a Three-Dimensional Method for Counting and Sizing Podocytes in Whole Glomeruli

    PubMed Central

    van der Wolde, James W.; Schulze, Keith E.; Short, Kieran M.; Wong, Milagros N.; Bensley, Jonathan G.; Cullen-McEwen, Luise A.; Caruana, Georgina; Hokke, Stacey N.; Li, Jinhua; Firth, Stephen D.; Harper, Ian S.; Nikolic-Paterson, David J.; Bertram, John F.

    2016-01-01

    Podocyte depletion is sufficient for the development of numerous glomerular diseases and can be absolute (loss of podocytes) or relative (reduced number of podocytes per volume of glomerulus). Commonly used methods to quantify podocyte depletion introduce bias, whereas gold standard stereologic methodologies are time consuming and impractical. We developed a novel approach for assessing podocyte depletion in whole glomeruli that combines immunofluorescence, optical clearing, confocal microscopy, and three-dimensional analysis. We validated this method in a transgenic mouse model of selective podocyte depletion, in which we determined dose-dependent alterations in several quantitative indices of podocyte depletion. This new approach provides a quantitative tool for the comprehensive and time-efficient analysis of podocyte depletion in whole glomeruli. PMID:26975438

  15. Sensitivity analysis and optimization method for the fabrication of one-dimensional beam-splitting phase gratings

    PubMed Central

    Pacheco, Shaun; Brand, Jonathan F.; Zaverton, Melissa; Milster, Tom; Liang, Rongguang

    2015-01-01

    A method to design one-dimensional beam-splitting phase gratings with low sensitivity to fabrication errors is described. The method optimizes the phase function of a grating by minimizing the integrated variance of the energy of each output beam over a range of fabrication errors. Numerical results for three 1x9 beam-splitting phase gratings are given. Two optimized gratings with low sensitivity to fabrication errors were compared with a grating designed for optimal efficiency. These three gratings were fabricated using gray-scale photolithography. The standard deviations of the 9 outgoing beam energies in the optimized gratings were 2.3 and 3.4 times lower than in the optimal-efficiency grating. PMID:25969268
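    The key quantities in such a design problem can be sketched as follows: the energies of the diffraction orders are the squared Fourier coefficients of exp(i*phase) over one grating period, and a fabrication-sensitivity objective integrates the variance of those energies over a range of etch-depth errors. The binary 0/pi profile and the depth-error model below are hypothetical, not the paper's optimized designs:

```python
import numpy as np

def order_energies(phase, n_orders=9):
    """Far-field energies of the central diffraction orders of a phase grating.
    Order m gets energy |c_m|^2, where c_m are the Fourier coefficients of
    exp(i*phase) over one period."""
    c = np.fft.fft(np.exp(1j * phase)) / phase.size
    m = np.arange(-(n_orders // 2), n_orders // 2 + 1)
    return np.abs(c[m]) ** 2   # negative indices wrap to negative orders

# Hypothetical binary 0/pi phase profile; a fabrication error scales the depth.
x = np.linspace(0, 1, 256, endpoint=False)
base = np.pi * (x < 0.5)

def objective(depth_errors=(0.9, 1.0, 1.1)):
    """Integrated variance of the output-beam energies over depth errors,
    the kind of quantity the optimization would minimize."""
    E = np.array([order_energies(eps * base) for eps in depth_errors])
    return E.var(axis=0).sum()
```

    By Parseval's theorem the energies over all orders sum to one, so the returned central-order energies sum to at most one.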

  16. An improved K-means clustering method for cDNA microarray image segmentation.

    PubMed

    Wang, T N; Li, T J; Shao, G F; Wu, S X

    2015-07-14

    Microarray technology is a powerful tool for human genetic research and other biomedical applications. Numerous improvements to the standard K-means algorithm have been carried out to complete the image segmentation step. However, most of the previous studies classify the image into two clusters. In this paper, we propose a novel K-means algorithm, which first classifies the image into three clusters; one of the three clusters is then designated as the background region and the other two clusters as the foreground region. The proposed method was evaluated on six different data sets. The analyses of accuracy, efficiency, expression values, special gene spots, and noise images demonstrate the effectiveness of our method in improving the segmentation quality.
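    The three-cluster idea can be sketched on scalar pixel intensities with a plain k-means loop; deterministic min/mean/max seeding keeps the example reproducible, and the synthetic data below are illustrative, not real microarray images:

```python
import numpy as np

def kmeans3(values, iters=20):
    """Plain 1D k-means with k = 3 (a sketch of the three-cluster idea)."""
    centers = np.array([values.min(), values.mean(), values.max()])
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        centers = np.array([values[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(3)])
    return labels, centers

# Synthetic patch intensities: dark background plus two brighter spot groups.
rng = np.random.default_rng(2)
pix = np.concatenate([rng.normal(0.1, 0.02, 500),   # background
                      rng.normal(0.5, 0.05, 200),   # dim spots
                      rng.normal(0.9, 0.05, 100)])  # bright spots
labels, centers = kmeans3(pix)
# Darkest cluster -> background; the remaining two clusters -> foreground.
foreground_mask = labels != centers.argmin()
```

    Segmenting into three clusters and merging the two brighter ones avoids the misclassification that a direct two-cluster split causes when the spots themselves have two intensity populations.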

  17. A high-throughput microtiter plate based method for the determination of peracetic acid and hydrogen peroxide.

    PubMed

    Putt, Karson S; Pugh, Randall B

    2013-01-01

    Peracetic acid is gaining usage in numerous industries that have found a myriad of uses for its antimicrobial activity. However, rapid high-throughput quantitation methods for peracetic acid and hydrogen peroxide are lacking. Herein, we describe the development of a high-throughput microtiter plate based assay built upon well-known and trusted titration chemical reactions. The adaptation of these titration chemistries to rapid plate-based absorbance methods for the sequential determination of hydrogen peroxide specifically and of the total amount of peroxides present in solution is described. The results of these methods were compared to those of a standard titration and found to be in good agreement. Additionally, the utility of the developed method is demonstrated through the generation of degradation curves of both peracetic acid and hydrogen peroxide in a mixed solution.

  18. A High-Throughput Microtiter Plate Based Method for the Determination of Peracetic Acid and Hydrogen Peroxide

    PubMed Central

    Putt, Karson S.; Pugh, Randall B.

    2013-01-01

    Peracetic acid is gaining usage in numerous industries that have found a myriad of uses for its antimicrobial activity. However, rapid high-throughput quantitation methods for peracetic acid and hydrogen peroxide are lacking. Herein, we describe the development of a high-throughput microtiter plate based assay built upon well-known and trusted titration chemical reactions. The adaptation of these titration chemistries to rapid plate-based absorbance methods for the sequential determination of hydrogen peroxide specifically and of the total amount of peroxides present in solution is described. The results of these methods were compared to those of a standard titration and found to be in good agreement. Additionally, the utility of the developed method is demonstrated through the generation of degradation curves of both peracetic acid and hydrogen peroxide in a mixed solution. PMID:24260173

  19. The Evolution of Strain Typing in the Mycobacterium tuberculosis Complex.

    PubMed

    Merker, Matthias; Kohl, Thomas A; Niemann, Stefan; Supply, Philip

    2017-01-01

    Tuberculosis (TB) is a contagious disease with a complex epidemiology. Therefore, molecular typing (genotyping) of Mycobacterium tuberculosis complex (MTBC) strains is of primary importance to effectively guide outbreak investigations, define transmission dynamics and assist global epidemiological surveillance of the disease. Large-scale genotyping is also needed to get better insights into the biological diversity and the evolution of the pathogen. Thanks to its shorter turnaround and simple numerical nomenclature system, mycobacterial interspersed repetitive unit-variable-number tandem repeat (MIRU-VNTR) typing, based on 24 standardized plus 4 hypervariable loci, optionally combined with spoligotyping, has replaced IS6110 DNA fingerprinting over the last decade as a gold standard among classical strain typing methods for many applications. With the continuous progress and decreasing costs of next-generation sequencing (NGS) technologies, typing based on whole genome sequencing (WGS) is now increasingly performed for near complete exploitation of the available genetic information. However, some important challenges remain such as the lack of standardization of WGS analysis pipelines, the need of databases for sharing WGS data at a global level, and a better understanding of the relevant genomic distances for defining clusters of recent TB transmission in different epidemiological contexts. This chapter provides an overview of the evolution of genotyping methods over the last three decades, which culminated with the development of WGS-based methods. It addresses the relative advantages and limitations of these techniques, indicates current challenges and potential directions for facilitating standardization of WGS-based typing, and provides suggestions on what method to use depending on the specific research question.

  20. A new flux conserving Newton's method scheme for the two-dimensional, steady Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Scott, James R.; Chang, Sin-Chung

    1993-01-01

    A new numerical method is developed for the solution of the two-dimensional, steady Navier-Stokes equations. The method that is presented differs in significant ways from the established numerical methods for solving the Navier-Stokes equations. The major differences are described. First, the focus of the present method is on satisfying flux conservation in an integral formulation, rather than on simulating conservation laws in their differential form. Second, the present approach provides a unified treatment of the dependent variables and their unknown derivatives. All are treated as unknowns together to be solved for through simulating local and global flux conservation. Third, fluxes are balanced at cell interfaces without the use of interpolation or flux limiters. Fourth, flux conservation is achieved through the use of discrete regions known as conservation elements and solution elements. These elements are not the same as the standard control volumes used in the finite volume method. Fifth, the discrete approximation obtained on each solution element is a functional solution of both the integral and differential form of the Navier-Stokes equations. Finally, the method that is presented is a highly localized approach in which the coupling to nearby cells is only in one direction for each spatial coordinate, and involves only the immediately adjacent cells. A general third-order formulation for the steady, compressible Navier-Stokes equations is presented, and then a Newton's method scheme is developed for the solution of incompressible, low Reynolds number channel flow. It is shown that the Jacobian matrix is nearly block diagonal if the nonlinear system of discrete equations is arranged appropriately and a proper pivoting strategy is used. Numerical results are presented for Reynolds numbers of 100, 1000, and 2000. 
Finally, it is shown that the present scheme can resolve the developing channel flow boundary layer using as few as six to ten cells per channel width, depending on the Reynolds number.
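    For context, the Newton iteration underlying such schemes is the standard one for a nonlinear system F(x) = 0: linearize with the Jacobian, solve the linear system, update. A generic toy sketch (not the paper's flux-conserving discretization) shows the characteristic quadratic convergence:

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, max_iter=20):
    """Plain Newton's method for a nonlinear system F(x) = 0."""
    x = np.asarray(x0, dtype=float)
    residuals = []
    for _ in range(max_iter):
        r = F(x)
        residuals.append(np.linalg.norm(r))
        if residuals[-1] < tol:
            break
        x = x - np.linalg.solve(J(x), r)   # Newton update: solve J dx = r
    return x, residuals

# Toy system: x^2 + y^2 = 4, x*y = 1 (illustrative, not a flow problem).
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
x, residuals = newton(F, J, [2.0, 0.5])
```

    A nearly block diagonal Jacobian, as reported in the paper, lets the linear solve in each iteration be done cheaply and locally.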

  1. A positivity-preserving, implicit defect-correction multigrid method for turbulent combustion

    NASA Astrophysics Data System (ADS)

    Wasserman, M.; Mor-Yossef, Y.; Greenberg, J. B.

    2016-07-01

    A novel, robust multigrid method for the simulation of turbulent and chemically reacting flows is developed. A survey of previous attempts at implementing multigrid for the problems at hand indicated extensive use of artificial stabilization to overcome numerical instability arising from non-linearity of turbulence and chemistry model source-terms, small-scale physics of combustion, and loss of positivity. These issues are addressed in the current work. The highly stiff Reynolds-averaged Navier-Stokes (RANS) equations, coupled with turbulence and finite-rate chemical kinetics models, are integrated in time using the unconditionally positive-convergent (UPC) implicit method. The scheme is successfully extended in this work for use with chemical kinetics models, in a fully-coupled multigrid (FC-MG) framework. To tackle the degraded performance of multigrid methods for chemically reacting flows, two major modifications are introduced with respect to the basic, Full Approximation Storage (FAS) approach. First, a novel prolongation operator that is based on logarithmic variables is proposed to prevent loss of positivity due to coarse-grid corrections. Together with the extended UPC implicit scheme, the positivity-preserving prolongation operator guarantees unconditional positivity of turbulence quantities and species mass fractions throughout the multigrid cycle. Second, to improve the coarse-grid correction obtained in localized regions of high chemical activity, a modified defect correction procedure is devised, and successfully applied for the first time to simulate turbulent, combusting flows. The proposed modifications to the standard multigrid algorithm create a well-rounded and robust numerical method that provides accelerated convergence, while unconditionally preserving the positivity of model equation variables. 
Numerical simulations of various flows involving premixed combustion demonstrate that the proposed MG method increases the efficiency by a factor of up to eight times with respect to an equivalent single-grid method, and by two times with respect to an artificially-stabilized MG method.
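    The positivity-preserving idea behind the logarithmic prolongation can be sketched in 1D: an additive FAS coarse-grid correction can drive a positive quantity negative, whereas interpolating the correction in log variables yields a multiplicative update that is positive by construction. The operators and numbers below are illustrative, not the paper's:

```python
import numpy as np

def prolong_linear(qc):
    """Linear prolongation of a coarse 1D field to the next finer grid."""
    qf = np.empty(2 * qc.size - 1)
    qf[0::2] = qc
    qf[1::2] = 0.5 * (qc[:-1] + qc[1:])
    return qf

def fas_correction(qf, qc_new, qc_old, log_variables):
    """FAS coarse-grid correction of the fine field qf. The additive form can
    drive positive quantities negative; interpolating the correction in log
    variables gives a multiplicative update that stays positive."""
    if log_variables:
        return qf * np.exp(prolong_linear(np.log(qc_new) - np.log(qc_old)))
    return qf + prolong_linear(qc_new - qc_old)

# A positive turbulence-like quantity whose fine-grid values dip far below the
# coarse-grid values, plus a drastic downward coarse-grid correction.
qf = np.array([1.0, 1e-3, 1.0, 1e-3, 1.0])
qc_old = qf[0::2]
qc_new = 0.01 * qc_old
additive = fas_correction(qf, qc_new, qc_old, log_variables=False)
log_form = fas_correction(qf, qc_new, qc_old, log_variables=True)
```

    Here the additive update pushes the small fine-grid values below zero, while the log-variable update rescales them and remains positive everywhere.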

  2. 40 CFR 94.8 - Exhaust emission standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) Engines fueled with alcohol fuel shall comply with THCE+NOX standards that are numerically equivalent to... advance by the Administrator. (g) Standards for alternative fuels. The standards described in this section apply to compression-ignition engines, irrespective of fuel, with the following two exceptions for...

  3. A spectral, quasi-cylindrical and dispersion-free Particle-In-Cell algorithm

    DOE PAGES

    Lehe, Remi; Kirchen, Manuel; Andriyash, Igor A.; ...

    2016-02-17

    We propose a spectral Particle-In-Cell (PIC) algorithm that is based on the combination of a Hankel transform and a Fourier transform. For physical problems that have close-to-cylindrical symmetry, this algorithm can be much faster than full 3D PIC algorithms. In addition, unlike standard finite-difference PIC codes, the proposed algorithm is free of spurious numerical dispersion in vacuum. This algorithm is benchmarked in several situations that are of interest for laser-plasma interactions. These benchmarks show that it avoids a number of numerical artifacts that would otherwise affect the physics in a standard PIC algorithm, including the zero-order numerical Cherenkov effect.
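    The dispersion-free property of spectral derivatives relative to finite differences can be seen directly in Fourier space: a 2nd-order central difference replaces the exact symbol i*k with i*sin(k*h)/h, making the numerical phase speed depend on k, while an FFT-based derivative reproduces i*k for every resolved mode. A minimal 1D comparison (illustrative, not the paper's quasi-cylindrical algorithm):

```python
import numpy as np

N = 32
h = 2 * np.pi / N
x = h * np.arange(N)
k = 5
u = np.sin(k * x)
exact = k * np.cos(k * x)

# Spectral (FFT) derivative: multiplies each Fourier mode by i*k exactly.
ik = 1j * np.fft.fftfreq(N, d=1.0 / N)   # integer wavenumbers on [0, 2*pi)
du_spec = np.real(np.fft.ifft(ik * np.fft.fft(u)))

# Second-order central difference: effective symbol i*sin(k*h)/h, so the
# numerical phase speed depends on k -- this is numerical dispersion.
du_fd = (np.roll(u, -1) - np.roll(u, 1)) / (2 * h)

err_spec = np.abs(du_spec - exact).max()
err_fd = np.abs(du_fd - exact).max()
```

    In a PIC code the same mismatch between numerical and physical phase speeds is what produces artifacts such as the numerical Cherenkov effect.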

  4. Review of Real-Time 3-Dimensional Image Guided Radiation Therapy on Standard-Equipped Cancer Radiation Therapy Systems: Are We at the Tipping Point for the Era of Real-Time Radiation Therapy?

    PubMed

    Keall, Paul J; Nguyen, Doan Trang; O'Brien, Ricky; Zhang, Pengpeng; Happersett, Laura; Bertholet, Jenny; Poulsen, Per R

    2018-04-14

    To review real-time 3-dimensional (3D) image guided radiation therapy (IGRT) on standard-equipped cancer radiation therapy systems, focusing on clinically implemented solutions. Three groups in 3 continents have clinically implemented novel real-time 3D IGRT solutions on standard-equipped linear accelerators. These technologies encompass kilovoltage, combined megavoltage-kilovoltage, and combined kilovoltage-optical imaging. The cancer sites treated span pelvic and abdominal tumors for which respiratory motion is present. For each method the 3D-measured motion during treatment is reported. After treatment, dose reconstruction was used to assess the treatment quality in the presence of motion with and without real-time 3D IGRT. The geometric accuracy was quantified through phantom experiments. A literature search was conducted to identify additional real-time 3D IGRT methods that could be clinically implemented in the near future. The real-time 3D IGRT methods were successfully clinically implemented and have been used to treat more than 200 patients. Systematic target position shifts were observed using all 3 methods. Dose reconstruction demonstrated that the delivered dose is closer to the planned dose with real-time 3D IGRT than without real-time 3D IGRT. In addition, compromised target dose coverage and variable normal tissue doses were found without real-time 3D IGRT. The geometric accuracy results with real-time 3D IGRT had a mean error of <0.5 mm and a standard deviation of <1.1 mm. Numerous additional articles exist that describe real-time 3D IGRT methods using standard-equipped radiation therapy systems that could also be clinically implemented. Multiple clinical implementations of real-time 3D IGRT on standard-equipped cancer radiation therapy systems have been demonstrated. Many more approaches that could be implemented were identified. 
These solutions provide a pathway for the broader adoption of methods to make radiation therapy more accurate, impacting tumor and normal tissue dose, margins, and ultimately patient outcomes. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Ling; Zhao, Haihua; Kim, Seung Jun

    In this study, the classical Welander's oscillatory natural circulation problem is investigated using high-order numerical methods. As originally studied by Welander, the fluid motion in a differentially heated fluid loop can exhibit stable, weakly unstable, and strongly unstable modes. A theoretical stability map was also derived in the original stability analysis. Numerical results obtained in this paper show very good agreement with Welander's theoretical derivations. For stable cases, numerical results from both the high-order and low-order numerical methods agree well with the analytically derived non-dimensional flow rate, with the high-order numerical methods giving much smaller numerical errors than the low-order methods. For stability analysis, the high-order numerical methods predicted the stability map perfectly, while the low-order numerical methods failed to do so: all theoretically unstable cases were predicted by the low-order methods to be stable. The results obtained in this paper are strong evidence of the benefits of using high-order numerical methods over low-order ones when simulating natural circulation phenomena, which have gained increasing interest in many future nuclear reactor designs.
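    The order-of-accuracy gap the study reports can be illustrated on a simple oscillatory test equation y' = i*w*y (a generic illustration, unrelated to the loop model itself): forward Euler both amplifies and dephases the oscillation, while classical RK4 tracks it to high accuracy at the same step size:

```python
import numpy as np

def step_euler(y, f, dt):
    return y + dt * f(y)

def step_rk4(y, f, dt):
    k1 = f(y)
    k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2)
    k4 = f(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

w = 2.0
f = lambda y: 1j * w * y        # exact solution: y(t) = exp(i*w*t)
T, n = 5.0, 500
dt = T / n
y_e = y_r = 1.0 + 0j
for _ in range(n):
    y_e = step_euler(y_e, f, dt)
    y_r = step_rk4(y_r, f, dt)
exact = np.exp(1j * w * T)
err_euler = abs(y_e - exact)    # first-order: visible amplitude/phase drift
err_rk4 = abs(y_r - exact)      # fourth-order: near machine-level error here
```

    For marginally (un)stable oscillatory dynamics like Welander's loop, the spurious amplification of a low-order scheme can mask a physical instability, which is consistent with the low-order methods misclassifying unstable cases as stable.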

  6. Approach to developing numeric water quality criteria for ...

    EPA Pesticide Factsheets

    Human activities on land increase nutrient loads to coastal waters, which can increase phytoplankton production and biomass and potentially cause harmful ecological effects. States can adopt numeric water quality criteria into their water quality standards to protect the designated uses of their coastal waters from eutrophication impacts. The first objective of this study was to provide an approach for developing numeric water quality criteria for coastal waters based on archived SeaWiFS ocean color satellite data. The second objective was to develop an approach for transferring water quality criteria assessments to newer ocean color satellites such as MODIS and MERIS. Spatial and temporal measures of SeaWiFS, MODIS, and MERIS chlorophyll-a (ChlRS-a, mg m-3) were resolved across Florida’s coastal waters between 1998 and 2009. Annual geometric means of SeaWiFS ChlRS-a were evaluated to determine a quantitative reference baseline from the 90th percentile of the annual geometric means. A method for transferring to multiple ocean color sensors was implemented with SeaWiFS as the reference instrument. The ChlRS-a annual geometric means for each coastal segment from MODIS and MERIS were regressed against SeaWiFS to provide a similar response among all three satellites. Standardization factors for each coastal segment were calculated based on differences between 90th percentiles from SeaWiFS to MODIS and SeaWiFS to MERIS. This transfer approach allowed for futu

  7. Radiative transfer calculated from a Markov chain formalism

    NASA Technical Reports Server (NTRS)

    Esposito, L. W.; House, L. L.

    1978-01-01

    The theory of Markov chains is used to formulate the radiative transport problem in a general way by modeling the successive interactions of a photon as a stochastic process. Under the minimal requirement that the stochastic process is a Markov chain, the determination of the diffuse reflection or transmission from a scattering atmosphere is equivalent to the solution of a system of linear equations. This treatment is mathematically equivalent to, and thus has many of the advantages of, Monte Carlo methods, but can be considerably more rapid than Monte Carlo algorithms for numerical calculations in particular applications. We have verified the speed and accuracy of this formalism for the standard problem of finding the intensity of scattered light from a homogeneous plane-parallel atmosphere with an arbitrary phase function for scattering. Accurate results over a wide range of parameters were obtained with computation times comparable to those of a standard 'doubling' routine. The generality of this formalism thus allows fast, direct solutions to problems that were previously soluble only by Monte Carlo methods. Some comparisons are made with respect to integral equation methods.
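    The linear-algebra core of such a formalism is that of an absorbing Markov chain: with Q the transient-to-transient (scattering) transition block and R the transient-to-absorbing (escape) block, the fundamental matrix N = (I - Q)^{-1} gives expected numbers of visits, and B = N R gives the escape probabilities. The tiny three-layer example below is illustrative; its probabilities are invented, not taken from the paper:

```python
import numpy as np

# Toy 3-layer atmosphere: transient states are the layers a photon scatters in,
# absorbing states are "reflected" (escapes top) and "transmitted" (escapes
# bottom). Each row of [Q | R] sums to 1.
Q = np.array([[0.1, 0.3, 0.0],    # layer-to-layer scattering probabilities
              [0.3, 0.1, 0.3],
              [0.0, 0.3, 0.1]])
R = np.array([[0.6, 0.0],         # layer -> (reflected, transmitted)
              [0.15, 0.15],
              [0.0, 0.6]])

# Fundamental matrix N = (I - Q)^{-1}: expected visits to each layer.
N = np.linalg.solve(np.eye(3) - Q, np.eye(3))
B = N @ R                         # escape probabilities from each starting layer
```

    Solving this small linear system replaces the many random walks a Monte Carlo code would need, which is the speed advantage the paper describes.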

  8. Feasibility of Measuring Mean Vertical Motion for Estimating Advection. Chapter 6

    NASA Technical Reports Server (NTRS)

    Vickers, Dean; Mahrt, L.

    2005-01-01

    Numerous recent studies calculate horizontal and vertical advection terms for budget studies of net ecosystem exchange of carbon. One potential uncertainty in such studies is the estimate of mean vertical motion. This work addresses the reliability of vertical advection estimates by contrasting the vertical motion obtained from the standard practice of measuring the vertical velocity and applying a tilt correction with the vertical motion calculated from measurements of the horizontal divergence of the flow using a network of towers. Results are compared for three different tilt correction methods. Estimates of mean vertical motion are sensitive to the choice of tilt correction method. The short-term mean (10 to 60 minutes) vertical motion based on the horizontal divergence is more realistic than the estimates derived from the standard practice. The divergence shows long-term mean (days to months) sinking motion at the site, apparently due to the surface roughness change. Because all the tilt correction methods rely on the assumption that the long-term mean vertical motion is zero for a given wind direction, they fail to reproduce the vertical motion based on the divergence.

  9. A well-balanced finite volume scheme for the Euler equations with gravitation. The exact preservation of hydrostatic equilibrium with arbitrary entropy stratification

    NASA Astrophysics Data System (ADS)

    Käppeli, R.; Mishra, S.

    2016-03-01

    Context. Many problems in astrophysics feature flows which are close to hydrostatic equilibrium. However, standard numerical schemes for compressible hydrodynamics may be deficient in approximating this stationary state, where the pressure gradient is nearly balanced by gravitational forces. Aims: We aim to develop a second-order well-balanced scheme for the Euler equations. The scheme is designed to mimic a discrete version of the hydrostatic balance. It therefore can resolve a discrete hydrostatic equilibrium exactly (up to machine precision) and propagate perturbations, on top of this equilibrium, very accurately. Methods: A local second-order hydrostatic equilibrium preserving pressure reconstruction is developed. Combined with a standard central gravitational source term discretization and numerical fluxes that resolve stationary contact discontinuities exactly, the well-balanced property is achieved. Results: The resulting well-balanced scheme is robust and simple enough to be very easily implemented within any existing computer code that solves time explicitly or implicitly the compressible hydrodynamics equations. We demonstrate the performance of the well-balanced scheme for several astrophysically relevant applications: wave propagation in stellar atmospheres, a toy model for core-collapse supernovae, convection in carbon shell burning, and a realistic proto-neutron star.

  10. Newton-Krylov-Schwarz: An implicit solver for CFD

    NASA Technical Reports Server (NTRS)

    Cai, Xiao-Chuan; Keyes, David E.; Venkatakrishnan, V.

    1995-01-01

    Newton-Krylov methods and Krylov-Schwarz (domain decomposition) methods have begun to become established in computational fluid dynamics (CFD) over the past decade. The former employ a Krylov method inside of Newton's method in a Jacobian-free manner, through directional differencing. The latter employ an overlapping Schwarz domain decomposition to derive a preconditioner for the Krylov accelerator that relies primarily on local information, for data-parallel concurrency. They may be composed as Newton-Krylov-Schwarz (NKS) methods, which seem particularly well suited for solving nonlinear elliptic systems in high-latency, distributed-memory environments. We give a brief description of this family of algorithms, with an emphasis on domain decomposition iterative aspects. We then describe numerical simulations with Newton-Krylov-Schwarz methods on aerodynamics applications emphasizing comparisons with a standard defect-correction approach, subdomain preconditioner consistency, subdomain preconditioner quality, and the effect of a coarse grid.
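    The "Jacobian-free manner, through directional differencing" can be made concrete: a Krylov solver only ever needs Jacobian-vector products, and each one is approximated by a single extra residual evaluation, J(u)v ≈ (F(u + eps*v) - F(u))/eps. A minimal check against an analytic Jacobian (toy residual, not a CFD one):

```python
import numpy as np

def jacobian_free_jv(F, u, v, eps=1e-7):
    """Matrix-free Jacobian-vector product via directional differencing,
    J(u) v ~ (F(u + eps*v) - F(u)) / eps: the core trick that lets a Krylov
    method run inside Newton's method without ever forming the Jacobian."""
    return (F(u + eps * v) - F(u)) / eps

# Small nonlinear residual with a known analytic Jacobian for comparison.
F = lambda u: np.array([u[0] ** 2 + u[1], np.sin(u[0]) + u[1] ** 3])
J = lambda u: np.array([[2 * u[0], 1.0], [np.cos(u[0]), 3 * u[1] ** 2]])

u = np.array([0.5, 1.2])
v = np.array([1.0, -2.0])
approx = jacobian_free_jv(F, u, v)
exact = J(u) @ v
```

    In an NKS solver, this product feeds a Krylov iteration (e.g. GMRES) whose Schwarz preconditioner is built from local subdomain information only.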

  11. Solution of a few nonlinear problems in aerodynamics by the finite elements and functional least squares methods. Ph.D. Thesis - Paris Univ.; [mathematical models of transonic flow using nonlinear equations

    NASA Technical Reports Server (NTRS)

    Periaux, J.

    1979-01-01

    The numerical simulation of the transonic flows of idealized fluids and of incompressible viscous fluids by nonlinear least squares methods is presented. The nonlinear equations, the boundary conditions, and the various constraints controlling the two types of flow are described. The standard iterative methods for solving a quasi-elliptic nonlinear partial differential equation are reviewed, with emphasis placed on two examples: the fixed point method applied to the Gelder functional in the case of compressible subsonic flows, and the Newton method used in the technique of decomposition of the lifting potential. The new abstract least squares method is discussed. It consists of replacing the nonlinear equation by a minimization problem in an H^{-1}-type Sobolev space.

  12. Aircraft directional stability and vertical tail design: A review of semi-empirical methods

    NASA Astrophysics Data System (ADS)

    Ciliberti, Danilo; Della Vecchia, Pierluigi; Nicolosi, Fabrizio; De Marco, Agostino

    2017-11-01

    Aircraft directional stability and control are closely tied to vertical tail design. The safety, performance, and flight qualities of an aircraft also depend on correct empennage sizing. Specifically, the vertical tail is responsible for the aircraft's yaw stability and control. If these characteristics are not well balanced, the entire aircraft design may fail. Stability and control are often evaluated, especially in the preliminary design phase, with semi-empirical methods based on the results of experimental investigations performed in past decades, occasionally merged with theoretical assumptions. This paper reviews the standard semi-empirical methods usually applied to estimate airplane directional stability derivatives in preliminary design, highlighting the advantages and drawbacks of these approaches, which were developed from wind tunnel tests performed mainly on fighter airplane configurations of the first decades of the past century, and discussing their applicability to current transport aircraft configurations. Recent investigations by the authors have shown the limits of these methods, proving the existence of aerodynamic interference effects in sideslip conditions that are not adequately captured by the classical formulations. The article continues with a concise review of numerical methods for aerodynamics and their applicability in aircraft design, highlighting how Reynolds-Averaged Navier-Stokes (RANS) solvers are well suited to attain reliable results in attached flow conditions with reasonable computational times. From the results of RANS simulations on a modular model of a representative regional turboprop airplane layout, the authors have developed a modern method to evaluate the vertical tail and fuselage contributions to aircraft directional stability.
The investigation of the modular model has permitted an effective analysis of the aerodynamic interference effects by moving, changing, and expanding the available airplane components. Wind tunnel tests over a wide range of airplane configurations have been used to validate the numerical approach. The comparison between the proposed method and the standard semi-empirical methods available in the literature demonstrates the reliability of the new approach, according to the experimental data collected in the wind tunnel test campaign.
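For context on the semi-empirical approach the authors critique, the classical estimate of the vertical-tail contribution to the yaw-stiffness derivative is often written as Cn_beta,V ≈ a_v · eta_v · (1 + dsigma/dbeta) · S_v·l_v / (S·b). A minimal sketch of that textbook formula follows; the function name, default values, and sign convention for the sidewash gradient are illustrative, not taken from the paper:

```python
def cn_beta_vertical_tail(a_v, S_v, l_v, S, b, eta_v=0.95, dsigma_dbeta=0.1):
    """Classical semi-empirical estimate of the vertical-tail contribution
    to the directional stability derivative Cn_beta (per radian).

    a_v           vertical-tail lift-curve slope [1/rad]
    S_v, l_v      tail area [m^2] and moment arm to the c.g. [m]
    S, b          wing reference area [m^2] and wing span [m]
    eta_v         dynamic-pressure ratio at the tail
    dsigma_dbeta  sidewash gradient (fuselage/wing interference effect)
    """
    tail_volume = (S_v * l_v) / (S * b)  # vertical-tail volume coefficient
    return a_v * eta_v * (1.0 + dsigma_dbeta) * tail_volume
```

The RANS-based method described in the abstract replaces the tabulated interference factors in expressions like this with values extracted from simulations of a modular airplane model.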

  13. Modeling cellular compartmentation in one-carbon metabolism

    PubMed Central

    Scotti, Marco; Stella, Lorenzo; Shearer, Emily J.; Stover, Patrick J.

    2015-01-01

    Folate-mediated one-carbon metabolism (FOCM) is associated with risk for numerous pathological states including birth defects, cancers, and chronic diseases. Although the enzymes that constitute the biological pathways have been well described and their interdependency through the shared use of folate cofactors appreciated, the biological mechanisms underlying disease etiologies remain elusive. The FOCM network is highly sensitive to nutritional status of several B-vitamins and numerous penetrant gene variants that alter network outputs, but current computational approaches do not fully capture the dynamics and stochastic noise of the system. Combining the stochastic approach with a rule-based representation will help model the intrinsic noise displayed by FOCM, address the limited flexibility of standard simulation methods for coarse-graining the FOCM-associated biochemical processes, and manage the combinatorial complexity emerging from reactions within FOCM that would otherwise be intractable. PMID:23408533
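A stochastic treatment of a reaction network such as FOCM typically means a Gillespie-type simulation. The following is a minimal sketch of the direct-method stochastic simulation algorithm on a toy birth-death system standing in for a single folate pool; the species, rates, and stoichiometry are invented for illustration and are not the FOCM model:

```python
import math
import random

def gillespie_ssa(x0, rates, stoich, t_end, seed=0):
    """Minimal Gillespie direct-method SSA.

    x0:     initial molecule counts per species
    rates:  list of propensity functions, each taking the state vector
    stoich: per-reaction list of (species index, count change) pairs
    """
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    trajectory = [(t, tuple(x))]
    while t < t_end:
        props = [r(x) for r in rates]
        a0 = sum(props)
        if a0 <= 0.0:          # no reaction can fire; system is frozen
            break
        # exponentially distributed waiting time to the next reaction
        t += -math.log(1.0 - rng.random()) / a0
        # pick a reaction with probability proportional to its propensity
        u, cum, j = rng.random() * a0, 0.0, 0
        for j, p in enumerate(props):
            cum += p
            if u < cum:
                break
        for i, dv in stoich[j]:
            x[i] += dv
        trajectory.append((t, tuple(x)))
    return trajectory

# Toy example: production at constant rate 2.0, first-order loss 0.5*X.
traj = gillespie_ssa(
    x0=[10],
    rates=[lambda x: 2.0, lambda x: 0.5 * x[0]],
    stoich=[[(0, +1)], [(0, -1)]],
    t_end=5.0,
)
```

The rule-based representation mentioned in the abstract would generate the `rates` and `stoich` entries automatically from reaction rules rather than enumerating them by hand, which is what keeps the combinatorial complexity manageable.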

  14. Databases and coordinated research projects at the IAEA on atomic processes in plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braams, Bastiaan J.; Chung, Hyun-Kyung

    2012-05-25

    The Atomic and Molecular Data Unit at the IAEA works with a network of national data centres to encourage and coordinate production and dissemination of fundamental data for atomic, molecular and plasma-material interaction (A+M/PMI) processes that are relevant to the realization of fusion energy. The Unit maintains numerical and bibliographical databases and has started a Wiki-style knowledge base. The Unit also contributes to A+M database interface standards and provides a search engine that offers a common interface to multiple numerical A+M/PMI databases. Coordinated Research Projects (CRPs) bring together fusion energy researchers and atomic, molecular and surface physicists for joint work towards the development of new data and new methods. The databases and current CRPs on A+M/PMI processes are briefly described here.

  15. Simple method to set up low eccentricity initial data for moving puncture simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tichy, Wolfgang; Marronetti, Pedro

    2011-01-15

    We introduce two new eccentricity measures to analyze numerical simulations. Unlike earlier definitions, these eccentricity measures involve no free parameters, which makes them easy to use. We show how relatively inexpensive grid setups can be used to estimate the eccentricity during the early inspiral phase. Furthermore, we compare standard puncture data and post-Newtonian data in ADMTT gauge. We find that the two use different coordinates. Thus low-eccentricity initial momentum parameters for a given separation measured in ADMTT coordinates are hard to use in puncture data, because it is not known how the separation in puncture coordinates is related to the separation in ADMTT coordinates. As a remedy, we provide a simple approach that allows us to iterate the momentum parameters until our numerical simulations result in acceptably low eccentricities.
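The paper's parameter-free eccentricity measures are defined from the simulation data itself; a much cruder Newtonian proxy, shown here only to convey the idea of reading eccentricity off oscillations of the coordinate separation, is the classic ratio built from the radial turning points (this is not the authors' definition):

```python
def eccentricity_estimate(separations):
    """Crude Newtonian eccentricity proxy from a list of coordinate
    separations sampled over at least one radial oscillation:
    e ~ (r_max - r_min) / (r_max + r_min).
    Illustrative only; gauge effects make this unreliable in
    strong-field numerical-relativity data."""
    r_max, r_min = max(separations), min(separations)
    return (r_max - r_min) / (r_max + r_min)
```

An iteration of the kind the abstract describes would evolve the binary briefly, evaluate such a measure, and adjust the initial momentum parameters until the estimate drops below a tolerance.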

  16. Shear Resistance between Concrete-Concrete Surfaces

    NASA Astrophysics Data System (ADS)

    Kovačovic, Marek

    2013-12-01

    The application of precast beams and cast-in-situ structural members cast at different times has been typical of bridges and buildings for many years. A load-bearing frame consists of a set of prestressed precast beams supported by columns and diaphragms, joined with an additionally cast slab deck. This article focuses on the theoretical and experimental analyses of the shear resistance at such an interface. The first part of the paper deals with the state-of-the-art knowledge of the composite behaviour of concrete-concrete structures and a comparison of the numerical methods introduced in the relevant standards. In the experimental part, a set of specimens with different interface treatments was tested until failure in order to predict the composite behaviour of coupled beams. The experimental results were compared with a numerical analysis performed by means of FEM-based nonlinear software.
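One of the standard expressions compared in such studies is the interface shear resistance of Eurocode 2 (EN 1992-1-1, clause 6.2.5), which combines adhesion, friction, and reinforcement clamping terms. A minimal sketch follows; the default values are illustrative and this is not design guidance:

```python
import math

def interface_shear_resistance(c, mu, f_ctd, sigma_n, rho, f_yd,
                               alpha_deg=90.0, nu=0.5, f_cd=20.0):
    """Design shear resistance of a concrete-to-concrete interface,
    in the spirit of the EN 1992-1-1 (6.2.5) expression
        v_Rdi = c*f_ctd + mu*sigma_n + rho*f_yd*(mu*sin(a) + cos(a)),
    capped at 0.5*nu*f_cd. All stresses in MPa.

    c, mu    adhesion and friction coefficients for the surface roughness
    f_ctd    design tensile strength of the weaker concrete
    sigma_n  normal stress acting on the interface (compression positive)
    rho      reinforcement ratio crossing the interface
    f_yd     design yield strength of that reinforcement
    alpha    inclination of the reinforcement to the interface
    """
    a = math.radians(alpha_deg)
    v_rdi = c * f_ctd + mu * sigma_n + rho * f_yd * (mu * math.sin(a) + math.cos(a))
    return min(v_rdi, 0.5 * nu * f_cd)
```

The coefficients c and mu depend on the interface treatment (smooth, rough, indented), which is exactly the variable the experimental campaign in the abstract investigates.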

  17. (Machine) learning to do more with less

    NASA Astrophysics Data System (ADS)

    Cohen, Timothy; Freytsis, Marat; Ostdiek, Bryan

    2018-02-01

    Determining the best method for training a machine learning algorithm is critical to maximizing its ability to classify data. In this paper, we compare the standard "fully supervised" approach (which relies on knowledge of event-by-event truth-level labels) with a recent proposal that instead utilizes class ratios as the only discriminating information provided during training. This so-called "weakly supervised" technique has access to less information than the fully supervised method and yet is still able to yield impressive discriminating power. In addition, weak supervision seems particularly well suited to particle physics since quantum mechanics is incompatible with the notion of mapping an individual event onto any single Feynman diagram. We examine the technique in detail — both analytically and numerically — with a focus on the robustness to issues of mischaracterizing the training samples. Weakly supervised networks turn out to be remarkably insensitive to a class of systematic mismodeling. Furthermore, we demonstrate that the event level outputs for weakly versus fully supervised networks are probing different kinematics, even though the numerical quality metrics are essentially identical. This implies that it should be possible to improve the overall classification ability by combining the output from the two types of networks. For concreteness, we apply this technology to a signature of beyond the Standard Model physics to demonstrate that all these impressive features continue to hold in a scenario of relevance to the LHC. Example code is provided on GitHub.
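The core of weak supervision is that only batch-level class ratios enter the objective, never per-event labels. A minimal sketch of such a loss term is given below; the function and the squared-error form are illustrative, not the paper's exact objective:

```python
import math

def weak_supervision_loss(scores, target_fraction):
    """Weakly supervised objective for one batch: penalize the mismatch
    between the mean predicted signal probability over the batch and the
    known class ratio of that batch. No event-by-event truth labels are
    used, in contrast to a fully supervised cross-entropy loss."""
    preds = [1.0 / (1.0 + math.exp(-s)) for s in scores]  # sigmoid outputs
    batch_fraction = sum(preds) / len(preds)
    return (batch_fraction - target_fraction) ** 2
```

Training then sums this loss over many batches with different known signal fractions, which is what lets the network recover event-level discrimination from aggregate information only.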

  18. Animal Allergens and Their Presence in the Environment

    PubMed Central

    Zahradnik, Eva; Raulf, Monika

    2014-01-01

    Exposure to animal allergens is a major risk factor for sensitization and allergic diseases. Besides mites and cockroaches, the most important animal allergens are derived from mammals. Cat and dog allergies affect the general population, whereas allergies to rodents or cattle are mainly an occupational problem. Exposure to animal allergens is not limited to direct contact with animals. Based on their aerodynamic properties, mammalian allergens easily become airborne, attach to clothing and hair, and can be spread from one environment to another. For example, the major cat allergen Fel d 1 has frequently been found in homes without pets and in public buildings, including schools, day-care centers, and hospitals. Allergen concentrations in a particular environment show high variability depending on numerous factors. Assessment of allergen exposure levels is a stepwise process that involves dust collection, allergen quantification, and data analysis. Whereas a number of different dust sampling strategies are used, ELISA assays have prevailed in recent years as the standard technique for quantification of allergen concentrations. This review focuses on allergens arising from domestic, farm, and laboratory animals and describes the ubiquity of mammalian allergens in the human environment. It includes an overview of exposure assessment studies carried out in different indoor settings (homes, schools, workplaces) using numerous sampling and analytical methods and summarizes significant factors influencing exposure levels. However, methodological differences among studies have contributed to the variability of the findings and make comparisons between studies difficult. Therefore, a general standardization of methods is needed and recommended. PMID:24624129
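ELISA quantification reads concentrations off a fitted standard curve, commonly a four-parameter logistic (4PL). A minimal sketch of the curve and its inversion follows; in practice the parameters a..d would come from fitting the calibration standards of a specific assay, and the numbers here are placeholders:

```python
def four_pl(x, a, b, c, d):
    """Four-parameter logistic standard curve mapping analyte
    concentration x to optical density:
    a = response at zero concentration, d = response at saturation,
    c = inflection concentration (EC50), b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def invert_four_pl(y, a, b, c, d):
    """Back-calculate the allergen concentration from a measured
    optical density y on the fitted curve."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)
```

Samples whose optical density falls outside the calibrated range of the standards cannot be back-calculated reliably and are normally diluted and re-assayed, which is one of the methodological choices the review flags as needing standardization.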

  19. Explicit methods in extended phase space for inseparable Hamiltonian problems

    NASA Astrophysics Data System (ADS)

    Pihajoki, Pauli

    2015-03-01

    We present a method for explicit leapfrog integration of inseparable Hamiltonian systems by means of an extended phase space. A suitably defined new Hamiltonian on the extended phase space leads to equations of motion that can be numerically integrated by standard symplectic leapfrog (splitting) methods. When the leapfrog is combined with coordinate mixing transformations, the resulting algorithm shows good long-term stability and error behaviour. We extend the method to non-Hamiltonian problems as well, and investigate optimal methods of projecting the extended phase space back to the original dimension. Finally, we apply the methods to a Hamiltonian problem of geodesics in a curved space, and a non-Hamiltonian problem of a forced non-linear oscillator. We compare the performance of the methods to the general-purpose differential equation solver LSODE, and to the implicit midpoint method, a symplectic one-step method. We find the extended phase space methods to compare favorably to both for the Hamiltonian problem, and to the implicit midpoint method in the case of the non-linear oscillator.
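The splitting described in the abstract can be sketched as follows: the extended Hamiltonian H(q, P) + H(Q, p) on the doubled phase space (q, p, Q, P) is integrated by alternating its two exactly solvable flows, since each half keeps its own arguments fixed. The sketch below follows this construction for a single degree of freedom and omits the coordinate-mixing maps; it is not the paper's code:

```python
def extended_leapfrog(dHdq, dHdp, q, p, h, n_steps):
    """Explicit second-order leapfrog for an inseparable Hamiltonian
    H(q, p), using the extended Hamiltonian H(q, P) + H(Q, p).
    The flow of H(q, P) advances only (p, Q); the flow of H(Q, p)
    advances only (q, P); each is therefore exact, and their Strang
    composition gives a symmetric, explicit integrator."""
    Q, P = q, p  # start both phase-space copies at the same point
    for _ in range(n_steps):
        # half step of the H(q, P) flow
        p -= 0.5 * h * dHdq(q, P)
        Q += 0.5 * h * dHdp(q, P)
        # full step of the H(Q, p) flow
        q += h * dHdp(Q, p)
        P -= h * dHdq(Q, p)
        # half step of the H(q, P) flow
        p -= 0.5 * h * dHdq(q, P)
        Q += 0.5 * h * dHdp(q, P)
    return q, p, Q, P
```

As a sanity check, feeding in the separable harmonic oscillator H = (q^2 + p^2)/2 (gradients `lambda q, p: q` and `lambda q, p: p`) reproduces the usual leapfrog behaviour, with the two copies (q, p) and (Q, P) evolving consistently; the interesting cases are inseparable H, where no standard explicit leapfrog exists.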

  20. 40 CFR 449.11 - New source performance standards (NSPS).

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... percent of available ADF. (2) Numerical effluent limitation. The new source must achieve the performance... the numeric limitations for ammonia in Table III, prior to any dilution or commingling with any non...
