Science.gov

Sample records for accurate numerical methods

  1. The development of accurate and efficient methods of numerical quadrature

    NASA Technical Reports Server (NTRS)

    Feagin, T.

    1973-01-01

    Some new methods for performing numerical quadrature of an integrable function over a finite interval are described. Each method provides a sequence of approximations of increasing order to the value of the integral. Each approximation makes use of all previously computed values of the integrand. The points at which new values of the integrand are computed are selected in such a way that the order of the approximation is maximized. The methods are compared with the quadrature methods of Clenshaw and Curtis, Gauss, Patterson, and Romberg using several examples.
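
    The idea of reusing every previously computed integrand value is familiar from Romberg integration, where each refinement of the trapezoid rule samples the integrand only at new midpoints and Richardson extrapolation raises the order. The sketch below is a minimal Python illustration of that general principle, not Feagin's method; the test integrand and number of levels are illustrative assumptions.

    ```python
    import numpy as np

    def romberg(f, a, b, levels=6):
        """Nested trapezoid rule with Richardson extrapolation.  Each
        refinement evaluates f only at the new midpoints, so every
        previously computed integrand value is reused, and extrapolation
        raises the order of the approximation at each level."""
        R = np.zeros((levels, levels))
        h = b - a
        R[0, 0] = 0.5 * h * (f(a) + f(b))
        for k in range(1, levels):
            h *= 0.5
            new_x = a + h * np.arange(1, 2**k, 2)      # only the new points
            R[k, 0] = 0.5 * R[k - 1, 0] + h * np.sum(f(new_x))
            for j in range(1, k + 1):                  # Richardson extrapolation
                R[k, j] = R[k, j - 1] + (R[k, j - 1] - R[k - 1, j - 1]) / (4**j - 1)
        return R[levels - 1, levels - 1]

    # Example: integral of exp(x) on [0, 1] is e - 1 = 1.71828...
    print(romberg(np.exp, 0.0, 1.0))
    ```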

  2. Efficient and accurate numerical methods for the Klein-Gordon-Schroedinger equations

    SciTech Connect

    Bao, Weizhu (E-mail: bao@math.nus.edu.sg); Yang, Li (E-mail: yangli@nus.edu.sg)

    2007-08-10

    In this paper, we present efficient, unconditionally stable and accurate numerical methods for approximations of the Klein-Gordon-Schroedinger (KGS) equations with/without damping terms. The key features of our methods are based on: (i) the application of a time-splitting spectral discretization for a Schroedinger-type equation in KGS; (ii) the utilization of Fourier pseudospectral discretization for spatial derivatives in the Klein-Gordon equation in KGS; (iii) solving the ordinary differential equations (ODEs) in phase space analytically under appropriately chosen transmission conditions between different time intervals, or applying Crank-Nicolson/leap-frog discretizations for the linear/nonlinear terms in the time derivatives. The numerical methods are either explicit, or implicit but explicitly solvable; they are unconditionally stable, and offer spectral accuracy in space and second-order accuracy in time. Moreover, they are time reversible and time transverse invariant when there are no damping terms in KGS, conserve (or keep the same decay rate of) the wave energy as in KGS without (or with a linear) damping term, keep the same dynamics of the mean value of the meson field, and give exact results for the plane-wave solution. Extensive numerical tests are presented to confirm the above properties of our numerical methods for KGS. Finally, the methods are applied to study solitary-wave collisions in one dimension (1D), as well as the dynamics of a 2D problem in KGS.
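
    A minimal sketch of the time-splitting Fourier spectral idea in items (i)-(ii), applied here to a one-dimensional cubic nonlinear Schroedinger equation rather than the full KGS system; the grid size, time step, and nonlinearity are illustrative assumptions only, not the scheme of the paper.

    ```python
    import numpy as np

    # Model problem: i u_t = -0.5 u_xx + |u|^2 u on a periodic domain
    # (an illustrative Schroedinger-type equation, not the full KGS system).
    L, N, dt, steps = 16 * np.pi, 256, 1e-3, 1000
    x = np.linspace(0.0, L, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)          # spectral wavenumbers
    u = np.exp(-((x - L / 2) ** 2)).astype(complex)     # initial condition

    for _ in range(steps):
        u *= np.exp(-0.5j * dt * np.abs(u) ** 2)        # half step: nonlinear term, solved exactly pointwise
        u = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(u))   # full step: kinetic term in Fourier space
        u *= np.exp(-0.5j * dt * np.abs(u) ** 2)        # half step: nonlinear term

    print("mass (conserved by the splitting):", np.sum(np.abs(u) ** 2) * (L / N))
    ```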

  3. Earthquake Rupture Dynamics using Adaptive Mesh Refinement and High-Order Accurate Numerical Methods

    NASA Astrophysics Data System (ADS)

    Kozdon, J. E.; Wilcox, L.

    2013-12-01

    Our goal is to develop scalable and adaptive (spatial and temporal) numerical methods for coupled, multiphysics problems using high-order accurate numerical methods. To do so, we are developing an open-source, parallel library known as bfam (available at http://bfam.in). The first application to be developed on top of bfam is an earthquake rupture dynamics solver using high-order discontinuous Galerkin methods and summation-by-parts finite difference methods. In earthquake rupture dynamics, wave propagation in the Earth's crust is coupled to frictional sliding on fault interfaces. This coupling is two-way, requiring the simultaneous simulation of both processes. The use of laboratory-measured friction parameters requires near-fault resolution that is 4-5 orders of magnitude higher than that needed to resolve the frequencies of interest in the volume. This, along with earlier simulations using a low-order, finite volume based adaptive mesh refinement framework, suggests that adaptive mesh refinement is ideally suited for this problem. The use of high-order methods is motivated by the high level of resolution required off the fault in the earlier low-order finite volume simulations; we believe this need for resolution is a result of the excessive numerical dissipation of low-order methods. In bfam, spatial adaptivity is handled using the p4est library and temporal adaptivity will be accomplished through local time stepping. In this presentation we will describe the guiding principles behind the library as well as verification of the code against the Southern California Earthquake Center dynamic rupture code validation test problems.

  4. Numerical system utilising a Monte Carlo calculation method for accurate dose assessment in radiation accidents.

    PubMed

    Takahashi, F; Endo, A

    2007-01-01

    A system utilising radiation transport codes has been developed to derive accurate dose distributions in a human body for radiological accidents. A suitable model is essential for such a numerical analysis. Therefore, two tools were developed to set up a 'problem-dependent' input file, defining a radiation source and an exposed person, to simulate the radiation transport in an accident with the Monte Carlo calculation codes MCNP and MCNPX. For both tools, the necessary resources are defined through a dialogue procedure on an ordinary personal computer. The tools prepare human body and source models described in the input file format of the employed Monte Carlo codes. The tools were validated for dose assessment by comparison with a past criticality accident and a hypothesized exposure. PMID:17510203

  5. A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction

    NASA Technical Reports Server (NTRS)

    Bockelie, Michael J.; Eiseman, Peter R.

    1990-01-01

    A time-accurate, general-purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction-correction method that is simple to implement and ensures the time accuracy of the grid. Time-accurate solutions of the 2-D Euler equations for an unsteady shock-vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.

  6. Accurate numerical verification of the instanton method for macroscopic quantum tunneling: Dynamics of phase slips

    SciTech Connect

    Danshita, Ippei; Polkovnikov, Anatoli

    2010-09-01

    We study the quantum dynamics of supercurrents of one-dimensional Bose gases in a ring optical lattice to verify instanton methods applied to coherent macroscopic quantum tunneling (MQT). We directly simulate the real-time quantum dynamics of supercurrents, where a coherent oscillation between two macroscopically distinct current states occurs due to MQT. The tunneling rate extracted from the coherent oscillation is compared with that given by the instanton method. We find that the instanton method is quantitatively accurate when the effective Planck's constant is sufficiently small. We also find phase slips associated with the oscillations.

  7. Third-order-accurate numerical methods for efficient, large time-step solutions of mixed linear and nonlinear problems

    SciTech Connect

    Cobb, J.W.

    1995-02-01

    There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
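
    As a concrete point of reference for the discussion of third-order Runge-Kutta methods, the sketch below implements one standard explicit third-order scheme (Heun's RK3, not necessarily one of the five examples in the report) and checks its order of accuracy on a simple linear test problem.

    ```python
    import numpy as np

    def rk3_step(f, t, y, h):
        """One step of Heun's explicit third-order Runge-Kutta method."""
        k1 = f(t, y)
        k2 = f(t + h / 3, y + h * k1 / 3)
        k3 = f(t + 2 * h / 3, y + 2 * h * k2 / 3)
        return y + h * (k1 + 3 * k3) / 4

    def integrate(f, y0, t_end, n):
        t, y, h = 0.0, y0, t_end / n
        for _ in range(n):
            y = rk3_step(f, t, y, h)
            t += h
        return y

    f = lambda t, y: -y                      # test problem y' = -y, y(0) = 1
    exact = np.exp(-1.0)
    errs = [abs(integrate(f, 1.0, 1.0, n) - exact) for n in (20, 40, 80)]
    print([errs[i] / errs[i + 1] for i in range(2)])   # ratios close to 2**3 = 8
    ```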

  8. Accurate numerical solutions of conservative nonlinear oscillators

    NASA Astrophysics Data System (ADS)

    Khan, Najeeb Alam; Khan, Nasir Uddin; Khan, Nadeem Alam

    2014-12-01

    The objective of this paper is to present an investigation of the vibration of a conservative nonlinear oscillator of the form u'' + lambda*u + u^(2n-1) + (1 + epsilon^2 u^(4m))^(1/2) = 0 for arbitrary powers n and m. The method converts the differential equation into sets of algebraic equations that are solved numerically. Results are presented for three different cases: a higher-order Duffing equation, an equation with an irrational restoring force, and a plasma physics equation. It is also found that the method is valid for any arbitrary order of n and m. Comparisons with results found in the literature show that the method gives accurate results.
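
    For readers who want a reference trajectory to compare against, the sketch below simply time-integrates the oscillator as transcribed above with a standard ODE solver; the parameter values, initial conditions, and integration interval are illustrative assumptions, and this is not the algebraic-equation method of the paper.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # u'' + lam*u + u**(2n-1) + (1 + eps**2 * u**(4m))**0.5 = 0  (as transcribed above)
    n, m, lam, eps = 1, 1, 1.0, 0.1          # illustrative parameter choices

    def rhs(t, y):
        u, v = y
        return [v, -(lam * u + u ** (2 * n - 1) + np.sqrt(1.0 + eps**2 * u ** (4 * m)))]

    sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0], max_step=0.01, rtol=1e-9, atol=1e-12)
    print("u(20) =", sol.y[0, -1])
    ```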

  9. Accurate multiscale finite element method for numerical simulation of two-phase flow in fractured media using discrete-fracture model

    NASA Astrophysics Data System (ADS)

    Zhang, Na; Yao, Jun; Huang, Zhaoqin; Wang, Yueying

    2013-06-01

    Numerical simulation in naturally fractured media is challenging because of the coexistence of porous media and fractures on multiple scales that need to be coupled. We present a new approach to reservoir simulation that gives accurate resolution of both large-scale and fine-scale flow patterns. Multiscale methods are suitable for this type of modeling, because they capture the large-scale behavior of the solution without resolving all the small-scale features. Dual-porosity models, in view of their strength and simplicity, are mainly used for sugar-cube representations of fractured media. In such a representation, the transfer function between the fracture and the matrix block can be readily calculated for water-wet media. For a mixed-wet system, the evaluation of the transfer function becomes complicated due to the effect of gravity. In this work, we use a multiscale finite element method (MsFEM) for two-phase flow in fractured media using the discrete-fracture model. By combining MsFEM with the discrete-fracture model, we aim towards a numerical scheme that facilitates fractured reservoir simulation without upscaling. MsFEM uses a standard Darcy model to approximate the pressure and saturation on a coarse grid, whereas fine-scale effects are captured through basis functions constructed by solving local flow problems using the discrete-fracture model. The accuracy and the robustness of MsFEM are shown through several examples. In the first example, we consider several small fractures in a matrix and compare the results with those obtained by the standard finite element method. Then, we apply MsFEM to more complex models. The results indicate that MsFEM is a promising path toward direct simulation of highly resolved geomodels.

  10. Accurate complex scaling of three dimensional numerical potentials

    SciTech Connect

    Cerioni, Alessandro; Genovese, Luigi; Duchemin, Ivan; Deutsch, Thierry

    2013-05-28

    The complex scaling method, which consists in continuing spatial coordinates into the complex plane, is a well-established method that allows the computation of resonant eigenfunctions of the time-independent Schroedinger operator. Whenever it is desirable to apply complex scaling to investigate resonances in physical systems defined on numerical discrete grids, the most direct approach relies on the application of a similarity transformation to the original, unscaled Hamiltonian. We show that such an approach can be conveniently implemented in the Daubechies wavelet basis set, featuring a very promising level of generality, high accuracy, and no need for artificial convergence parameters. Complex scaling of three-dimensional numerical potentials can be efficiently and accurately performed. By carrying out an illustrative resonant-state computation in the case of a one-dimensional model potential, we then show that our wavelet-based approach may disclose new exciting opportunities in the field of computational non-Hermitian quantum mechanics.
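
    The similarity-transformation approach described here is easy to demonstrate on a uniform one-dimensional finite-difference grid (rather than the Daubechies wavelet basis of the paper): scaling x -> x*exp(i*theta) multiplies the kinetic term by exp(-2i*theta) and evaluates the potential at complex coordinates, after which the complex eigenvalues of the resulting non-Hermitian matrix are computed directly. The model potential and parameters below are illustrative assumptions.

    ```python
    import numpy as np

    # Complex-scaled Hamiltonian H_theta = -0.5*exp(-2i*theta)*d^2/dx^2 + V(x*exp(i*theta))
    # assembled on a uniform finite-difference grid (illustrative only).
    def V(z):
        return (0.5 * z**2 - 0.8) * np.exp(-0.1 * z**2)   # arbitrary barrier-type model potential

    N, xmax, theta = 400, 20.0, 0.3
    x = np.linspace(-xmax, xmax, N)
    h = x[1] - x[0]

    lap = (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1)
           + np.diag(np.ones(N - 1), -1)) / h**2           # second-derivative stencil
    H = -0.5 * np.exp(-2j * theta) * lap + np.diag(V(x * np.exp(1j * theta)))

    eigs = np.linalg.eigvals(H)
    candidates = eigs[(eigs.real > 0) & (eigs.imag < 0)]   # resonances rotate into the lower half plane
    print(sorted(candidates, key=lambda e: -e.imag)[:5])   # those closest to the real axis first
    ```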

  11. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.

  12. Higher order accurate partial implicitization: An unconditionally stable fourth-order-accurate explicit numerical technique

    NASA Technical Reports Server (NTRS)

    Graves, R. A., Jr.

    1975-01-01

    The previously obtained second-order-accurate partial implicitization numerical technique used in the solution of fluid dynamic problems was modified with little complication to achieve fourth-order accuracy. A von Neumann stability analysis demonstrated the unconditional linear stability of the technique. The order of the truncation error was deduced from the Taylor series expansions of the linearized difference equations and was verified by numerical solutions to Burgers' equation. For comparison, results were also obtained for Burgers' equation using a second-order-accurate partial-implicitization scheme, as well as the fourth-order scheme of Kreiss.

  13. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope-steepening technique, which has no effect in smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification of this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical in smooth regions, and yield high resolution at discontinuities.
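
    The median function mentioned above has a convenient branch-free form, median(a, b, c) = a + minmod(b - a, c - a), which is what keeps the coding of the constraint simple. The sketch below uses it in a generic monotonicity-limited piecewise-linear (MUSCL-type) slope reconstruction; this is an illustration in the spirit of the abstract, not the paper's exact constraint or steepening procedure.

    ```python
    import numpy as np

    def minmod(a, b):
        """Zero if the arguments differ in sign, otherwise the one of smaller magnitude."""
        return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

    def median(a, b, c):
        """Middle value of three numbers, written without conditional statements."""
        return a + minmod(b - a, c - a)

    def limited_slopes(u):
        """Monotonicity-limited slopes for a piecewise-linear reconstruction
        (interior cells only): the central slope is kept between zero and
        twice the minmod of the one-sided differences."""
        dm = u[1:-1] - u[:-2]            # backward differences
        dp = u[2:] - u[1:-1]             # forward differences
        central = 0.5 * (dm + dp)
        return median(0.0, central, 2.0 * minmod(dm, dp))

    u = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.8, 0.4, 0.1])
    print(limited_slopes(u))             # zero slope at the jump and at the extremum, smooth elsewhere
    ```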

  14. Accurate, meshless methods for magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.; Raives, Matthias J.

    2016-01-01

    Recently, we explored new meshless finite-volume Lagrangian methods for hydrodynamics: the `meshless finite mass' (MFM) and `meshless finite volume' (MFV) methods; these capture advantages of both smoothed particle hydrodynamics (SPH) and adaptive mesh refinement (AMR) schemes. We extend these to include ideal magnetohydrodynamics (MHD). The MHD equations are second-order consistent and conservative. We augment these with a divergence-cleaning scheme, which maintains ∇ · B ≈ 0. We implement these in the code GIZMO, together with state-of-the-art SPH MHD. We consider a large test suite, and show that on all problems the new methods are competitive with AMR using constrained transport (CT) to ensure ∇ · B = 0. They correctly capture the growth/structure of the magnetorotational instability, MHD turbulence, and launching of magnetic jets, in some cases converging more rapidly than state-of-the-art AMR. Compared to SPH, the MFM/MFV methods exhibit convergence at fixed neighbour number, sharp shock-capturing, and dramatically reduced noise, divergence errors, and diffusion. Still, `modern' SPH can handle most test problems, at the cost of larger kernels and `by hand' adjustment of artificial diffusion. Compared to non-moving meshes, the new methods exhibit enhanced `grid noise' but reduced advection errors and diffusion, easily include self-gravity, and feature velocity-independent errors and superior angular momentum conservation. They converge more slowly on some problems (smooth, slow-moving flows), but more rapidly on others (involving advection/rotation). In all cases, we show divergence control beyond the Powell 8-wave approach is necessary, or all methods can converge to unphysical answers even at high resolution.

  15. Numerical Simulation of Natural Convection of a Nanofluid in an Inclined Heated Enclosure Using Two-Phase Lattice Boltzmann Method: Accurate Effects of Thermophoresis and Brownian Forces

    NASA Astrophysics Data System (ADS)

    Ahmed, Mahmoud; Eslamian, Morteza

    2015-07-01

    Laminar natural convection in differentially heated (β = 0°, where β is the inclination angle), inclined (β = 30° and 60°), and bottom-heated (β = 90°) square enclosures filled with a nanofluid is investigated, using a two-phase lattice Boltzmann simulation approach. The effects of the inclination angle on the Nu number and the convection heat transfer coefficient are studied. The effects of thermophoresis and Brownian forces, which create a relative drift or slip velocity between the particles and the base fluid, are included in the simulation. The effect of thermophoresis is considered using an accurate and quantitative formula proposed by the authors. Some of the existing results on natural convection are erroneous due to the use of incorrect thermophoresis models or simply ignoring the effect. Here we show that thermophoresis has a considerable effect on heat transfer augmentation in laminar natural convection. Our non-homogeneous modeling approach shows that heat transfer in nanofluids is a function of the inclination angle and the Ra number. It also reveals some details of flow behavior which cannot be captured by single-phase models. The minimum heat transfer rate is associated with β = 90° (bottom-heated), and the maximum heat transfer rate occurs at an inclination angle which varies with the Ra number.

  16. Numerical Simulation of Natural Convection of a Nanofluid in an Inclined Heated Enclosure Using Two-Phase Lattice Boltzmann Method: Accurate Effects of Thermophoresis and Brownian Forces.

    PubMed

    Ahmed, Mahmoud; Eslamian, Morteza

    2015-12-01

    Laminar natural convection in differentially heated (β = 0°, where β is the inclination angle), inclined (β = 30° and 60°), and bottom-heated (β = 90°) square enclosures filled with a nanofluid is investigated, using a two-phase lattice Boltzmann simulation approach. The effects of the inclination angle on the Nu number and the convection heat transfer coefficient are studied. The effects of thermophoresis and Brownian forces, which create a relative drift or slip velocity between the particles and the base fluid, are included in the simulation. The effect of thermophoresis is considered using an accurate and quantitative formula proposed by the authors. Some of the existing results on natural convection are erroneous due to the use of incorrect thermophoresis models or simply ignoring the effect. Here we show that thermophoresis has a considerable effect on heat transfer augmentation in laminar natural convection. Our non-homogeneous modeling approach shows that heat transfer in nanofluids is a function of the inclination angle and the Ra number. It also reveals some details of flow behavior which cannot be captured by single-phase models. The minimum heat transfer rate is associated with β = 90° (bottom-heated), and the maximum heat transfer rate occurs at an inclination angle which varies with the Ra number. PMID:26183389

  17. Numerical methods for molecular dynamics

    SciTech Connect

    Skeel, R.D.

    1991-01-01

    This report summarizes our research progress to date on the use of multigrid methods for three-dimensional elliptic partial differential equations, with particular emphasis on application to the Poisson-Boltzmann equation of molecular biophysics. This research is motivated by the need for fast and accurate numerical solution techniques for three-dimensional problems arising in physics and engineering. In many applications these problems must be solved repeatedly, and the extremely large number of discrete unknowns required to accurately approximate solutions to partial differential equations in three-dimensional regions necessitates the use of efficient solution methods. This situation makes clear the importance of developing methods which are of optimal order (or nearly so), meaning that the number of operations required to solve the discrete problem is on the order of the number of discrete unknowns. Multigrid methods are generally regarded as being in this class of methods, and are in fact provably optimal order for an increasingly large class of problems. The fundamental goal of this research is to develop a fast and accurate numerical technique, based on multi-level principles, for the solutions of the Poisson-Boltzmann equation of molecular biophysics and similar equations occurring in other applications. An outline of the report is as follows. We first present some background material, followed by a survey of the literature on the use of multigrid methods for solving problems similar to the Poisson-Boltzmann equation. A short description of the software we have developed so far is then given, and numerical results are discussed. Finally, our research plans for the coming year are presented.

  18. Two highly accurate methods for pitch calibration

    NASA Astrophysics Data System (ADS)

    Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.

    2009-11-01

    Among profile, helix, and tooth thickness, pitch is one of the most important parameters in the evaluation of an involute gear measurement. In principle, coordinate measuring machines (CMMs) and CNC-controlled gear measuring machines, as a variant of CMMs, are suited for these kinds of gear measurements. Now the Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.

  19. Exploring accurate Poisson–Boltzmann methods for biomolecular simulations

    PubMed Central

    Wang, Changhao; Wang, Jun; Cai, Qin; Li, Zhilin; Zhao, Hong-Kai; Luo, Ray

    2013-01-01

    Accurate and efficient treatment of electrostatics is a crucial step in computational analyses of biomolecular structures and dynamics. In this study, we have explored a second-order finite-difference numerical method to solve the widely used Poisson–Boltzmann equation for electrostatic analyses of realistic bio-molecules. The so-called immersed interface method was first validated and found to be consistent with the classical weighted harmonic averaging method for a diversified set of test biomolecules. The numerical accuracy and convergence behaviors of the new method were next analyzed in its computation of numerical reaction field grid potentials, energies, and atomic solvation forces. Overall similar convergence behaviors were observed as those by the classical method. Interestingly, the new method was found to deliver more accurate and better-converged grid potentials than the classical method on or nearby the molecular surface, though the numerical advantage of the new method is reduced when grid potentials are extrapolated to the molecular surface. Our exploratory study indicates the need for further improving interpolation/extrapolation schemes in addition to the developments of higher-order numerical methods that have attracted most attention in the field. PMID:24443709

  20. Accurate numerical solution of compressible, linear stability equations

    NASA Technical Reports Server (NTRS)

    Malik, M. R.; Chuang, S.; Hussaini, M. Y.

    1982-01-01

    The present investigation is concerned with a fourth order accurate finite difference method and its application to the study of the temporal and spatial stability of the three-dimensional compressible boundary layer flow on a swept wing. This method belongs to the class of compact two-point difference schemes discussed by White (1974) and Keller (1974). The method was apparently first used for solving the two-dimensional boundary layer equations. Attention is given to the governing equations, the solution technique, and the search for eigenvalues. A general purpose subroutine is employed for solving a block tridiagonal system of equations. The computer time can be reduced significantly by exploiting the special structure of two matrices.
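
    The block tridiagonal system mentioned in the final sentence is normally solved with a block variant of the Thomas algorithm (forward elimination on the diagonal blocks followed by back substitution). The sketch below is a generic dense-block implementation for illustration, not the subroutine used in the report, and it assumes the diagonal blocks are well conditioned so that no pivoting across blocks is needed.

    ```python
    import numpy as np

    def block_thomas(A, B, C, d):
        """Solve a block tridiagonal system with diagonal blocks B[i],
        sub-diagonal blocks A[i] (A[0] unused), super-diagonal blocks C[i]
        (C[-1] unused), and right-hand-side blocks d[i]."""
        n = len(B)
        Bp, dp = [B[0].copy()], [d[0].copy()]
        for i in range(1, n):                           # forward elimination
            m = A[i] @ np.linalg.inv(Bp[i - 1])
            Bp.append(B[i] - m @ C[i - 1])
            dp.append(d[i] - m @ dp[i - 1])
        x = [None] * n
        x[-1] = np.linalg.solve(Bp[-1], dp[-1])
        for i in range(n - 2, -1, -1):                  # back substitution
            x[i] = np.linalg.solve(Bp[i], dp[i] - C[i] @ x[i + 1])
        return np.concatenate(x)

    # Tiny check against a dense solve, using random 2x2 blocks.
    rng = np.random.default_rng(0)
    n, k = 4, 2
    A = [rng.standard_normal((k, k)) for _ in range(n)]
    B = [rng.standard_normal((k, k)) + 4 * np.eye(k) for _ in range(n)]
    C = [rng.standard_normal((k, k)) for _ in range(n)]
    d = [rng.standard_normal(k) for _ in range(n)]
    full = np.zeros((n * k, n * k))
    for i in range(n):
        full[i*k:(i+1)*k, i*k:(i+1)*k] = B[i]
        if i > 0:
            full[i*k:(i+1)*k, (i-1)*k:i*k] = A[i]
        if i < n - 1:
            full[i*k:(i+1)*k, (i+1)*k:(i+2)*k] = C[i]
    print(np.allclose(block_thomas(A, B, C, d), np.linalg.solve(full, np.concatenate(d))))
    ```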

  1. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic time) have seen significant improvements, and various alternative techniques have been proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, although the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed - the pseudo-Thellier protocol - which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  2. Fast and Accurate Learning When Making Discrete Numerical Estimates.

    PubMed

    Sanborn, Adam N; Beierholm, Ulrik R

    2016-04-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155

  3. Fast and Accurate Learning When Making Discrete Numerical Estimates

    PubMed Central

    Sanborn, Adam N.; Beierholm, Ulrik R.

    2016-01-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155

  4. Accurate Critical Stress Intensity Factor Griffith Crack Theory Measurements by Numerical Techniques

    PubMed Central

    Petersen, Richard C.

    2014-01-01

    Critical stress intensity factor (KIc) has been an approximation for fracture toughness using only load-cell measurements. However, artificial man-made cracks several orders of magnitude longer and wider than natural flaws have required a correction factor term (Y) that can be up to about 3 times the recorded experimental value [1-3]. In fact, over 30 years ago a National Academy of Sciences advisory board stated that empirical KIc testing was of serious concern and further requested that an accurate bulk fracture toughness method be found [4]. Now that fracture toughness can be calculated accurately by numerical integration from the load/deflection curve as resilience, work of fracture (WOF) and strain energy release (SIc) [5, 6], KIc appears to be unnecessary. However, the large body of previous KIc experimental test results found in the literature offers the opportunity for continued meta-analysis with other more practical and accurate fracture toughness results using energy methods and numerical integration. Therefore, KIc is derived from the classical Griffith Crack Theory [6] to include SIc as a more accurate term for the strain energy release rate (𝒢Ic), along with crack surface energy (γ), crack length (a), modulus (E), applied stress (σ), Y, the crack-tip plastic zone defect region (rp), and yield strength (σys), all of which can be determined from load and deflection data. Polymer-matrix discontinuous quartz-fiber-reinforced composites were prepared for flexural mechanical testing to accentuate toughness differences, comprising 3 mm fibers at volume percentages from 0 to 54.0 vol% and, at 28.2 vol%, fiber lengths from 0.0 to 6.0 mm. Results provided a new correction factor and regression analyses between several numerical-integration fracture toughness test methods to support KIc results. Further, accurate bulk KIc experimental values are compared with empirical test results found in the literature. Also, several fracture toughness mechanisms

  5. Recommendations for Achieving Accurate Numerical Simulation of Tip Clearance Flows in Transonic Compressor Rotors

    NASA Technical Reports Server (NTRS)

    VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.

    2000-01-01

    The tip clearance flows of transonic compressor rotors are important because they have a significant impact on rotor and stage performance. While numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data, because these kinds of measurements are rarely available in the detail necessary to be useful for high-speed machines. In this paper we compare measured tip clearance flow details (e.g. trajectory and radial extent) with corresponding data obtained from a numerical simulation. Recommendations for achieving accurate numerical simulation of tip clearance flows are presented based on this comparison. Laser Doppler Velocimeter (LDV) measurements acquired in a transonic compressor rotor, NASA Rotor 35, are used. The tip clearance flow field of this transonic rotor was simulated using a Navier-Stokes turbomachinery solver that incorporates an advanced k-epsilon turbulence model derived for flows that are not in local equilibrium. Comparison between measured and simulated results indicates that simulation accuracy is primarily dependent upon the ability of the numerical code to resolve important details of a wall-bounded shear layer formed by the relative motion between the over-tip leakage flow and the shroud wall. A simple method is presented for determining the strength of this shear layer.

  6. A novel numerical technique to obtain an accurate solution to the Thomas-Fermi equation

    NASA Astrophysics Data System (ADS)

    Parand, Kourosh; Yousefi, Hossein; Delkhosh, Mehdi; Ghaderi, Amin

    2016-07-01

    In this paper, a new algorithm based on the fractional order of rational Euler functions (FRE) is introduced to study the Thomas-Fermi (TF) model, which is a nonlinear singular ordinary differential equation on a semi-infinite interval. Using the quasilinearization method (QLM), the problem is converted into a sequence of linear ordinary differential equations whose solutions are obtained. For the first time, the rational Euler (RE) and FRE functions are constructed from the Euler polynomials. In addition, the equation is solved on the semi-infinite domain without truncating it to a finite domain, by taking the FRE as basis functions for the collocation method. This method reduces the solution of the problem to the solution of a system of algebraic equations. We demonstrate that the new proposed algorithm is efficient for obtaining the values of y'(0), y(x), and y'(x). Comparison with some numerical and analytical solutions shows that the present solution is highly accurate.
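
    As a point of comparison for the quantities y'(0), y(x), and y'(x) discussed above, a rough reference solution of the Thomas-Fermi equation y'' = y^(3/2)/sqrt(x), y(0) = 1, y(x -> infinity) = 0, can be obtained with a standard boundary-value solver on a truncated domain. The truncation length, mesh, and initial guess below are illustrative assumptions, and this is not the FRE collocation scheme of the paper; the well-known literature value of the initial slope is about -1.588.

    ```python
    import numpy as np
    from scipy.integrate import solve_bvp

    L = 40.0                                  # truncated stand-in for the semi-infinite domain

    def rhs(x, y):
        f = np.zeros_like(x)
        mask = x > 0
        f[mask] = np.abs(y[0, mask]) ** 1.5 / np.sqrt(x[mask])   # y'' = y^(3/2)/sqrt(x)
        return np.vstack([y[1], f])

    def bc(ya, yb):
        return np.array([ya[0] - 1.0, yb[0]])  # y(0) = 1, y(L) ~ 0

    x = np.linspace(0.0, L, 400)
    y_guess = np.vstack([np.exp(-x / 5.0), -0.2 * np.exp(-x / 5.0)])
    sol = solve_bvp(rhs, bc, x, y_guess, tol=1e-6, max_nodes=100000)
    print("converged:", sol.success, "  y'(0) ~", sol.sol(0.0)[1])
    ```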

  7. Numerical methods in structural mechanics

    NASA Astrophysics Data System (ADS)

    Obraztsov, I. F.

    The papers contained in this volume focus on numerical, numerical-analytical, and theoretical methods for dealing with strength, stability, and dynamics problems in the design of the structural elements of flight vehicles. Topics discussed include the solution of homogeneous boundary value problems for systems of ordinary differential equations modified by a difference factorization method, a study of the rupture strength of a welded joint between plates, singular solutions in mixed problems for a wedge and a half-strip, and a thermoelasticity problem for an open-profile cylindrical shell with a localized temperature field.

  8. On Numerical Methods For Hypersonic Turbulent Flows

    NASA Astrophysics Data System (ADS)

    Yee, H. C.; Sjogreen, B.; Shu, C. W.; Wang, W.; Magin, T.; Hadjadj, A.

    2011-05-01

    Proper control of numerical dissipation in numerical methods beyond the standard shock-capturing dissipation at discontinuities is an essential element for accurate and stable simulation of hypersonic turbulent flows, including combustion, and thermal and chemical nonequilibrium flows. Unlike rapidly developing shock interaction flows, turbulence computations involve long time integrations. Improper control of numerical dissipation from one time step to another would be compounded over time, resulting in the smearing of turbulent fluctuations to an unrecognizable form. Hypersonic turbulent flows around re-entry space vehicles involve mixed steady strong shocks and turbulence with unsteady shocklets that pose added computational challenges. Stiffness of the source terms and material mixing in combustion pose yet other types of numerical challenges. A low dissipative high order well-balanced scheme, which can preserve certain non-trivial steady solutions of the governing equations exactly, may help minimize some of these difficulties. For stiff reactions it is well known that the wrong propagation speed of discontinuities occurs due to the under-resolved numerical solutions in both space and time. Schemes to improve the wrong propagation speed of discontinuities for systems of stiff reacting flows remain a challenge for algorithm development. Some of the recent algorithm developments for direct numerical simulations (DNS) and large eddy simulations (LES) for the subject physics, including the aforementioned numerical challenges, will be discussed.

  9. Numerical methods for molecular dynamics. Progress report

    SciTech Connect

    Skeel, R.D.

    1991-12-31

    This report summarizes our research progress to date on the use of multigrid methods for three-dimensional elliptic partial differential equations, with particular emphasis on application to the Poisson-Boltzmann equation of molecular biophysics. This research is motivated by the need for fast and accurate numerical solution techniques for three-dimensional problems arising in physics and engineering. In many applications these problems must be solved repeatedly, and the extremely large number of discrete unknowns required to accurately approximate solutions to partial differential equations in three-dimensional regions necessitates the use of efficient solution methods. This situation makes clear the importance of developing methods which are of optimal order (or nearly so), meaning that the number of operations required to solve the discrete problem is on the order of the number of discrete unknowns. Multigrid methods are generally regarded as being in this class of methods, and are in fact provably optimal order for an increasingly large class of problems. The fundamental goal of this research is to develop a fast and accurate numerical technique, based on multi-level principles, for the solutions of the Poisson-Boltzmann equation of molecular biophysics and similar equations occurring in other applications. An outline of the report is as follows. We first present some background material, followed by a survey of the literature on the use of multigrid methods for solving problems similar to the Poisson-Boltzmann equation. A short description of the software we have developed so far is then given, and numerical results are discussed. Finally, our research plans for the coming year are presented.

  10. Numerical methods for turbulent flow

    NASA Astrophysics Data System (ADS)

    Turner, James C., Jr.

    1988-09-01

    It has generally become accepted that the Navier-Stokes equations predict the dynamic behavior of turbulent as well as laminar flows of a fluid at a point in space away from a discontinuity such as a shock wave. Turbulence is also closely related to the phenomenon of non-uniqueness of solutions of the Navier-Stokes equations. These second-order, nonlinear partial differential equations can be solved analytically for only a few simple flows. Turbulent flow fields are much too complex to lend themselves to these few analytical methods. Numerical methods, therefore, offer the only possibility of achieving a solution of the turbulent flow equations. In spite of recent advances in computer technology, the direct solution, by discrete methods, of the Navier-Stokes equations for turbulent flow fields is today, and in the foreseeable future, impossible. Thus the only economically feasible way to solve practical turbulent flow problems numerically is to use statistically averaged equations governing mean-flow quantities. The objective is to study some recent developments relating to the use of numerical methods to study turbulent flow.

  11. Numerical methods for multibody systems

    NASA Technical Reports Server (NTRS)

    Glowinski, Roland; Nasser, Mahmoud G.

    1994-01-01

    This article gives a brief summary of some results obtained by Nasser on modeling and simulation of inequality problems in multibody dynamics. In particular, the augmented Lagrangian method discussed here is applied to a constrained motion problem with impulsive inequality constraints. A fundamental characteristic of the multibody dynamics problem is the lack of global convexity of its Lagrangian. The problem is transformed into a convex analysis problem by localization (piecewise linearization), where the augmented Lagrangian has been successfully used. A model test problem is considered and a set of numerical experiments is presented.

  12. Accurate wavelength calibration method for flat-field grating spectrometers.

    PubMed

    Du, Xuewei; Li, Chaoyang; Xu, Zhe; Wang, Qiuping

    2011-09-01

    A portable spectrometer prototype is built to study wavelength calibration for flat-field grating spectrometers. An accurate calibration method called parameter fitting is presented. Both optical and structural parameters of the spectrometer are included in the wavelength calibration model, which accurately describes the relationship between wavelength and pixel position. Along with higher calibration accuracy, the proposed calibration method can provide information about errors in the installation of the optical components, which will be helpful for spectrometer alignment. PMID:21929865

  13. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.

  14. Numerical Methods for Stochastic Partial Differential Equations

    SciTech Connect

    Sharp, D.H.; Habib, S.; Mineev, M.B.

    1999-07-08

    This is the final report of a Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objectives of this proposal were (1) the development of methods for understanding and control of spacetime discretization errors in nonlinear stochastic partial differential equations, and (2) the development of new and improved practical numerical methods for the solutions of these equations. The authors have succeeded in establishing two methods for error control: the functional Fokker-Planck equation for calculating the time discretization error and the transfer integral method for calculating the spatial discretization error. In addition they have developed a new second-order stochastic algorithm for multiplicative noise, applicable to the case of colored noises, which requires only a single random sequence generation per time step. All of these results have been verified via high-resolution numerical simulations and have been successfully applied to physical test cases. They have also made substantial progress on a longstanding problem in the dynamics of unstable fluid interfaces in porous media. This work has led to highly accurate quasi-analytic solutions of idealized versions of this problem. These may be of use in benchmarking numerical solutions of the full stochastic PDEs that govern real-world problems.

  15. Numerical methods for problems in computational aeroacoustics

    NASA Astrophysics Data System (ADS)

    Mead, Jodi Lorraine

    1998-12-01

    A goal of computational aeroacoustics is the accurate calculation of noise from a jet in the far field. This work concerns the numerical aspects of accurately calculating acoustic waves over large distances and long times. More specifically, the stability, efficiency, accuracy, dispersion, and dissipation of spatial discretizations, time stepping schemes, and absorbing boundaries for the direct solution of wave propagation problems are determined. Efficient finite difference methods developed by Tam and Webb, which minimize dispersion and dissipation, are commonly used for the spatial and temporal discretization. Alternatively, high-order pseudospectral methods can be made more efficient by using the grid transformation introduced by Kosloff and Tal-Ezer. Work in this dissertation confirms that the grid transformation introduced by Kosloff and Tal-Ezer is not spectrally accurate because, in the limit, the grid transformation forces zero derivatives at the boundaries. If a small number of grid points are used, it is shown that approximations with the Chebyshev pseudospectral method with the Kosloff and Tal-Ezer grid transformation are as accurate as with the Chebyshev pseudospectral method. This result is based on the analysis of the phase and amplitude errors of these methods, and their use for the solution of a benchmark problem in computational aeroacoustics. For the grid-transformed Chebyshev method with a small number of grid points it is, however, more appropriate to compare its accuracy with that of high-order finite difference methods. This comparison, for an accuracy of 10^-3 on a benchmark problem in computational aeroacoustics, is performed for the grid-transformed Chebyshev method and the fourth-order finite difference method of Tam. Solutions with the finite difference method are as accurate as, and the finite difference method is more efficient than, the Chebyshev pseudospectral method with the grid transformation. The efficiency of the Chebyshev
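
    The Kosloff and Tal-Ezer transformation referred to above maps the Chebyshev points through x = arcsin(alpha*xi)/arcsin(alpha), which loosens the O(1/N^2) clustering of points near the boundaries and thereby relaxes the explicit time-step restriction, at the cost of the accuracy loss discussed in the abstract. A minimal sketch, in which the value of alpha is an illustrative assumption:

    ```python
    import numpy as np

    N = 64
    xi = np.cos(np.pi * np.arange(N + 1) / N)        # Chebyshev-Gauss-Lobatto points
    alpha = 0.99                                     # mapping parameter (illustrative choice)
    x = np.arcsin(alpha * xi) / np.arcsin(alpha)     # Kosloff-Tal-Ezer transformed grid

    print("min spacing, Chebyshev grid:  ", np.min(np.abs(np.diff(xi))))
    print("min spacing, transformed grid:", np.min(np.abs(np.diff(x))))
    # The larger minimum spacing of the transformed grid permits a larger
    # stable time step for explicit time stepping.
    ```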

  16. Numerical methods for supersonic astrophysical jets

    NASA Astrophysics Data System (ADS)

    Ha, Youngsoo

    2003-09-01

    The Euler equations of gas dynamics are used for the simulation of general astrophysical fluid flows, including high Mach number astrophysical jets with radiative cooling. To accurately compute supersonic jet solutions with sharp resolution of shock waves, three modern numerical methods for gas dynamics were used: (1) a second-order Godunov method in LeVeque's software package CLAWPACK, (2) the Nessyahu-Tadmor-Kurganov (NTK) central hyperbolic scheme, and (3) the WENO-LF (Weighted Essentially Non-Oscillatory Lax-Friedrichs) scheme. Then simulations of supersonic astrophysical jets were compared, first without and then with radiative cooling. CLAWPACK consists of routines for solving time-dependent nonlinear hyperbolic conservation laws based on higher-order Godunov methods and approximate Riemann problem solutions; the NTK scheme solves conservation laws using a modified Lax-Friedrichs central difference method without appealing to Riemann problem solutions; and the WENO-LF finite difference scheme is based on the Essentially Non-Oscillatory (ENO) idea, using Lax-Friedrichs flux splitting. The ENO method constructs a solution using the smoothness of the interpolating polynomial on given stencils; the WENO scheme, on the other hand, uses a convex combination of the interpolating polynomials on all candidate stencils. The third-order and fifth-order WENO-LF methods were used to simulate the high Mach number jets. Appropriate ways of incorporating radiative cooling in these numerical methods are also discussed. Interactions of supersonic jets with their environments (jet-“blob” interactions) are shown after modifying the codes to handle high Mach numbers and radiative cooling.

  17. Orbital Advection by Interpolation: A Fast and Accurate Numerical Scheme for Super-Fast MHD Flows

    SciTech Connect

    Johnson, B M; Guan, X; Gammie, F

    2008-04-11

    In numerical models of thin astrophysical disks that use an Eulerian scheme, gas orbits supersonically through a fixed grid. As a result the timestep is sharply limited by the Courant condition. Also, because the mean flow speed with respect to the grid varies with position, the truncation error varies systematically with position. For hydrodynamic (unmagnetized) disks an algorithm called FARGO has been developed that advects the gas along its mean orbit using a separate interpolation substep. This relaxes the constraint imposed by the Courant condition, which now depends only on the peculiar velocity of the gas, and results in a truncation error that is more nearly independent of position. This paper describes a FARGO-like algorithm suitable for evolving magnetized disks. Our method is second order accurate on a smooth flow and preserves ∇ · B = 0 to machine precision. The main restriction is that B must be discretized on a staggered mesh. We give a detailed description of an implementation of the code and demonstrate that it produces the expected results on linear and nonlinear problems. We also point out how the scheme might be generalized to make the integration of other supersonic/super-fast flows more efficient. Although our scheme reduces the variation of truncation error with position, it does not eliminate it. We show that the residual position dependence leads to characteristic radial variations in the density over long integrations.
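
    The essence of the interpolation substep is easiest to state in one dimension: a periodic row of cells is advected by the mean orbital motion by splitting the shift into an integer number of cells (an exact, dissipation-free circular shift) plus a fractional remainder handled by interpolation. The sketch below is a schematic illustration with linear interpolation, not the actual second-order, staggered-mesh scheme of the paper.

    ```python
    import numpy as np

    def orbital_shift(q, shift_cells):
        """Advect a periodic 1D array by `shift_cells` cell widths: the integer
        part is a circular shift (exact), and the fractional remainder is
        handled by linear interpolation between neighbouring cells."""
        n_int = int(np.floor(shift_cells))
        frac = shift_cells - n_int
        q_int = np.roll(q, n_int)                                # exact integer-cell shift
        return (1.0 - frac) * q_int + frac * np.roll(q_int, 1)   # fractional part by interpolation

    q = np.sin(2 * np.pi * np.arange(64) / 64)
    print(np.max(np.abs(orbital_shift(q, 3 * 64.0) - q)))        # whole revolutions are exact
    ```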

  18. A Simple and Accurate Method for Measuring Enzyme Activity.

    ERIC Educational Resources Information Center

    Yip, Din-Yan

    1997-01-01

    Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…

  19. Numerical methods used in fusion science numerical modeling

    NASA Astrophysics Data System (ADS)

    Yagi, M.

    2015-04-01

    The dynamics of burning plasma is very complicated physics, dominated by multi-scale and multi-physics phenomena. To understand such phenomena, numerical simulations are indispensable. Fundamentals of the numerical methods used in fusion science numerical modeling are briefly discussed in this paper. In addition, parallelization techniques such as open multi-processing (OpenMP) and message passing interface (MPI) parallel programming are introduced, and loop-level parallelization is shown as an example.

  20. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    ERIC Educational Resources Information Center

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  1. Fast and accurate determination of the Wigner rotation matrices in the fast multipole method.

    PubMed

    Dachsel, Holger

    2006-04-14

    In the rotation based fast multipole method the accurate determination of the Wigner rotation matrices is essential. The combination of two recurrence relations and the control of the error accumulations allow a very precise determination of the Wigner rotation matrices. The recurrence formulas are simple, efficient, and numerically stable. The advantages over other recursions are documented. PMID:16626188

  2. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.

  3. Accurate upwind-monotone (nonoscillatory) methods for conservation laws

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1992-01-01

    The well known MUSCL scheme of Van Leer is constructed using a piecewise linear approximation. The MUSCL scheme is second order accurate at the smooth part of the solution except at extrema where the accuracy degenerates to first order due to the monotonicity constraint. To construct accurate schemes which are free from oscillations, the author introduces the concept of upwind monotonicity. Several classes of schemes, which are upwind monotone and of uniform second or third order accuracy are then presented. Results for advection with constant speed are shown. It is also shown that the new scheme compares favorably with state of the art methods.

  4. Accurate compressed look up table method for CGH in 3D holographic display.

    PubMed

    Gao, Chuan; Liu, Juan; Li, Xin; Xue, Gaolei; Jia, Jia; Wang, Yongtian

    2015-12-28

    A computer generated hologram (CGH) should be obtained with high accuracy and high speed for 3D holographic display, and most research focuses on the high speed. In this paper, a simple and effective computation method for CGH is proposed based on Fresnel diffraction theory and a look up table. Numerical simulations and optical experiments are performed to demonstrate its feasibility. The proposed method can obtain more accurate reconstructed images with lower memory usage compared with the split look up table method and the compressed look up table method, without sacrificing computational speed in hologram generation, so it is called the accurate compressed look up table (AC-LUT) method. It is believed that the AC-LUT method is an effective way to calculate the CGH of 3D objects for real-time 3D holographic display, where a huge amount of information is required, and it could provide fast and accurate digital transmission in various dynamic optical fields in the future. PMID:26831987

  5. Accurate Method for Determining Adhesion of Cantilever Beams

    SciTech Connect

    Michalske, T.A.; de Boer, M.P.

    1999-01-08

    Using surface micromachined samples, we demonstrate the accurate measurement of cantilever beam adhesion by using test structures which are adhered over long attachment lengths. We show that this configuration has a deep energy well, such that a fracture equilibrium is easily reached. When compared to the commonly used method of determining the shortest attached beam, the present method is much less sensitive to variations in surface topography or to details of capillary drying.

  6. Accurate method for determining adhesion of cantilever beams

    SciTech Connect

    de Boer, M.P.; Michalske, T.A.

    1999-07-01

    Using surface micromachined samples, we demonstrate the accurate measurement of cantilever beam adhesion by using test structures which are adhered over long attachment lengths. We show that this configuration has a deep energy well, such that a fracture equilibrium is easily reached. When compared to the commonly used method of determining the shortest attached beam, the present method is much less sensitive to variations in surface topography or to details of capillary drying. © 1999 American Institute of Physics.

  7. Robust and Accurate Shock Capturing Method for High-Order Discontinuous Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Atkins, Harold L.; Pampell, Alyssa

    2011-01-01

    A simple yet robust and accurate approach for capturing shock waves using a high-order discontinuous Galerkin (DG) method is presented. The method uses the physical viscous terms of the Navier-Stokes equations as suggested by others; however, the proposed formulation of the numerical viscosity is continuous and compact by construction, and does not require the solution of an auxiliary diffusion equation. This work also presents two analyses that guided the formulation of the numerical viscosity and certain aspects of the DG implementation. A local eigenvalue analysis of the DG discretization applied to a shock containing element is used to evaluate the robustness of several Riemann flux functions, and to evaluate algorithm choices that exist within the underlying DG discretization. A second analysis examines exact solutions to the DG discretization in a shock containing element, and identifies a "model" instability that will inevitably arise when solving the Euler equations using the DG method. This analysis identifies the minimum viscosity required for stability. The shock capturing method is demonstrated for high-speed flow over an inviscid cylinder and for an unsteady disturbance in a hypersonic boundary layer. Numerical tests are presented that evaluate several aspects of the shock detection terms. The sensitivity of the results to model parameters is examined with grid and order refinement studies.

  8. Method for Accurately Calibrating a Spectrometer Using Broadband Light

    NASA Technical Reports Server (NTRS)

    Simmons, Stephen; Youngquist, Robert

    2011-01-01

    A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers where a typical factory wavelength calibration has been performed and a finer, more accurate calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer's wavelength range. This new method appears to be accurate to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against this predicted pattern, errors in the wavelength assignment made by the spectrometer can be determined.
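
    The channelled spectrum exploited by this calibration can be sketched with the standard two-beam interference model for an unbalanced Michelson interferometer; the expression below is offered only as background and is not necessarily the exact model used by the authors:

      \[
        I(\lambda) \;\propto\; S(\lambda)\left[\,1 + \cos\!\left(\frac{2\pi\,\Delta}{\lambda}\right)\right],
      \]

    where S(λ) is the broadband source spectrum and Δ is the optical path difference between the interferometer arms. Because the fringe positions in wavelength are fixed once Δ is known, comparing the measured pixel-by-pixel pattern against this prediction constrains the wavelength assigned to each pixel.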

  9. Final Report for "Accurate Numerical Models of the Secondary Electron Yield from Grazing-incidence Collisions".

    SciTech Connect

    Seth A Veitzer

    2008-10-21

    Effects of stray electrons are a main factor limiting performance of many accelerators. Because heavy-ion fusion (HIF) accelerators will operate in regimes of higher current and with walls much closer to the beam than accelerators operating today, stray electrons might have a large, detrimental effect on the performance of an HIF accelerator. A primary source of stray electrons is electrons generated when halo ions strike the beam pipe walls. There is some research on these types of secondary electrons for the HIF community to draw upon, but this work is missing one crucial ingredient: the effect of grazing incidence. The overall goal of this project was to develop the numerical tools necessary to accurately model the effect of grazing incidence on the behavior of halo ions in a HIF accelerator, and further, to provide accurate models of heavy ion stopping powers with applications to ICF, WDM, and HEDP experiments.

  10. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models

    NASA Astrophysics Data System (ADS)

    Blackman, Jonathan; Field, Scott E.; Galley, Chad R.; Szilágyi, Béla; Scheel, Mark A.; Tiglio, Manuel; Hemberger, Daniel A.

    2015-09-01

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic -2Yℓm waveform modes resolved by the NR code up to ℓ=8 . We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).

  11. Accurate projector calibration method by using an optical coaxial camera.

    PubMed

    Huang, Shujun; Xie, Lili; Wang, Zhangying; Zhang, Zonghua; Gao, Feng; Jiang, Xiangqian

    2015-02-01

    Digital light processing (DLP) projectors have been widely utilized to project digital structured-light patterns in 3D imaging systems. In order to obtain accurate 3D shape data, it is important to calibrate DLP projectors to obtain the internal parameters. The existing projector calibration methods have complicated procedures or low accuracy of the obtained parameters. This paper presents a novel method to accurately calibrate a DLP projector by using an optical coaxial camera. The optical coaxial geometry is realized by a plate beam splitter, so the DLP projector can be treated as a true inverse camera. A plate having discrete markers on the surface is used to calibrate the projector. The corresponding projector pixel coordinate of each marker on the plate is determined by projecting vertical and horizontal sinusoidal fringe patterns on the plate surface and calculating the absolute phase. The internal parameters of the DLP projector are obtained by the corresponding point pair between the projector pixel coordinate and the world coordinate of discrete markers. Experimental results show that the proposed method can accurately calibrate the internal parameters of a DLP projector. PMID:25967789

  12. Reverse radiance: a fast accurate method for determining luminance

    NASA Astrophysics Data System (ADS)

    Moore, Kenneth E.; Rykowski, Ronald F.; Gangadhara, Sanjay

    2012-10-01

    Reverse ray tracing from a region of interest backward to the source has long been proposed as an efficient method of determining luminous flux. The idea is to trace rays only from where the final flux needs to be known back to the source, rather than tracing in the forward direction from the source outward to see where the light goes. Once the reverse ray reaches the source, the radiance the equivalent forward ray would have represented is determined and the resulting flux computed. Although reverse ray tracing is conceptually simple, the method critically depends upon an accurate source model in both the near and far field. An overly simplified source model, such as an ideal Lambertian surface, substantially detracts from the accuracy and thus the benefit of the method. This paper will introduce an improved method of reverse ray tracing that we call Reverse Radiance that avoids assumptions about the source properties. The new method uses measured data from a Source Imaging Goniometer (SIG) that simultaneously measures near and far field luminous data. Incorporating this data into a fast reverse ray tracing integration method yields fast, accurate data for a wide variety of illumination problems.

  13. Accurate method of modeling cluster scaling relations in modified gravity

    NASA Astrophysics Data System (ADS)

    He, Jian-hua; Li, Baojiu

    2016-06-01

    We propose a new method to model cluster scaling relations in modified gravity. Using a suite of nonradiative hydrodynamical simulations, we show that the scaling relations of accumulated gas quantities, such as the Sunyaev-Zel'dovich effect (Compton-y parameter) and the x-ray Compton-y parameter, can be accurately predicted using the known results in the Λ CDM model with a precision of ˜3 % . This method provides a reliable way to analyze the gas physics in modified gravity using the less demanding and much more efficient pure cold dark matter simulations. Our results therefore have important theoretical and practical implications in constraining gravity using cluster surveys.

  14. A numerical method for cardiac mechanoelectric simulations.

    PubMed

    Pathmanathan, Pras; Whiteley, Jonathan P

    2009-05-01

    Much effort has been devoted to developing numerical techniques for solving the equations that describe cardiac electrophysiology, namely the monodomain equations and bidomain equations. Only a limited selection of publications, however, address the development of numerical techniques for mechanoelectric simulations where cardiac electrophysiology is coupled with deformation of cardiac tissue. One problem commonly encountered in mechanoelectric simulations is instability of the coupled numerical scheme. In this study, we develop a stable numerical scheme for mechanoelectric simulations. A number of convergence tests are carried out using this stable technique for simulations where deformations are of the magnitude typically observed in a beating heart. These convergence tests demonstrate that accurate computation of tissue deformation requires a nodal spacing of around 1 mm in the mesh used to calculate tissue deformation. This is a much finer computational grid than has previously been acknowledged, and has implications for the computational efficiency of the resulting numerical scheme. PMID:19263223

  15. AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)

    EPA Science Inventory

    Abstract

    A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...

  16. Comparison of methods for numerical calculation of continuum damping

    SciTech Connect

    Bowden, G. W.; Hole, M. J.; Dennis, G. R.; Könies, A.; Gorelenkov, N. N.

    2014-05-15

    Continuum resonance damping is an important factor in determining the stability of certain global modes in fusion plasmas. A number of analytic and numerical approaches have been developed to compute this damping, particularly, in the case of the toroidicity-induced shear Alfvén eigenmode. This paper compares results obtained using an analytical perturbative approach with those found using resistive and complex contour numerical approaches. It is found that the perturbative method does not provide accurate agreement with reliable numerical methods for the range of parameters examined. This discrepancy exists even in the limit where damping approaches zero. When the perturbative technique is implemented using a standard finite element method, the damping estimate fails to converge with radial grid resolution. The finite elements used cannot accurately represent the eigenmode in the region of the continuum resonance, regardless of the number of radial grid points used.

  17. Accurate optical CD profiler based on specialized finite element method

    NASA Astrophysics Data System (ADS)

    Carrero, Jesus; Perçin, Gökhan

    2012-03-01

    As the semiconductor industry is moving to very low-k1 patterning solutions, the metrology problems facing process engineers are becoming much more complex. Choosing the right optical critical dimension (OCD) metrology technique is essential for bridging the metrology gap and achieving the required manufacturing volume throughput. The critical dimension scanning electron microscope (CD-SEM) measurement is usually distorted by the high aspect ratio of the photoresist and hard mask layers. CD-SEM measurements cease to correlate with complex three-dimensional profiles, such as the cases for double patterning and FinFETs, thus necessitating sophisticated, accurate and fast computational methods to bridge the gap. In this work, a suite of computational methods that complement advanced OCD equipment and enable it to operate at higher accuracies is developed. In this article, a novel method for accurately modeling OCD profiles is presented. A finite element formulation in primal form is used to discretize the equations. The implementation uses specialized finite element spaces to solve Maxwell equations in two dimensions.

  18. Numerical methods for characterization of synchrotron radiation based on the Wigner function method

    NASA Astrophysics Data System (ADS)

    Tanaka, Takashi

    2014-06-01

    Numerical characterization of synchrotron radiation based on the Wigner function method is explored in order to accurately evaluate the light source performance. A number of numerical methods to compute the Wigner functions for typical synchrotron radiation sources such as bending magnets, undulators and wigglers, are presented, which significantly improve the computation efficiency and reduce the total computation time. As a practical example of the numerical characterization, optimization of betatron functions to maximize the brilliance of undulator radiation is discussed.

  19. Recommendations for accurate numerical blood flow simulations of stented intracranial aneurysms.

    PubMed

    Janiga, Gábor; Berg, Philipp; Beuing, Oliver; Neugebauer, Mathias; Gasteiger, Rocco; Preim, Bernhard; Rose, Georg; Skalej, Martin; Thévenin, Dominique

    2013-06-01

    The number of scientific publications dealing with stented intracranial aneurysms is rapidly increasing. Powerful computational facilities are now available; an accurate computational modeling of hemodynamics in patient-specific configurations is, however, still being sought. Furthermore, there is still no general agreement on the quantities that should be computed and on the most adequate analysis for intervention support. In this article, the accurate representation of patient geometry is first discussed, involving successive improvements. Concerning the second step, the mesh required for the numerical simulation is especially challenging when deploying a stent with very fine wire structures. Third, the description of the fluid properties is a major challenge. Finally, a well-founded quantitative analysis of the simulation results is obviously needed to support interventional decisions. In the present work, an attempt has been made to review the most important steps for a high-quality computational fluid dynamics computation of virtually stented intracranial aneurysms. This leads to concrete recommendations, whereby the obtained results are discussed not for their medical relevance but for the evaluation of their quality. This investigation might hopefully be helpful for further studies considering stent deployment in patient-specific geometries, in particular regarding the generation of the most appropriate computational model. PMID:23729530

  20. Novel dispersion tolerant interferometry method for accurate measurements of displacement

    NASA Astrophysics Data System (ADS)

    Bradu, Adrian; Maria, Michael; Leick, Lasse; Podoleanu, Adrian G.

    2015-05-01

    We demonstrate that the recently proposed master-slave interferometry method is able to provide truly dispersion-free depth profiles in a spectrometer-based set-up that can be used for accurate displacement measurements in sensing and optical coherence tomography. The proposed technique is based on correlating the channelled spectra produced by the linear camera in the spectrometer with previously recorded masks. As the technique is not based on Fourier transformations (FT), it does not require any resampling of data and is immune to any amount of dispersion left unbalanced in the system. In order to prove the tolerance of the technique to dispersion, different lengths of optical fiber are used in the interferometer to introduce dispersion, and it is demonstrated that neither the sensitivity profile versus optical path difference (OPD) nor the depth resolution is affected. In contrast, it is shown that the classical FT-based methods using calibrated data provide less accurate optical path length measurements and exhibit a quicker decay of sensitivity with OPD.

  1. Accurate camera calibration method specialized for virtual studios

    NASA Astrophysics Data System (ADS)

    Okubo, Hidehiko; Yamanouchi, Yuko; Mitsumine, Hideki; Fukaya, Takashi; Inoue, Seiki

    2008-02-01

    Virtual studio is a popular technology for TV programs that makes it possible to synchronize computer graphics (CG) with real-shot images under camera motion. Because high geometrical matching accuracy between CG and the real-shot image cannot normally be expected from a real-time system, shooting directions are sometimes compromised so that the mismatch does not become apparent. We therefore developed a hybrid camera calibration method and a CG generating system to achieve accurate geometrical matching of CG and real shots in a virtual studio. Our calibration method is intended for a camera system on a platform and tripod with rotary encoders that can measure pan/tilt angles. To solve for the camera model and initial pose, we enhanced the bundle adjustment algorithm to fit the camera model, using the pan/tilt data as known parameters and optimizing all other parameters to be invariant against the pan/tilt values. This initialization yields a highly accurate camera position and orientation consistent with any pan/tilt values. We also created a CG generator that implements the lens distortion function with GPU programming. By applying the lens distortion parameters obtained from the camera calibration process, we obtained good compositing results.

  2. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    PubMed

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases). PMID:26430979

  3. PolyPole-1: An accurate numerical algorithm for intra-granular fission gas release

    NASA Astrophysics Data System (ADS)

    Pizzocri, D.; Rabiti, C.; Luzzi, L.; Barani, T.; Van Uffelen, P.; Pastore, G.

    2016-09-01

    The transport of fission gas from within the fuel grains to the grain boundaries (intra-granular fission gas release) is a fundamental controlling mechanism of fission gas release and gaseous swelling in nuclear fuel. Hence, an accurate numerical solution of the corresponding mathematical problem needs to be included in the fission gas behaviour models used in fuel performance codes. Under the assumption of equilibrium between trapping and resolution, the process can be described mathematically by a single diffusion equation for the gas atom concentration in a grain. In this paper, we propose a new numerical algorithm (PolyPole-1) to efficiently solve the fission gas diffusion equation in time-varying conditions. The PolyPole-1 algorithm is based on the analytic modal solution of the diffusion equation for constant conditions, combined with polynomial corrective terms that embody the information on the deviation from constant conditions. The new algorithm is verified by comparing the results to a finite difference solution over a large number of randomly generated operation histories. Furthermore, comparison to state-of-the-art algorithms used in fuel performance codes demonstrates that the accuracy of PolyPole-1 is superior to other algorithms, with similar computational effort. Finally, the concept of PolyPole-1 may be extended to the solution of the general problem of intra-granular fission gas diffusion during non-equilibrium trapping and resolution, which will be the subject of future work.
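
    For orientation, the constant-condition building block referred to above is the classical modal (separation-of-variables) solution of the diffusion equation in a spherical grain of radius a with a perfect-sink boundary; the notation below is generic and is not taken from the paper:

      \[
        \frac{\partial c}{\partial t} = D\,\nabla^2 c, \qquad c(a,t)=0,
        \qquad\Longrightarrow\qquad
        c(r,t) = \sum_{n=1}^{\infty} \frac{A_n}{r}\,\sin\!\left(\frac{n\pi r}{a}\right)
                 \exp\!\left(-\frac{D\,n^{2}\pi^{2}}{a^{2}}\,t\right),
      \]

    with the coefficients A_n fixed by the initial concentration profile. As described above, PolyPole-1 starts from this type of modal expansion for constant conditions and adds polynomial corrective terms to account for time-varying conditions.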

  4. Accurate finite difference methods for time-harmonic wave propagation

    NASA Technical Reports Server (NTRS)

    Harari, Isaac; Turkel, Eli

    1994-01-01

    Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Padé approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.

  5. An Accurate Projector Calibration Method Based on Polynomial Distortion Representation

    PubMed Central

    Liu, Miao; Sun, Changku; Huang, Shujun; Zhang, Zonghua

    2015-01-01

    In structured light measurement systems or 3D printing systems, the errors caused by optical distortion of a digital projector always affect the precision performance and cannot be ignored. Existing methods to calibrate the projection distortion rely on a calibration plate and photogrammetry, so the calibration performance is largely affected by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that makes use of photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by the curve fitting method. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method is able to avoid most of the disadvantages in traditional methods and achieves a higher accuracy. This proposed method is also practically applicable to evaluating the geometric optical performance of other optical projection systems. PMID:26492247

  6. Accurate Evaluation Method of Molecular Binding Affinity from Fluctuation Frequency

    NASA Astrophysics Data System (ADS)

    Hoshino, Tyuji; Iwamoto, Koji; Ode, Hirotaka; Ohdomari, Iwao

    2008-05-01

    Exact estimation of the molecular binding affinity is significantly important for drug discovery. The energy calculation is a direct method to compute the strength of the interaction between two molecules. This energetic approach is, however, not accurate enough to evaluate a slight difference in binding affinity when distinguishing a prospective substance from dozens of candidates for medicine. Hence more accurate estimation of drug efficacy in a computer is currently demanded. Previously we proposed a concept of estimating molecular binding affinity, focusing on the fluctuation at an interface between two molecules. The aim of this paper is to demonstrate the compatibility between the proposed computational technique and experimental measurements, through several examples for computer simulations of an association of human immunodeficiency virus type-1 (HIV-1) protease and its inhibitor (an example for a drug-enzyme binding), a complexation of an antigen and its antibody (an example for a protein-protein binding), and a combination of estrogen receptor and its ligand chemicals (an example for a ligand-receptor binding). The proposed affinity estimation has proven to be a promising technique in the advanced stage of the discovery and the design of drugs.

  7. An Integrative Method for Accurate Comparative Genome Mapping

    PubMed Central

    Swidan, Firas; Rocha, Eduardo P. C; Shmoish, Michael; Pinter, Ron Y

    2006-01-01

    We present MAGIC, an integrative and accurate method for comparative genome mapping. Our method consists of two phases: preprocessing for identifying “maximal similar segments,” and mapping for clustering and classifying these segments. MAGIC's main novelty lies in its biologically intuitive clustering approach, which aims towards both calculating reorder-free segments and identifying orthologous segments. In the process, MAGIC efficiently handles ambiguities resulting from duplications that occurred before the speciation of the considered organisms from their most recent common ancestor. We demonstrate both MAGIC's robustness and scalability: the former is asserted with respect to its initial input and with respect to its parameters' values. The latter is asserted by applying MAGIC to distantly related organisms and to large genomes. We compare MAGIC to other comparative mapping methods and provide detailed analysis of the differences between them. Our improvements allow a comprehensive study of the diversity of genetic repertoires resulting from large-scale mutations, such as indels and duplications, including explicitly transposable and phagic elements. The strength of our method is demonstrated by detailed statistics computed for each type of these large-scale mutations. MAGIC enabled us to conduct a comprehensive analysis of the different forces shaping prokaryotic genomes from different clades, and to quantify the importance of novel gene content introduced by horizontal gene transfer relative to gene duplication in bacterial genome evolution. We use these results to investigate the breakpoint distribution in several prokaryotic genomes. PMID:16933978

  8. Numerical Simulation of the 2004 Indian Ocean Tsunami: Accurate Flooding and drying in Banda Aceh

    NASA Astrophysics Data System (ADS)

    Cui, Haiyang; Pietrzak, Julie; Stelling, Guus; Androsov, Alexey; Harig, Sven

    2010-05-01

    The Indian Ocean Tsunami on December 26, 2004 caused one of the largest tsunamis in recent times and led to widespread devastation and loss of life. One of the worst hit regions was Banda Aceh, the capital of the Aceh province, located in the northern part of Sumatra, 150 km from the source of the earthquake. A German-Indonesian Tsunami Early Warning System (GITEWS) (www.gitews.de) is currently under active development. The work presented here is carried out within the GITEWS framework. One of the aims of this project is the development of accurate models with which to simulate the propagation, flooding and drying, and run-up of a tsunami. In this context, TsunAWI has been developed by the Alfred Wegener Institute; it is an explicit finite element model. However, the accurate numerical simulation of flooding and drying requires the conservation of mass and momentum. This is not possible in the current version of TsunAWI. The P1NC - P1 element guarantees mass conservation in a global sense, yet as we show here it is important to guarantee mass conservation at the local level, that is, within each individual cell. Here an unstructured grid, finite volume ocean model is presented. It is derived from the P1NC - P1 element, and is shown to be mass and momentum conserving. Then a number of simulations are presented, including dam break problems and flooding over both a wet and a dry bed. Excellent agreement is found. Then we present simulations for Banda Aceh, and compare the results to on-site survey data, as well as to results from the original TsunAWI code.

  9. Numerical Computation of a Continuous-thrust State Transition Matrix Incorporating Accurate Hardware and Ephemeris Models

    NASA Technical Reports Server (NTRS)

    Ellison, Donald; Conway, Bruce; Englander, Jacob

    2015-01-01

    A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed form expressions for these derivatives may be provided. If a high fidelity dynamics model is used, which might include perturbing forces such as the gravitational effect from multiple third bodies and solar radiation pressure, then these STMs must be computed numerically. We present such a method incorporating a power hardware model and a full ephemeris model. An adaptive-step embedded eighth order Dormand-Prince numerical integrator is discussed and a method for the computation of the time of flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.

  10. A second order accurate embedded boundary method for the wave equation with Dirichlet data

    SciTech Connect

    Kreiss, H O; Petersson, N A

    2004-03-02

    The accuracy of Cartesian embedded boundary methods for the second order wave equation in general two-dimensional domains subject to Dirichlet boundary conditions is analyzed. Based on the analysis, we develop a numerical method where both the solution and its gradient are second order accurate. We avoid the small-cell stiffness problem without sacrificing the second order accuracy by adding a small artificial term to the Dirichlet boundary condition. Long-time stability of the method is obtained by adding a small fourth order dissipative term. Several numerical examples are provided to demonstrate the accuracy and stability of the method. The method is also used to solve the two-dimensional TM_z problem for Maxwell's equations posed as a second order wave equation for the electric field coupled to ordinary differential equations for the magnetic field.
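
    To make the setting concrete, the interior discretization of the second order wave equation on a uniform Cartesian grid is the standard centered scheme shown below (a generic second-order stencil; the paper's contribution lies in how the stencils and boundary data are modified in the cut cells next to the embedded Dirichlet boundary):

      \[
        \frac{u_{ij}^{n+1} - 2u_{ij}^{n} + u_{ij}^{n-1}}{\Delta t^{2}}
        = c^{2}\left(
          \frac{u_{i+1,j}^{n} - 2u_{ij}^{n} + u_{i-1,j}^{n}}{\Delta x^{2}}
          + \frac{u_{i,j+1}^{n} - 2u_{ij}^{n} + u_{i,j-1}^{n}}{\Delta y^{2}}
        \right).
      \]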

  11. Second-order accurate finite volume method for well-driven flows

    NASA Astrophysics Data System (ADS)

    Dotlić, M.; Vidović, D.; Pokorni, B.; Pušić, M.; Dimkić, M.

    2016-02-01

    We consider a finite volume method for a well-driven fluid flow in a porous medium. Due to the singularity of the well, modeling in the near-well region with standard numerical schemes results in a completely wrong total well flux and an inaccurate hydraulic head. Local grid refinement can help, but it comes at computational cost. In this article we propose two methods to address the well singularity. In the first method the flux through well faces is corrected using a logarithmic function, in a way related to the Peaceman model. Coupling this correction with a non-linear second-order accurate two-point scheme gives a greatly improved total well flux, but the resulting scheme is still inconsistent. In the second method fluxes in the near-well region are corrected by representing the hydraulic head as a sum of a logarithmic and a linear function. This scheme is second-order accurate.
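
    The logarithmic behaviour being corrected for is that of steady radial (Thiem-type) flow toward a well, for which the hydraulic head varies as (standard well hydraulics in generic notation, not the paper's formulas):

      \[
        h(r) \;=\; h_w + \frac{Q}{2\pi T}\,\ln\!\frac{r}{r_w},
      \]

    where Q is the well discharge, T the transmissivity, r_w the well radius and h_w the head at the well. A piecewise-linear numerical representation cannot resolve this near-singular profile on a coarse grid, which is why both methods described above blend a logarithmic term of this form into the near-well flux or head reconstruction.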

  12. Evaluation of the Time-Derivative Coupling for Accurate Electronic State Transition Probabilities from Numerical Simulations.

    PubMed

    Meek, Garrett A; Levine, Benjamin G

    2014-07-01

    Spikes in the time-derivative coupling (TDC) near surface crossings make the accurate integration of the time-dependent Schrödinger equation in nonadiabatic molecular dynamics simulations a challenge. To address this issue, we present an approximation to the TDC based on a norm-preserving interpolation (NPI) of the adiabatic electronic wave functions within each time step. We apply NPI and two other schemes for computing the TDC in numerical simulations of the Landau-Zener model, comparing the simulated transfer probabilities to the exact solution. Though NPI does not require the analytical calculation of nonadiabatic coupling matrix elements, it consistently yields unsigned population transfer probability errors of ∼0.001, whereas analytical calculation of the TDC yields errors of 0.0-1.0 depending on the time step, the offset of the maximum in the TDC from the beginning of the time step, and the coupling strength. The approximation of Hammes-Schiffer and Tully yields errors intermediate between NPI and the analytical scheme. PMID:26279558
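
    For context, the quantity being approximated is the time-derivative coupling between adiabatic states, and the finite-difference form attributed above to Hammes-Schiffer and Tully is the standard one (generic notation, given as background rather than as the paper's exact expressions):

      \[
        d_{jk}(t) \;=\; \Big\langle \phi_j(t)\,\Big|\,\frac{\partial}{\partial t}\,\phi_k(t)\Big\rangle
        \;\approx\;
        \frac{1}{2\,\Delta t}\Big[\big\langle \phi_j(t)\,\big|\,\phi_k(t+\Delta t)\big\rangle
        - \big\langle \phi_j(t+\Delta t)\,\big|\,\phi_k(t)\big\rangle\Big].
      \]

    The NPI scheme instead builds a norm-preserving interpolation of the adiabatic wave functions between t and t+Δt and evaluates the coupling from that interpolation, which is how it avoids analytic nonadiabatic coupling vectors while capturing the sharp spike near a surface crossing.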

  13. An accurate numerical solution to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in rivers

    NASA Astrophysics Data System (ADS)

    Stecca, Guglielmo; Siviglia, Annunziato; Blom, Astrid

    2016-07-01

    We present an accurate numerical approximation to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in one space dimension. Our solution procedure originates from the fully-unsteady matrix-vector formulation developed in [54]. The principal part of the problem is solved by an explicit Finite Volume upwind method of the path-conservative type, by which all the variables are updated simultaneously in a coupled fashion. The solution to the principal part is embedded into a splitting procedure for the treatment of frictional source terms. The numerical scheme is extended to second-order accuracy and includes a bookkeeping procedure for handling the evolution of size stratification in the substrate. We develop a concept of balancedness for the vertical mass flux between the substrate and active layer under bed degradation, which prevents the occurrence of non-physical oscillations in the grainsize distribution of the substrate. We suitably modify the numerical scheme to respect this principle. We finally verify the accuracy in our solution to the equations, and its ability to reproduce one-dimensional morphodynamics due to streamwise and vertical sorting, using three test cases. In detail, (i) we empirically assess the balancedness of vertical mass fluxes under degradation; (ii) we study the convergence to the analytical linearised solution for the propagation of infinitesimal-amplitude waves [54], which is here employed for the first time to assess a mixed-sediment model; (iii) we reproduce Ribberink's E8-E9 flume experiment [46].

  14. Liquid propellant rocket engine combustion simulation with a time-accurate CFD method

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.; Shang, H. M.; Liaw, Paul; Hutt, J.

    1993-01-01

    Time-accurate computational fluid dynamics (CFD) algorithms are among the basic requirements as an engineering or research tool for realistic simulations of transient combustion phenomena, such as combustion instability, transient start-up, etc., inside the rocket engine combustion chamber. A time-accurate pressure based method is employed in the FDNS code for combustion model development. This is in connection with other program development activities such as spray combustion model development and efficient finite-rate chemistry solution method implementation. In the present study, a second-order time-accurate time-marching scheme is employed. For better spatial resolutions near discontinuities (e.g., shocks, contact discontinuities), a 3rd-order accurate TVD scheme for modeling the convection terms is implemented in the FDNS code. Necessary modification to the predictor/multi-corrector solution algorithm in order to maintain time-accurate wave propagation is also investigated. Benchmark 1-D and multidimensional test cases, which include the classical shock tube wave propagation problems, resonant pipe test case, unsteady flow development of a blast tube test case, and H2/O2 rocket engine chamber combustion start-up transient simulation, etc., are investigated to validate and demonstrate the accuracy and robustness of the present numerical scheme and solution algorithm.

  15. IRIS: Towards an Accurate and Fast Stage Weight Prediction Method

    NASA Astrophysics Data System (ADS)

    Taponier, V.; Balu, A.

    2002-01-01

    The knowledge of the structural mass fraction (or the mass ratio) of a given stage, which affects the performance of a rocket, is essential for the analysis of new or upgraded launchers or stages, a need heightened by the rapid evolution of space programs and by the necessity of adapting them to market needs. The availability of this highly scattered variable, ranging between 0.05 and 0.15, is of primary importance at the early steps of preliminary design studies. At the start of the staging and performance studies, the lack of frozen weight data (to be obtained later from propulsion, trajectory and sizing studies) forces reliance on rough estimates, generally derived from printed sources and adapted. When needed, a consolidation can be acquired through a specific analysis activity involving several techniques and implying additional effort and time. The present empirical approach thus yields only approximate values (i.e. not necessarily accurate or consistent), inducing some inaccuracy in the results as well as, consequently, difficulties in ranking the performance of multiple options, and an increase in processing duration. This is a classical difficulty of preliminary design system studies, insufficiently discussed to date. It therefore appears highly desirable to have, for all evaluation activities, a reliable, fast and easy-to-use weight or mass fraction prediction method. Additionally, the latter should allow for a pre-selection of alternative preliminary configurations, making a global system approach possible. For that purpose, an attempt at modeling has been undertaken, whose objective was the determination of a parametric formulation of the mass fraction, to be expressed from a limited number of parameters available at the early steps of the project. It is based on the innovative use of a statistical method applicable to a variable as a function of several independent parameters. A specific polynomial generator

  16. Numerical Methods for Radiation Magnetohydrodynamics in Astrophysics

    SciTech Connect

    Klein, R I; Stone, J M

    2007-11-20

    We describe numerical methods for solving the equations of radiation magnetohydrodynamics (MHD) for astrophysical fluid flow. Such methods are essential for the investigation of the time-dependent and multidimensional dynamics of a variety of astrophysical systems, although our particular interest is motivated by problems in star formation. Over the past few years, the authors have been members of two parallel code development efforts, and this review reflects that organization. In particular, we discuss numerical methods for MHD as implemented in the Athena code, and numerical methods for radiation hydrodynamics as implemented in the Orion code. We discuss the challenges introduced by the use of adaptive mesh refinement in both codes, as well as the most promising directions for future developments.

  17. Numerical Comparison of Periodic MoM (Method of Moments) and BMIA (Banded Matrix Iteration Method)

    NASA Technical Reports Server (NTRS)

    Kim, Y.; Rodriguez, E.; Michel, T.

    1995-01-01

    The most popular numerical technique in rough surface scattering is the Method of Moments (MoM). Since the scattering patch size is finite, the edge current must be suppressed to obtain accurate scattering cross sections. Two standard ways to minimize the edge current are periodic boundary conditions and incident wave tapering. We compare the accuracy and computational requirements of these methods.

  18. Towards more accurate numerical modeling of impedance based high frequency harmonic vibration

    NASA Astrophysics Data System (ADS)

    Lim, Yee Yan; Kiong Soh, Chee

    2014-03-01

    The application of smart materials in various fields of engineering has recently become increasingly popular. For instance, the high frequency based electromechanical impedance (EMI) technique employing smart piezoelectric materials is found to be versatile in structural health monitoring (SHM). Thus far, considerable efforts have been made to study and improve the technique. Various theoretical models of the EMI technique have been proposed in an attempt to better understand its behavior. So far, the three-dimensional (3D) coupled field finite element (FE) model has proved to be the most accurate. However, large discrepancies between the results of the FE model and experimental tests, especially in terms of the slope and magnitude of the admittance signatures, continue to exist and are yet to be resolved. This paper presents a series of parametric studies using the 3D coupled field finite element method (FEM) on all properties of materials involved in the lead zirconate titanate (PZT) structure interaction of the EMI technique, to investigate their effect on the admittance signatures acquired. FE model updating is then performed by adjusting the parameters to match the experimental results. One of the main reasons for the lower accuracy, especially in terms of magnitude and slope, of previous FE models is the difficulty in determining the damping related coefficients and the stiffness of the bonding layer. In this study, using the hysteretic damping model in place of Rayleigh damping, which is used by most researchers in this field, and updated bonding stiffness, an improved and more accurate FE model is achieved. The results of this paper are expected to be useful for future study of the subject area in terms of research and application, such as modeling, design and optimization.

  19. Quantifying Methane Fluxes Simply and Accurately: The Tracer Dilution Method

    NASA Astrophysics Data System (ADS)

    Rella, Christopher; Crosson, Eric; Green, Roger; Hater, Gary; Dayton, Dave; Lafleur, Rick; Merrill, Ray; Tan, Sze; Thoma, Eben

    2010-05-01

    Methane is an important atmospheric constituent with a wide variety of sources, both natural and anthropogenic, including wetlands and other water bodies, permafrost, farms, landfills, and areas where significant petrochemical exploration, drilling, transport, processing, or refining occurs. Despite its importance to the carbon cycle, its significant impact as a greenhouse gas, and its ubiquity in modern life as a source of energy, its sources and sinks in marine and terrestrial ecosystems are only poorly understood. This is largely because high quality, quantitative measurements of methane fluxes in these different environments have not been available, due both to the lack of robust field-deployable instrumentation and to the fact that most significant sources of methane extend over large areas (from 10's to 1,000,000's of square meters) and are heterogeneous emitters - i.e., the methane is not emitted evenly over the area in question. Quantifying the total methane emissions from such sources becomes a tremendous challenge, compounded by the fact that atmospheric transport from emission point to detection point can be highly variable. In this presentation we describe a robust, accurate, and easy-to-deploy technique called the tracer dilution method, in which a known gas (such as acetylene, nitrous oxide, or sulfur hexafluoride) is released in the same vicinity as the methane emissions. Measurements of methane and the tracer gas are then made downwind of the release point, in the so-called far field, where the area of methane emissions cannot be distinguished from a point source (i.e., the two gas plumes are well mixed). In this regime, the methane emissions are given by the ratio of the two measured concentrations, multiplied by the known tracer emission rate. The challenges associated with atmospheric variability and heterogeneous methane emissions are handled automatically by the transport and dispersion of the tracer. We present detailed methane flux
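
    The far-field ratio described above reduces to simple arithmetic; in generic notation, and assuming upwind background concentrations are subtracted (a detail not spelled out in the abstract):

      \[
        F_{\mathrm{CH_4}} \;=\; F_{\mathrm{tracer}}\,
        \frac{C_{\mathrm{CH_4}} - C_{\mathrm{CH_4,\,bg}}}{C_{\mathrm{tracer}} - C_{\mathrm{tracer,\,bg}}},
      \]

    where F_tracer is the known tracer release rate and the concentrations are measured downwind where the two plumes are well mixed.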

  20. Accurate, efficient, and (iso)geometrically flexible collocation methods for phase-field models

    NASA Astrophysics Data System (ADS)

    Gomez, Hector; Reali, Alessandro; Sangalli, Giancarlo

    2014-04-01

    We propose new collocation methods for phase-field models. Our algorithms are based on isogeometric analysis, a new technology that makes use of functions from computational geometry, such as, for example, Non-Uniform Rational B-Splines (NURBS). NURBS exhibit excellent approximability and controllable global smoothness, and can represent exactly most geometries encapsulated in Computer Aided Design (CAD) models. These attributes permitted us to derive accurate, efficient, and geometrically flexible collocation methods for phase-field models. The performance of our method is demonstrated by several numerical examples of phase separation modeled by the Cahn-Hilliard equation. We feel that our method successfully combines the geometrical flexibility of finite elements with the accuracy and simplicity of pseudo-spectral collocation methods, and is a viable alternative to classical collocation methods.

  1. Physical and Numerical Model Studies of Cross-flow Turbines Towards Accurate Parameterization in Array Simulations

    NASA Astrophysics Data System (ADS)

    Wosnik, M.; Bachant, P.

    2014-12-01

    Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today--a common model is the actuator disk concept--are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal axis devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier--Stokes simulations, to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of

  2. A Numerical Method for Solving Elasticity Equations with Interfaces

    PubMed Central

    Li, Zhilin; Wang, Liqun; Wang, Wei

    2012-01-01

    Solving elasticity equations with interfaces is a challenging problem for most existing methods. Nonetheless, it has wide applications in engineering and science. An accurate and efficient method is desired. In this paper, an efficient non-traditional finite element method with non-body-fitting grids is proposed to solve elasticity equations with interfaces. The main idea is to choose the test function basis to be the standard finite element basis independent of the interface and to choose the solution basis to be piecewise linear satisfying the jump conditions across the interface. The resulting linear system of equations is shown to be positive definite under certain assumptions. Numerical experiments show that this method is second order accurate in the L∞ norm for piecewise smooth solutions. More than 1.5th order accuracy is observed for solution with singularity (second derivative blows up) on the sharp-edged interface corner. PMID:22707984

  3. TOPLHA: an accurate and efficient numerical tool for analysis and design of LH antennas

    NASA Astrophysics Data System (ADS)

    Milanesio, D.; Lancellotti, V.; Meneghini, O.; Maggiora, R.; Vecchi, G.; Bilato, R.

    2007-09-01

    Auxiliary ICRF heating systems in tokamaks often involve large complex antennas, made up of several conducting straps hosted in distinct cavities that open towards the plasma. The same holds especially true in the LH regime, wherein the antennas are comprised of arrays of many phased waveguides. Upon observing that the various cavities or waveguides couple to each other only through the EM fields existing over the plasma-facing apertures, we self-consistently formulated the EM problem by a convenient set of multiple coupled integral equations. Subsequent application of the Method of Moments yields a highly sparse algebraic system; therefore formal inversion of the system matrix is not very memory demanding, even though the number of unknowns may be quite large (typically 10^5 or so). The overall strategy has been implemented in an enhanced version of TOPICA (Torino Polytechnic Ion Cyclotron Antenna) and in a newly developed code named TOPLHA (Torino Polytechnic Lower Hybrid Antenna). Both are simulation and prediction tools for plasma facing antennas that incorporate commercial-grade 3D graphic interfaces along with an accurate description of the plasma. In this work we present the new proposed formulation along with examples of application to real life large LH antenna systems.

  4. TOPICA: an accurate and efficient numerical tool for analysis and design of ICRF antennas

    NASA Astrophysics Data System (ADS)

    Lancellotti, V.; Milanesio, D.; Maggiora, R.; Vecchi, G.; Kyrytsya, V.

    2006-07-01

    The demand for a predictive tool to help in designing ion-cyclotron radio frequency (ICRF) antenna systems for today's fusion experiments has driven the development of codes such as ICANT, RANT3D, and the early development of TOPICA (TOrino Polytechnic Ion Cyclotron Antenna) code. This paper describes the substantive evolution of TOPICA formulation and implementation that presently allow it to handle the actual geometry of ICRF antennas (with curved, solid straps, a general-shape housing, Faraday screen, etc) as well as an accurate plasma description, accounting for density and temperature profiles and finite Larmor radius effects. The antenna is assumed to be housed in a recess-like enclosure. Both goals have been attained by formally separating the problem into two parts: the vacuum region around the antenna and the plasma region inside the toroidal chamber. Field continuity and boundary conditions allow the formulation of a set of two coupled integral equations for the unknown equivalent (current) sources; then the equations are reduced to a linear system by a method of moments solution scheme employing 2D finite elements defined over a 3D non-planar surface triangular-cell mesh. In the vacuum region calculations are done in the spatial (configuration) domain, whereas in the plasma region a spectral (wavenumber) representation of fields and currents is adopted, thus permitting a description of the plasma by a surface impedance matrix. Owing to this approach, any plasma model can be used in principle, and at present the FELICE code has been employed. The natural outcomes of TOPICA are the induced currents on the conductors (antenna, housing, etc) and the electric field in front of the plasma, whence the antenna circuit parameters (impedance/scattering matrices), the radiated power and the fields (at locations other than the chamber aperture) are then obtained. An accurate model of the feeding coaxial lines is also included. The theoretical model and its TOPICA

  5. A Novel Method for the Accurate Evaluation of Poisson's Ratio of Soft Polymer Materials

    PubMed Central

    Lee, Jae-Hoon; Lee, Sang-Soo; Chang, Jun-Dong; Thompson, Mark S.; Kang, Dong-Joong; Park, Sungchan

    2013-01-01

    A new method with a simple algorithm was developed to accurately measure Poisson's ratio of soft materials such as polyvinyl alcohol hydrogel (PVA-H) with a custom experimental apparatus consisting of a tension device, a micro X-Y stage, an optical microscope, and a charge-coupled device camera. In the proposed method, the initial positions of the four vertices of an arbitrarily selected quadrilateral from the sample surface were first measured to generate a 2D 1st-order 4-node quadrilateral element for finite element numerical analysis. Next, minimum and maximum principal strains were calculated from differences between the initial and deformed shapes of the quadrilateral under tension. Finally, Poisson's ratio of PVA-H was determined by the ratio of minimum principal strain to maximum principal strain. This novel method has the advantage of accurately evaluating Poisson's ratio even with misalignment between the specimen and the experimental devices. In this study, Poisson's ratio of PVA-H was 0.44 ± 0.025 (n = 6) for 2.6–47.0% elongations, with a tendency to decrease with increasing elongation. The current evaluation method of Poisson's ratio, with its simple measurement system, can be incorporated into a real-time automated vision-tracking system to accurately evaluate the material properties of various soft materials. PMID:23737733
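
    As an illustration of the strain computation described above (not the authors' code), the following numpy sketch builds a bilinear 4-node quadrilateral from measured vertex positions, forms the small-strain tensor at the element centre, and takes the ratio of principal strains; the function name and node ordering are assumptions for this sketch.

    ```python
    import numpy as np

    def principal_strain_ratio(X, x):
        """Estimate Poisson's ratio from one 4-node quadrilateral.

        X : (4, 2) initial vertex coordinates (counter-clockwise)
        x : (4, 2) deformed vertex coordinates (same ordering)
        Returns (eps_max, eps_min, nu_estimate).
        """
        # Bilinear shape-function derivatives evaluated at the element centre
        xi = np.array([-1.0, 1.0, 1.0, -1.0])
        eta = np.array([-1.0, -1.0, 1.0, 1.0])
        dN = np.vstack([xi / 4.0, eta / 4.0])       # (2, 4): d/dxi, d/deta

        J = dN @ X                                  # reference Jacobian (2, 2)
        dN_dX = np.linalg.solve(J, dN)              # shape-function gradients w.r.t. X, Y
        H = dN_dX @ (x - X)                         # displacement gradient (up to a transpose)
        E = 0.5 * (H + H.T)                         # small-strain tensor
        eps = np.sort(np.linalg.eigvalsh(E))[::-1]  # principal strains, descending
        return eps[0], eps[1], -eps[1] / eps[0]     # nu = -(min principal)/(max principal)

    # Example: 10% axial stretch with 4% lateral contraction -> nu close to 0.4
    X = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
    print(principal_strain_ratio(X, X @ np.diag([1.10, 0.96])))
    ```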

  6. Accurate near-field calculation in the rigorous coupled-wave analysis method

    NASA Astrophysics Data System (ADS)

    Weismann, Martin; Gallagher, Dominic F. G.; Panoiu, Nicolae C.

    2015-12-01

    The rigorous coupled-wave analysis (RCWA) is one of the most successful and widely used methods for modeling periodic optical structures. It yields fast convergence of the electromagnetic far-field and has been adapted to model various optical devices and wave configurations. In this article, we investigate the accuracy with which the electromagnetic near-field can be calculated by using RCWA and explain the observed slow convergence and numerical artifacts from which it suffers, namely unphysical oscillations at material boundaries due to the Gibbs phenomenon. In order to alleviate these shortcomings, we also introduce a mathematical formulation for accurate near-field calculation in RCWA, for one- and two-dimensional straight and slanted diffraction gratings. This accurate near-field computational approach is tested and evaluated for several representative test-structures and configurations in order to illustrate the advantages provided by the proposed modified formulation of the RCWA.

  7. A numerical method of detecting singularity

    NASA Technical Reports Server (NTRS)

    Laporte, M.; Vignes, J.

    1978-01-01

    A numerical method is reported which determines a value C for the degree of conditioning of a matrix. This value is C = 0 for a singular matrix and has progressively larger values for matrices which are increasingly well-conditioned. This value is C = C_max (with C_max defined by the precision of the computer) when the matrix is perfectly well conditioned.
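
    The conditioning value C can be thought of as the number of reliable digits remaining after the condition number of the matrix is accounted for. The sketch below builds such an index from the singular values; it is a generic reconstruction of the idea, not the specific algorithm of Laporte and Vignes, and the function name is an assumption.

    ```python
    import numpy as np

    def conditioning_index(A):
        """Conditioning index in the spirit of the abstract: 0 for a singular
        matrix, larger for better-conditioned matrices, capped by the number
        of digits the floating-point precision can represent."""
        c_max = -np.log10(np.finfo(A.dtype).eps)   # digits available in this precision
        s = np.linalg.svd(A, compute_uv=False)
        if s[-1] == 0.0:
            return 0.0                             # exactly singular
        digits_lost = np.log10(s[0] / s[-1])       # log10 of the 2-norm condition number
        return float(np.clip(c_max - digits_lost, 0.0, c_max))

    # Example: a perfectly conditioned matrix vs. a nearly singular one
    A = np.eye(3)
    B = np.array([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0 + 1e-12], [1.0, 0.0, 1.0]])
    print(conditioning_index(A), conditioning_index(B))
    ```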

  8. A numerical method for predicting hypersonic flowfields

    NASA Technical Reports Server (NTRS)

    Maccormack, Robert W.; Candler, Graham V.

    1989-01-01

    The flow about a body traveling at hypersonic speed is energetic enough to cause the atmospheric gases to chemically react and reach states in thermal nonequilibrium. The prediction of hypersonic flowfields requires a numerical method capable of solving the conservation equations of fluid flow, the chemical rate equations for species formation and dissociation, and the energy transfer relations between translational and vibrational temperature states. Because the number of equations to be solved is large, the numerical method should also be as efficient as possible. The proposed paper presents a fully implicit method that fully couples the solution of the fluid flow equations with the gas physics and chemistry relations. The method flux-splits the inviscid flow terms, central-differences the viscous terms, preserves element conservation in the strong chemistry source terms, and solves the resulting block matrix equation by Gauss-Seidel line relaxation.

  9. A method for producing large, accurate, economical female molds

    SciTech Connect

    Guenter, A.; Guenter, B.

    1996-11-01

    A process in which lightweight, highly accurate, economical molds can be produced for prototype and low production runs of large parts for use in composites molding has been developed. This has been achieved by building on existing milling technology, using new materials and innovative material applications to CNC-mill large female molds directly. Any step that can be eliminated in the mold building process translates into savings in tooling costs through reduced labor and material requirements.

  10. Method and apparatus for accurately manipulating an object during microelectrophoresis

    DOEpatents

    Parvin, Bahram A.; Maestre, Marcos F.; Fish, Richard H.; Johnston, William E.

    1997-01-01

    An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage, having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage.

  11. Method and apparatus for accurately manipulating an object during microelectrophoresis

    DOEpatents

    Parvin, B.A.; Maestre, M.F.; Fish, R.H.; Johnston, W.E.

    1997-09-23

    An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage. 11 figs.

  12. Numerical Analysis of the Symmetric Methods

    NASA Astrophysics Data System (ADS)

    Xu, Ji-Hong; Zhang, A.-Li

    1995-03-01

    For the initial value problem of the special second-order ordinary differential equation y″ = f(x, y), the symmetric methods (Quinlan and Tremaine, 1990) and our methods (Xu and Zhang, 1994) are compared in detail in this paper by integrating artificial Earth-satellite orbits. We point out clearly that the accuracy of the numerical integration of the satellite orbits obtained with our methods is distinctly higher than that obtained with the same-order formulas of the symmetric methods when the integration interval is not greater than 12000 periods.
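
    For orientation, the simplest symmetric two-step scheme for y″ = f(x, y) is the classical Störmer (leapfrog) formula; the sketch below is that textbook scheme applied to a Kepler-type orbit, not either of the method families compared in this record, and all names in it are illustrative.

    ```python
    import numpy as np

    def stoermer(f, x0, y0, dy0, h, n_steps):
        """Symmetric two-step Stoermer scheme for y'' = f(x, y):
        y_{n+1} - 2 y_n + y_{n-1} = h^2 f(x_n, y_n), started with one Taylor step."""
        ys = [np.asarray(y0, dtype=float)]
        ys.append(ys[0] + h * np.asarray(dy0, dtype=float) + 0.5 * h**2 * f(x0, ys[0]))
        for n in range(1, n_steps):
            x_n = x0 + n * h
            ys.append(2.0 * ys[n] - ys[n - 1] + h**2 * f(x_n, ys[n]))
        return np.array(ys)

    # Example: planar Kepler problem y'' = -y/|y|^3, circular orbit of period 2*pi
    f = lambda x, y: -y / np.linalg.norm(y)**3
    orbit = stoermer(f, 0.0, [1.0, 0.0], [0.0, 1.0], h=1e-3, n_steps=20000)
    print(orbit[-1])   # should remain close to the unit circle
    ```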

  13. Fast and stable numerical method for neuronal modelling

    NASA Astrophysics Data System (ADS)

    Hashemi, Soheil; Abdolali, Ali

    2016-11-01

    Excitable cell modelling is of prime interest in predicting and targeting neural activity. Two main limitations in solving the governing equations are the speed and stability of the numerical method. Since there is a tradeoff between accuracy and speed, most previously presented methods for solving partial differential equations (PDEs) favour one side. Greater speed permits finer, more accurate simulations and therefore better device design. By evaluating the variables of the finite-difference equations at the proper time levels and computing the unknowns in a specific sequence, a fast, stable and accurate method for solving neural partial differential equations is introduced in this paper. Propagation of the action potential in the giant axon is studied with the proposed method and with traditional methods. The speed, consistency and stability of the methods are compared and discussed. The proposed method is as fast as forward methods and as stable as backward methods; forward methods are known as the fastest methods, while backward methods are stable under any circumstances. Complex structures can be simulated with the proposed method thanks to its speed and stability.

  14. A new class of accurate, mesh-free hydrodynamic simulation methods

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2015-06-01

    We present two new Lagrangian methods for hydrodynamics, in a systematic comparison with moving-mesh, smoothed particle hydrodynamics (SPH), and stationary (non-moving) grid methods. The new methods are designed to simultaneously capture advantages of both SPH and grid-based/adaptive mesh refinement (AMR) schemes. They are based on a kernel discretization of the volume coupled to a high-order matrix gradient estimator and a Riemann solver acting over the volume `overlap'. We implement and test a parallel, second-order version of the method with self-gravity and cosmological integration, in the code GIZMO; this maintains exact mass, energy and momentum conservation; exhibits superior angular momentum conservation compared to all other methods we study; does not require `artificial diffusion' terms; and allows the fluid elements to move with the flow, so resolution is automatically adaptive. We consider a large suite of test problems, and find that on all problems the new methods appear competitive with moving-mesh schemes, with some advantages (particularly in angular momentum conservation), at the cost of enhanced noise. The new methods have many advantages versus SPH: proper convergence, good capturing of fluid-mixing instabilities, dramatically reduced `particle noise' and numerical viscosity, more accurate sub-sonic flow evolution, and sharp shock-capturing. Advantages versus non-moving meshes include: automatic adaptivity, dramatically reduced advection errors and numerical overmixing, velocity-independent errors, accurate coupling to gravity, good angular momentum conservation and elimination of `grid alignment' effects. We can, for example, follow hundreds of orbits of gaseous discs, while AMR and SPH methods break down in a few orbits. However, fixed meshes minimize `grid noise'. These differences are important for a range of astrophysical problems.

  15. Consistent and Accurate Finite Volume Methods for Coupled Flow and Geomechanics

    NASA Astrophysics Data System (ADS)

    Nordbotten, J. M.

    2014-12-01

    We introduce a new class of cell-centered finite volume methods for elasticity and poro-elasticity. As compared to lowest-order finite element discretizations, the new discretization has no additional degrees of freedom, and yet gives more accurate stress and flow fields. This finite volume discretization furthermore has the advantage that the mechanical discretization is fully compatible (in terms of grid and variables) with the standard cell-centered finite volume discretizations that are prevailing in commercial simulation of multi-phase flows in porous media. Theoretical analysis proves the convergence of the method. We give results showing that so-called numerical locking is avoided for a large class of structured and unstructured grids. The results are valid in both two and three spatial dimensions. The talk concludes with applications to problems with coupled multi-phase flow, transport and deformation, together with fractured porous media.

  16. Hyperbolic conservation laws and numerical methods

    NASA Technical Reports Server (NTRS)

    Leveque, Randall J.

    1990-01-01

    The mathematical structure of hyperbolic systems and the scalar equation case of conservation laws are discussed. Linear, nonlinear systems and the Riemann problem for the Euler equations are also studied. The numerical methods for conservation laws are presented in a nonstandard manner which leads to large time steps generalizations and computations on irregular grids. The solution of conservation laws with stiff source terms is examined.
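
    As a minimal concrete instance of the finite volume viewpoint developed in these notes, the sketch below advances the scalar advection equation u_t + a u_x = 0 with the first-order upwind flux on a periodic grid; the large-time-step and irregular-grid generalizations mentioned above build on this same flux-differencing form. Function and variable names are illustrative.

    ```python
    import numpy as np

    def upwind_advection(u0, a, dx, dt, n_steps):
        """First-order finite volume (upwind) scheme for u_t + a u_x = 0 on a
        periodic domain.  Stable under the CFL condition |a| dt / dx <= 1."""
        u = np.asarray(u0, dtype=float).copy()
        nu = a * dt / dx                        # Courant number
        for _ in range(n_steps):
            if a >= 0.0:
                u -= nu * (u - np.roll(u, 1))   # flux difference with left neighbour
            else:
                u -= nu * (np.roll(u, -1) - u)  # flux difference with right neighbour
        return u

    # Example: advect a square pulse once around a periodic unit domain
    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u0 = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)
    u = upwind_advection(u0, a=1.0, dx=x[1] - x[0], dt=0.4 * (x[1] - x[0]), n_steps=500)
    ```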

  17. Numerical Methods of Computational Electromagnetics for Complex Inhomogeneous Systems

    SciTech Connect

    Cai, Wei

    2014-05-15

    Understanding electromagnetic phenomena is key in many scientific investigations and engineering designs such as solar cell design, studying biological ion channels for diseases, and creating clean fusion energy, among other things. The objectives of the project are to develop high order numerical methods to simulate evanescent electromagnetic waves occurring in plasmon solar cells and biological ion channels, where local field enhancement within random media in the former and long range electrostatic interactions in the latter are major challenges for accurate and efficient numerical computations. We have accomplished these objectives by developing high order numerical methods for solving Maxwell equations, such as high order finite element bases for discontinuous Galerkin methods, a well-conditioned Nedelec edge element method, divergence-free finite element bases for MHD, and fast integral equation methods for layered media. These methods can be used to model the complex local field enhancement in plasmon solar cells. On the other hand, to treat the long range electrostatic interaction in ion channels, we have developed an image-charge-based method for a hybrid model combining atomistic electrostatics and continuum Poisson-Boltzmann electrostatics. Such a hybrid model will speed up the molecular dynamics simulation of transport in biological ion channels.

  18. Method accurately measures mean particle diameters of monodisperse polystyrene latexes

    NASA Technical Reports Server (NTRS)

    Kubitschek, H. E.

    1967-01-01

    Photomicrographic method determines mean particle diameters of monodisperse polystyrene latexes. Many diameters are measured simultaneously by measuring row lengths of particles in a triangular array at a glass-oil interface. The method provides size standards for electronic particle counters and prevents distortions, softening, and flattening.

  19. Accurate calculation of Coulomb sums: Efficacy of Pade-like methods

    SciTech Connect

    Sarkar, B. ); Bhattacharyya, K. )

    1993-09-01

    The adequacy of numerical sequence accelerative transforms in providing accurate estimates of Coulomb sums is considered, referring particularly to distorted lattices. Performance of diagonal Pade approximants (DPA) in this context is critically assessed. Failure in the case of lattice vacancies is also demonstrated. The method of multiple-point Pade approximants (MPA) has been introduced for slowly convergent sequences and is shown to work well for both regular and distorted lattices, the latter being due either to impurities or vacancies. Viability of the two methods is also compared. In divergent situations with distortions owing to vacancies, a strategy of obtaining reliable results by separate applications of both DPA and MPA at appropriate places is also sketched. Representative calculations involve two basic cubic-lattice sums, one slowly convergent and the other divergent, from which very good quality estimates of Madelung constants for a number of common lattices follow.
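
    Diagonal Padé approximants of a sequence of partial sums can be evaluated without constructing the rational functions explicitly by using Wynn's epsilon algorithm. The sketch below is that generic accelerator applied to a slowly convergent alternating series; it is not the MPA variant introduced in the paper, and the function name is an assumption.

    ```python
    import numpy as np

    def wynn_epsilon(partial_sums):
        """Wynn's epsilon algorithm; even columns of the epsilon table are
        Pade-type accelerated estimates of the limit of the partial sums."""
        s = np.asarray(partial_sums, dtype=float)
        col_prev = np.zeros(len(s) + 1)   # epsilon_{-1}^{(n)} = 0
        col_curr = s.copy()               # epsilon_{0}^{(n)}  = S_n
        best, k = col_curr[-1], 0
        while len(col_curr) > 1:
            col_next = col_prev[1:len(col_curr)] + 1.0 / np.diff(col_curr)
            col_prev, col_curr, k = col_curr, col_next, k + 1
            if k % 2 == 0:                # even columns are the usable estimates
                best = col_curr[-1]
        return best

    # Example: ln 2 from the slowly convergent series 1 - 1/2 + 1/3 - 1/4 + ...
    terms = np.array([(-1.0)**n / (n + 1.0) for n in range(12)])
    print(wynn_epsilon(np.cumsum(terms)), np.log(2.0))
    ```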

  20. A Fully Implicit Time Accurate Method for Hypersonic Combustion: Application to Shock-induced Combustion Instability

    NASA Technical Reports Server (NTRS)

    Yungster, Shaye; Radhakrishnan, Krishnan

    1994-01-01

    A new fully implicit, time accurate algorithm suitable for chemically reacting, viscous flows in the transonic-to-hypersonic regime is described. The method is based on a class of Total Variation Diminishing (TVD) schemes and uses successive Gauss-Siedel relaxation sweeps. The inversion of large matrices is avoided by partitioning the system into reacting and nonreacting parts, but still maintaining a fully coupled interaction. As a result, the matrices that have to be inverted are of the same size as those obtained with the commonly used point implicit methods. In this paper we illustrate the applicability of the new algorithm to hypervelocity unsteady combustion applications. We present a series of numerical simulations of the periodic combustion instabilities observed in ballistic-range experiments of blunt projectiles flying at subdetonative speeds through hydrogen-air mixtures. The computed frequencies of oscillation are in excellent agreement with experimental data.

  1. Construction of higher order accurate vortex and particle methods

    NASA Technical Reports Server (NTRS)

    Nicolaides, R. A.

    1986-01-01

    The standard point vortex method has recently been shown to be of high order of accuracy for problems on the whole plane, when using a uniform initial subdivision for assigning the vorticity to the points. If obstacles are present in the flow, this high order deteriorates to first or second order. New vortex methods are introduced which are of arbitrary accuracy (under regularity assumptions) regardless of the presence of bodies and the uniformity of the initial subdivision.

  2. RELAP-7 Numerical Stabilization: Entropy Viscosity Method

    SciTech Connect

    R. A. Berry; M. O. Delchini; J. Ragusa

    2014-06-01

    The RELAP-7 code is the next generation nuclear reactor system safety analysis code being developed at the Idaho National Laboratory (INL). The code is based on the INL's modern scientific software development framework, MOOSE (Multi-Physics Object Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5's capability and extends the analysis capability to all reactor system simulation scenarios. RELAP-7 utilizes a single-phase model and a novel seven-equation two-phase flow model, as described in the RELAP-7 Theory Manual (INL/EXT-14-31366). The basic equation systems are hyperbolic, and therefore generally require some type of stabilization (or artificial viscosity) to capture nonlinear discontinuities and to suppress advection-caused oscillations. This report documents one of the available options for this stabilization in RELAP-7 -- a new approach known as the entropy viscosity method. Because the code is an ongoing development effort in which the physical submodels, numerics, and coding are evolving, the specific details of the entropy viscosity stabilization method must evolve with them. The fundamentals of the method in their current state are presented here.
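
    For readers unfamiliar with the approach, the entropy viscosity idea makes the artificial viscosity proportional to the local residual of an entropy equation and caps it with a first-order viscosity. The sketch below applies that recipe to the 1D Burgers equation with entropy E = u²/2; the constants c_e and c_max, the normalisation, and the function name are illustrative assumptions and do not reflect RELAP-7's implementation.

    ```python
    import numpy as np

    def entropy_viscosity(u, u_old, dx, dt, c_e=1.0, c_max=0.5):
        """Cell-wise artificial viscosity for 1D Burgers' equation (periodic grid),
        following the entropy-residual idea: nu_e ~ h^2 |R_E| / |E - mean(E)|,
        capped by the first-order viscosity c_max * h * |u|."""
        E, E_old = 0.5 * u**2, 0.5 * u_old**2
        F = u**3 / 3.0                                   # entropy flux for E = u^2/2
        dEdt = (E - E_old) / dt
        dFdx = (np.roll(F, -1) - np.roll(F, 1)) / (2.0 * dx)
        residual = np.abs(dEdt + dFdx)
        norm = np.max(np.abs(E - E.mean())) + 1e-14      # normalisation of the residual
        nu_e = c_e * dx**2 * residual / norm
        nu_max = c_max * dx * np.abs(u)
        return np.minimum(nu_e, nu_max)
    ```

    The returned viscosity is large only near developing discontinuities, where the entropy residual is O(1), and falls back to the mesh-dependent small value c_e h² elsewhere, which is what lets the stabilization vanish in smooth regions.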

  3. Aeroacoustic Flow Phenomena Accurately Captured by New Computational Fluid Dynamics Method

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.

    2002-01-01

    One of the challenges in the computational fluid dynamics area is the accurate calculation of aeroacoustic phenomena, especially in the presence of shock waves. One such phenomenon is "transonic resonance," where an unsteady shock wave at the throat of a convergent-divergent nozzle results in the emission of acoustic tones. The space-time Conservation-Element and Solution-Element (CE/SE) method developed at the NASA Glenn Research Center can faithfully capture the shock waves, their unsteady motion, and the generated acoustic tones. The CE/SE method is a revolutionary new approach to the numerical modeling of physical phenomena where features with steep gradients (e.g., shock waves, phase transition, etc.) must coexist with those having weaker variations. The CE/SE method does not require the complex interpolation procedures (that allow for the possibility of a shock between grid cells) used by many other methods to transfer information between grid cells. These interpolation procedures can add too much numerical dissipation to the solution process. Thus, while shocks are resolved, weaker waves, such as acoustic waves, are washed out.

  4. A New Method for Accurate Treatment of Flow Equations in Cylindrical Coordinates Using Series Expansions

    NASA Technical Reports Server (NTRS)

    Constantinescu, G.S.; Lele, S. K.

    2000-01-01

    The motivation of this work is the ongoing effort at the Center for Turbulence Research (CTR) to use large eddy simulation (LES) techniques to calculate the noise radiated by jet engines. The focus on engine exhaust noise reduction is motivated by the fact that a significant reduction has been achieved over the last decade on the other main sources of acoustic emissions of jet engines, such as the fan and turbomachinery noise, which gives increased priority to jet noise. To be able to propose methods to reduce the jet noise based on results of numerical simulations, one first has to be able to accurately predict the spatio-temporal distribution of the noise sources in the jet. Though a great deal of understanding of the fundamental turbulence mechanisms in high-speed jets was obtained from direct numerical simulations (DNS) at low Reynolds numbers, LES seems to be the only realistic available tool to obtain the necessary near-field information that is required to estimate the acoustic radiation of turbulent compressible engine exhaust jets. The quality of jet-noise predictions is determined by the accuracy of the numerical method, which has to capture the wide range of pressure fluctuations associated with the turbulence in the jet and with the resulting radiated noise, and by the boundary condition treatment and the quality of the mesh. Higher Reynolds numbers and coarser grids in turn place a higher burden on the robustness and accuracy of the numerical method used in this kind of jet LES simulation. As these calculations are often done in cylindrical coordinates, one of the most important requirements for the numerical method is to provide a flow solution that is not contaminated by numerical artifacts. The coordinate singularity is known to be a source of such artifacts. In the present work we use 6th order Pade schemes in the non-periodic directions to discretize the full compressible flow equations. It turns out that the quality of jet-noise predictions

  5. Accurate Adaptive Level Set Method and Sharpening Technique for Three Dimensional Deforming Interfaces

    NASA Technical Reports Server (NTRS)

    Kim, Hyoungin; Liou, Meng-Sing

    2011-01-01

    In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth order WENO scheme or a second order central differencing scheme, depending on the availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth order WENO scheme. This selective usage of the fifth order WENO and second order central differencing schemes is confirmed to give more accurate results compared to those in the literature for standard test problems. In order to further improve accuracy, especially near thin filaments, we suggest an artificial sharpening method, which has a form similar to the conventional reinitialization method but utilizes the sign of the curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems.

  6. Joint iris boundary detection and fit: a real-time method for accurate pupil tracking.

    PubMed

    Barbosa, Marconi; James, Andrew C

    2014-08-01

    A range of applications in visual science rely on accurate tracking of the human pupil's movement and contraction in response to light. While the literature for independent contour detection and fitting of the iris-pupil boundary is vast, a joint approach, in which it is assumed that the pupil has a given geometric shape, has been largely overlooked. We present here a global method for simultaneously finding and fitting an elliptic or circular contour against a dark interior, which produces consistently accurate results even under non-ideal recording conditions, such as reflections near and over the boundary, droopy eyelids, or the sudden formation of tears. The specific form of the proposed optimization problem allows us to write down closed analytic formulae for the gradient and the Hessian of the objective function. Moreover, both the objective function and its derivatives can be cast into vectorized form, making the proposed algorithm significantly faster than its closest relative in the literature. We compare methods in multiple ways, both analytically and numerically, using real iris images as well as idealizations of the iris for which the ground truth boundary is precisely known. The method proposed here is illustrated under challenging recording conditions and is shown to be robust. PMID:25136477
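
    The fitting half of such a pipeline can be illustrated with a far simpler building block: an algebraic least-squares circle fit to candidate boundary points. The sketch below (a Kåsa-type fit) shows only this step; it is not the joint detection-and-fit objective with analytic gradient and Hessian that the paper proposes, and the names used are illustrative.

    ```python
    import numpy as np

    def fit_circle(points):
        """Algebraic (Kasa) least-squares circle fit to 2D points.
        Solves x^2 + y^2 = 2*a*x + 2*b*y + c for centre (a, b) and radius r."""
        pts = np.asarray(points, dtype=float)
        x, y = pts[:, 0], pts[:, 1]
        A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
        rhs = x**2 + y**2
        (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        return (a, b), np.sqrt(c + a**2 + b**2)

    # Example: noisy points on a circle of radius 3 centred at (1, 2)
    rng = np.random.default_rng(0)
    t = rng.uniform(0.0, 2.0 * np.pi, 200)
    pts = np.column_stack([1.0 + 3.0 * np.cos(t), 2.0 + 3.0 * np.sin(t)])
    pts += 0.05 * rng.standard_normal(pts.shape)
    print(fit_circle(pts))
    ```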

  7. Numerical methods for engine-airframe integration

    SciTech Connect

    Murthy, S.N.B.; Paynter, G.C.

    1986-01-01

    Various papers on numerical methods for engine-airframe integration are presented. The individual topics considered include: the scientific computing environment for the 1980s, an overview of the prediction of complex turbulent flows, numerical solutions of the compressible Navier-Stokes equations, elements of computational engine/airframe integration, computational requirements for efficient engine installation, application of CAE and CFD techniques to complete tactical missile design, CFD applications to engine/airframe integration, and application of second-generation low-order panel methods to powerplant installation studies. Also addressed are: three-dimensional flow analysis of turboprop inlet and nacelle configurations, application of computational methods to the design of large turbofan engine nacelles, comparison of full potential and Euler solution algorithms for aeropropulsive flow field computations, subsonic/transonic and supersonic nozzle flows and nozzle integration, subsonic/transonic prediction capabilities for nozzle/afterbody configurations, three-dimensional viscous design methodology of supersonic inlet systems for advanced technology aircraft, and a user's technology assessment.

  8. Numerical analysis method for linear induction machines.

    NASA Technical Reports Server (NTRS)

    Elliott, D. G.

    1972-01-01

    A numerical analysis method has been developed for linear induction machines such as liquid metal MHD pumps and generators and linear motors. Arbitrary phase currents or voltages can be specified and the moving conductor can have arbitrary velocity and conductivity variations from point to point. The moving conductor is divided into a mesh and coefficients are calculated for the voltage induced at each mesh point by unit current at every other mesh point. Combining the coefficients with the mesh resistances yields a set of simultaneous equations which are solved for the unknown currents.
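
    The final step described above, combining the influence coefficients with the mesh resistances and solving simultaneous equations for the unknown currents, amounts to one dense linear solve. The sketch below shows that step with made-up placeholder matrices; it is not a model of an actual induction machine, and the function name and numbers are assumptions.

    ```python
    import numpy as np

    def solve_mesh_currents(coupling, resistances, applied_voltage):
        """Solve for unknown mesh currents given a coupling (influence) matrix whose
        entry [i, j] is the voltage induced at mesh point i by unit current at mesh
        point j, plus the local mesh resistances: (diag(R) + M) I = V."""
        A = np.diag(resistances) + np.asarray(coupling, dtype=float)
        return np.linalg.solve(A, applied_voltage)

    # Tiny 3-point example with illustrative numbers only
    M = np.array([[0.0, 0.2, 0.1],
                  [0.2, 0.0, 0.2],
                  [0.1, 0.2, 0.0]])
    R = np.array([1.0, 1.0, 1.0])
    V = np.array([1.0, 0.0, 0.0])
    print(solve_mesh_currents(M, R, V))
    ```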

  9. Numerical methods for finding stationary gravitational solutions

    NASA Astrophysics Data System (ADS)

    Dias, Óscar J. C.; Santos, Jorge E.; Way, Benson

    2016-07-01

    The wide applications of higher dimensional gravity and gauge/gravity duality have fuelled the search for new stationary solutions of the Einstein equation (possibly coupled to matter). In this topical review, we explain the mathematical foundations and give a practical guide for the numerical solution of gravitational boundary value problems. We present these methods by way of example: resolving asymptotically flat black rings, singly spinning lumpy black holes in anti-de Sitter (AdS) space, and the Gregory–Laflamme zero modes of small rotating black holes in AdS_5 × S^5. We also include several tools and tricks that have been useful throughout the literature.

  10. Numerical Methodology for Coupled Time-Accurate Simulations of Primary and Secondary Flowpaths in Gas Turbines

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Athavale, M. M.; Hendricks, R. C.; Steinetz, B. M.

    2006-01-01

    Detailed information of the flow-fields in the secondary flowpaths and their interaction with the primary flows in gas turbine engines is necessary for successful designs with optimized secondary flow streams. Present work is focused on the development of a simulation methodology for coupled time-accurate solutions of the two flowpaths. The secondary flowstream is treated using SCISEAL, an unstructured adaptive Cartesian grid code developed for secondary flows and seals, while the mainpath flow is solved using TURBO, a density based code with capability of resolving rotor-stator interaction in multi-stage machines. An interface is being tested that links the two codes at the rim seal to allow data exchange between the two codes for parallel, coupled execution. A description of the coupling methodology and the current status of the interface development is presented. Representative steady-state solutions of the secondary flow in the UTRC HP Rig disc cavity are also presented.

  11. Differential-equation-based representation of truncation errors for accurate numerical simulation

    NASA Astrophysics Data System (ADS)

    MacKinnon, Robert J.; Johnson, Richard W.

    1991-09-01

    High-order compact finite difference schemes for 2D convection-diffusion-type differential equations with constant and variable convection coefficients are derived. The governing equations are employed to represent leading truncation terms, including cross-derivatives, making the overall O(h^4) schemes conform to a 3 x 3 stencil. It is shown that the two-dimensional constant coefficient scheme collapses to the optimal scheme for the one-dimensional case, wherein the finite difference equation yields nodally exact results. The two-dimensional schemes are tested against standard model problems, including a Navier-Stokes application. Results show that the two schemes are generally more accurate, on comparable grids, than O(h^2) centered differencing and commonly used O(h) and O(h^3) upwinding schemes.

  12. How Accurately Do Spectral Methods Estimate Effective Elastic Thickness?

    NASA Astrophysics Data System (ADS)

    Perez-Gussinye, M.; Lowry, A. R.; Watts, A. B.; Velicogna, I.

    2002-12-01

    The effective elastic thickness, Te, is an important parameter that has the potential to provide information on the long-term thermal and mechanical properties of the lithosphere. Previous studies have estimated Te using both forward and inverse (spectral) methods. While there is generally good agreement between the results obtained using these methods, spectral methods are limited because they depend on the spectral estimator and the window size chosen for analysis. In order to address this problem, we have used a multitaper technique which yields optimal estimates of the bias and variance of the Bouguer coherence function relating topography and gravity anomaly data. The technique has been tested using realistic synthetic topography and gravity. Synthetic data were generated assuming surface and sub-surface (buried) loading of an elastic plate with fractal statistics consistent with real data sets. The cases of uniform and spatially varying Te are examined. The topography and gravity anomaly data consist of 2000x2000 km grids sampled at an 8 km interval. The bias in the Te estimate is assessed from the difference between the true Te value and the mean from analyzing 100 overlapping windows within the 2000x2000 km data grids. For the case in which Te is uniform, the bias and variance decrease with window size and increase with increasing true Te value. In the case of a spatially varying Te, however, there is a trade-off between spatial resolution and variance. With increasing window size the variance of the Te estimate decreases, but the spatial changes in Te are smeared out. We find that for a Te distribution consisting of a strong central circular region of Te=50 km (radius 600 km) and progressively smaller Te towards its edges, the 800x800 and 1000x1000 km windows gave the best compromise between spatial resolution and variance. Our studies demonstrate that assumed stationarity of the relationship between gravity and topography data yields good results even in

  13. An adaptive, formally second order accurate version of the immersed boundary method

    NASA Astrophysics Data System (ADS)

    Griffith, Boyce E.; Hornung, Richard D.; McQueen, David M.; Peskin, Charles S.

    2007-04-01

    Like many problems in biofluid mechanics, cardiac mechanics can be modeled as the dynamic interaction of a viscous incompressible fluid (the blood) and a (visco-)elastic structure (the muscular walls and the valves of the heart). The immersed boundary method is a mathematical formulation and numerical approach to such problems that was originally introduced to study blood flow through heart valves, and extensions of this work have yielded a three-dimensional model of the heart and great vessels. In the present work, we introduce a new adaptive version of the immersed boundary method. This adaptive scheme employs the same hierarchical structured grid approach (but a different numerical scheme) as the two-dimensional adaptive immersed boundary method of Roma et al. [A multilevel self adaptive version of the immersed boundary method, Ph.D. Thesis, Courant Institute of Mathematical Sciences, New York University, 1996; An adaptive version of the immersed boundary method, J. Comput. Phys. 153 (2) (1999) 509-534] and is based on a formally second order accurate (i.e., second order accurate for problems with sufficiently smooth solutions) version of the immersed boundary method that we have recently described [B.E. Griffith, C.S. Peskin, On the order of accuracy of the immersed boundary method: higher order convergence rates for sufficiently smooth problems, J. Comput. Phys. 208 (1) (2005) 75-105]. Actual second order convergence rates are obtained for both the uniform and adaptive methods by considering the interaction of a viscous incompressible flow and an anisotropic incompressible viscoelastic shell. We also present initial results from the application of this methodology to the three-dimensional simulation of blood flow in the heart and great vessels. The results obtained by the adaptive method show good qualitative agreement with simulation results obtained by earlier non-adaptive versions of the method, but the flow in the vicinity of the model heart valves

  14. Application of numerical methods to elasticity imaging.

    PubMed

    Castaneda, Benjamin; Ormachea, Juvenal; Rodríguez, Paul; Parker, Kevin J

    2013-03-01

    Elasticity imaging can be understood as the intersection of the study of biomechanical properties, imaging sciences, and physics. It was mainly motivated by the fact that pathological tissue presents an increased stiffness when compared to surrounding normal tissue. In the last two decades, research on elasticity imaging has been an international and interdisciplinary pursuit aiming to map the viscoelastic properties of tissue in order to provide clinically useful information. As a result, several modalities of elasticity imaging, mostly based on ultrasound but also on magnetic resonance imaging and optical coherence tomography, have been proposed and applied to a number of clinical applications: cancer diagnosis (prostate, breast, liver), hepatic cirrhosis, renal disease, thyroiditis, arterial plaque evaluation, wall stiffness in arteries, evaluation of thrombosis in veins, and many others. In this context, numerical methods are applied to solve forward and inverse problems implicit in the algorithms in order to estimate viscoelastic linear and nonlinear parameters, especially for quantitative elasticity imaging modalities. In this work, an introduction to elasticity imaging modalities is presented. The working principle of qualitative modalities (sonoelasticity, strain elastography, acoustic radiation force impulse) and quantitative modalities (Crawling Waves Sonoelastography, Spatially Modulated Ultrasound Radiation Force (SMURF), Supersonic Imaging) will be explained. Subsequently, the areas in which numerical methods can be applied to elasticity imaging are highlighted and discussed. Finally, we present a detailed example of applying total variation and AM-FM techniques to the estimation of elasticity. PMID:24010245

  15. Mathematica with a Numerical Methods Course

    NASA Astrophysics Data System (ADS)

    Varley, Rodney

    2003-04-01

    An interdisciplinary "Numerical Methods" course has been shared between physics, mathematics and computer science since 1992 at Hunter C. Recently, the lectures and workshops for this course have been formalized and placed on the internet at http://www.ph.hunter.cuny.edu (follow the links "Course Listings and Websites" >> "PHYS385 (Numerical Methods)"). Mathematica notebooks for the lectures are available for automatic download (by "double clicking" the lecture icon) for student use in the classroom or at home. AOL (or Netscape/Explorer) can be used provided Mathematica (or the "free" MathReader) has been made a "helper application". Using Mathematica has the virtue that mathematical equations (no LaTeX required) can easily be included with the text, and Mathematica's graphing is easy to use. Computational cells can be included within the notebook, and students may easily modify the calculation to see the result of "what if..." questions. Homework is sent as Mathematica notebooks to the instructor via the internet and the corrected workshops are returned in the same manner. Most exam questions require computational solutions.

  16. NMR method for accurate quantification of polysorbate 80 copolymer composition.

    PubMed

    Zhang, Qi; Wang, Aifa; Meng, Yang; Ning, Tingting; Yang, Huaxin; Ding, Lixia; Xiao, Xinyue; Li, Xiaodong

    2015-10-01

    (13)C NMR spectroscopic integration employing short relaxation delays and a 30° pulse width was evaluated as a quantitative tool for analyzing the components of polysorbate 80. (13)C NMR analysis revealed that commercial polysorbate 80 formulations are a complex oligomeric mixture of sorbitan polyethoxylate esters and other intermediates, such as isosorbide polyethoxylate esters and poly(ethylene glycol) (PEG) esters. This novel approach facilitates the quantification of the component ratios. In this study, the ratios of the three major oligomers in polysorbate 80 were measured and the PEG series was found to be the major component of commercial polysorbate 80. The degree of polymerization of -CH2CH2O- groups and the ratio of free to bonded -CH2CH2O- end groups, which correlate with the hydrophilic/hydrophobic nature of the polymer, were analyzed, and were suggested to be key factors for assessing the likelihood of adverse biological reactions to polysorbate 80. The (13)C NMR data suggest that the feed ratio of raw materials and reaction conditions in the production of polysorbate 80 are not well controlled. Our results demonstrate that (13)C NMR is a universal, powerful tool for polysorbate analysis. Such analysis is crucial for the synthesis of a high-quality product, and is difficult to obtain by other methods. PMID:26356097

  17. Temperature dependent effective potential method for accurate free energy calculations of solids

    NASA Astrophysics Data System (ADS)

    Hellman, Olle; Steneteg, Peter; Abrikosov, I. A.; Simak, S. I.

    2013-03-01

    We have developed a thorough and accurate method of determining anharmonic free energies, the temperature dependent effective potential technique (TDEP). It is based on ab initio molecular dynamics followed by a mapping onto a model Hamiltonian that describes the lattice dynamics. The formalism and the numerical aspects of the technique are described in detail. A number of practical examples are given, and results are presented which confirm the usefulness of TDEP within ab initio and classical molecular dynamics frameworks. In particular, we examine from first principles the behavior of force constants upon the dynamical stabilization of the body centered phase of Zr, and show that they become more localized. We also calculate the phase diagram for 4He modeled with the Aziz potential and obtain results in favorable agreement with both experiment and established techniques.
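
    The mapping step at the heart of a TDEP-like procedure, fitting effective force constants so that the model forces best match the forces sampled in molecular dynamics, reduces to a linear least-squares problem. The sketch below shows only this bare fit on synthetic data; symmetry constraints, supercell bookkeeping, and the free-energy evaluation of the actual technique are omitted, and all names are illustrative.

    ```python
    import numpy as np

    def fit_force_constants(displacements, forces):
        """Least-squares fit of an effective force-constant matrix Phi with F ~ -Phi u,
        from MD snapshots of displacements u and forces F.

        displacements, forces : (n_snapshots, n_dof) arrays.
        Returns Phi of shape (n_dof, n_dof); symmetry constraints are omitted."""
        U = np.asarray(displacements, dtype=float)
        F = np.asarray(forces, dtype=float)
        Phi_T, *_ = np.linalg.lstsq(U, -F, rcond=None)   # solves U Phi^T = -F
        return Phi_T.T

    # Synthetic check: recover a known 2x2 force-constant matrix from noisy data
    rng = np.random.default_rng(1)
    Phi_true = np.array([[2.0, -0.5], [-0.5, 1.0]])
    U = 0.1 * rng.standard_normal((500, 2))
    F = -U @ Phi_true.T + 1e-3 * rng.standard_normal((500, 2))
    print(fit_force_constants(U, F))
    ```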

  18. Numerically accurate linear response-properties in the configuration-interaction singles (CIS) approximation.

    PubMed

    Kottmann, Jakob S; Höfener, Sebastian; Bischoff, Florian A

    2015-12-21

    In the present work, we report an efficient implementation of configuration interaction singles (CIS) excitation energies and oscillator strengths using the multi-resolution analysis (MRA) framework to address the basis-set convergence of excited state computations. In MRA (ground-state) orbitals, excited states are constructed adaptively guaranteeing an overall precision. Thus not only valence but also, in particular, low-lying Rydberg states can be computed with consistent quality at the basis set limit a priori, or without special treatments, which is demonstrated using a small test set of organic molecules, basis sets, and states. We find that the new implementation of MRA-CIS excitation energy calculations is competitive with conventional LCAO calculations when the basis-set limit of medium-sized molecules is sought, which requires large, diffuse basis sets. This becomes particularly important if accurate calculations of molecular electronic absorption spectra with respect to basis-set incompleteness are required, in which both valence as well as Rydberg excitations can contribute to the molecule's UV/VIS fingerprint. PMID:25913482

  19. Time-Accurate, Unstructured-Mesh Navier-Stokes Computations with the Space-Time CESE Method

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan

    2006-01-01

    Application of the newly emerged space-time conservation element solution element (CESE) method to the compressible Navier-Stokes equations is studied. In contrast to Euler equation solvers, several issues such as boundary conditions, numerical dissipation, and grid stiffness warrant systematic investigation and validation. Non-reflecting boundary conditions applied at the truncated boundary are also investigated from the standpoint of acoustic wave propagation. Validations of the numerical solutions are performed by comparing with exact solutions for steady-state as well as time-accurate viscous flow problems. The test cases cover a broad speed regime for problems ranging from acoustic wave propagation to 3D hypersonic configurations. Model problems pertinent to hypersonic configurations demonstrate the effectiveness of the CESE method in treating flows with shocks, unsteady waves, and separations. Good agreement with exact solutions suggests that the space-time CESE method provides a viable alternative for time-accurate Navier-Stokes calculations of a broad range of problems.

  20. The use of experimental bending tests to more accurate numerical description of TBC damage process

    NASA Astrophysics Data System (ADS)

    Sadowski, T.; Golewski, P.

    2016-04-01

    Thermal barrier coatings (TBCs) have been extensively used in aircraft engines to protect critical engine parts such as blades and combustion chambers, which are exposed to high temperatures and a corrosive environment. The blades of turbine engines are additionally exposed to high mechanical loads. These loads are created by the high rotational speed of the rotor (30 000 rot/min), causing tensile and bending stresses. Therefore, experimental testing of coated samples is necessary in order to determine the strength properties of TBCs. Beam samples with dimensions 50×10×2 mm were used in these studies. The TBC system consisted of a 150 μm thick bond coat (NiCoCrAlY) and a 300 μm thick top coat (YSZ) made by the APS (air plasma spray) process. Samples were tested in three-point bending under various loads. After the bending tests, the samples were subjected to microscopic observation to determine the number of cracks and their depth. The above-mentioned results were used to build a numerical model and calibrate the material data in the Abaqus program. A brittle cracking damage model was applied for the TBC layer, which allows elements to be removed once a criterion is reached. Surface-based cohesive behavior was used to model the delamination which may occur at the boundary between the bond coat and the top coat.

  1. A Method for Accurate in silico modeling of Ultrasound Transducer Arrays

    PubMed Central

    Guenther, Drake A.; Walker, William F.

    2009-01-01

    This paper presents a new approach to improve the in silico modeling of ultrasound transducer arrays. While current simulation tools accurately predict the theoretical element spatio-temporal pressure response, transducers do not always behave as theorized. In practice, using the probe's physical dimensions and published specifications in silico often results in unsatisfactory agreement between simulation and experiment. We describe a general optimization procedure used to maximize the correlation between the observed and simulated spatio-temporal response of a pulsed single element in a commercial ultrasound probe. A linear systems approach is employed to model element angular sensitivity, lens effects, and diffraction phenomena. A numerical deconvolution method is described to characterize the intrinsic electro-mechanical impulse response of the element. Once the response of the element and the optimal element characteristics are known, prediction of the pressure response for arbitrary apertures and excitation signals is performed through direct convolution using available tools. We achieve a correlation of 0.846 between the experimental emitted waveform and the simulated waveform when using the probe's physical specifications in silico. A far superior correlation of 0.988 is achieved when using the optimized in silico model. Electronic noise appears to be the main effect preventing the realization of higher correlation coefficients. More accurate in silico modeling will improve the evaluation and design of ultrasound transducers as well as aid in the development of sophisticated beamforming strategies. PMID:19041997

  2. How Accurate are the Extremely Small P-values Used in Genomic Research: An Evaluation of Numerical Libraries

    PubMed Central

    Bangalore, Sai Santosh; Wang, Jelai; Allison, David B.

    2009-01-01

    In the fields of genomics and high dimensional biology (HDB), massive multiple testing prompts the use of extremely small significance levels. Because tail areas of statistical distributions are needed for hypothesis testing, the accuracy of these areas is important for confidently making scientific judgments. Previous work on accuracy was primarily focused on evaluating professionally written statistical software, like SAS, on the Statistical Reference Datasets (StRD) provided by the National Institute of Standards and Technology (NIST) and on the accuracy of tail areas in statistical distributions. The goal of this paper is to provide guidance to investigators who are developing their own custom scientific software built upon numerical libraries written by others. Specifically, we evaluate the accuracy of small tail areas from cumulative distribution functions (CDF) of the Chi-square and t-distribution by comparing several open-source, free, or commercially licensed numerical libraries in Java, C, and R to widely accepted standards of comparison like ELV and DCDFLIB. In our evaluation, the C libraries and R functions are consistently accurate up to six significant digits. Amongst the evaluated Java libraries, Colt is the most accurate. These languages and libraries are popular choices among programmers developing scientific software, so the results herein can be useful to programmers in choosing libraries for CDF accuracy. PMID:20161126
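
    The practical lesson from evaluations like this one is to query a library's dedicated tail-area (survival) function rather than computing 1 minus the CDF, which loses all significant digits in the extreme tail. A small illustration with SciPy, which is not one of the libraries evaluated in the paper:

    ```python
    from scipy import stats
    from scipy.special import erfc

    # Extreme tail area of the chi-square distribution with 1 degree of freedom
    x = 100.0
    naive = 1.0 - stats.chi2.cdf(x, df=1)   # catastrophic cancellation: returns 0.0
    direct = stats.chi2.sf(x, df=1)         # dedicated survival function keeps precision

    print(f"1 - cdf : {naive:.17e}")
    print(f"sf      : {direct:.17e}")
    # For 1 dof, sf(x) = erfc(sqrt(x/2)) gives an independent cross-check
    print(f"erfc    : {erfc((x / 2.0) ** 0.5):.17e}")
    ```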

  3. Extracting accurate strain measurements in bone mechanics: A critical review of current methods.

    PubMed

    Grassi, Lorenzo; Isaksson, Hanna

    2015-10-01

    Osteoporosis-related fractures are a societal burden that calls for more accurate fracture prediction methods. Mechanistic methods, e.g. finite element models, have been proposed as a tool to better predict bone mechanical behaviour and strength. However, there is little consensus about the optimal constitutive law to describe bone as a material. Extracting reliable and relevant strain data from experimental tests is of fundamental importance to better understand bone mechanical properties and to validate numerical models. Several techniques have been used to measure strain in experimental mechanics, with substantial differences in terms of accuracy, precision, time- and length-scale. Each technique presents upsides and downsides that must be carefully evaluated when designing the experiment. Moreover, additional complexities are often encountered when applying such strain measurement techniques to bone, due to its complex composite structure. This review of the literature examines the four most commonly adopted methods for strain measurement (strain gauges, fibre Bragg grating sensors, digital image correlation, and digital volume correlation), with a focus on studies with bone as a substrate material, at the organ and tissue level. For each of them the working principles, a summary of the main applications to bone mechanics at the organ- and tissue-level, and a list of pros and cons are provided. PMID:26099201

  4. Conservative high-order-accurate finite-difference methods for curvilinear grids

    NASA Technical Reports Server (NTRS)

    Rai, Man M.; Chakrvarthy, Sukumar

    1993-01-01

    Two fourth-order-accurate finite-difference methods for numerically solving hyperbolic systems of conservation equations on smooth curvilinear grids are presented. The first method uses the differential form of the conservation equations; the second method uses the integral form of the conservation equations. Modifications to these schemes, which are required near boundaries to maintain overall high-order accuracy, are discussed. An analysis that demonstrates the stability of the modified schemes is also provided. Modifications to one of the schemes to make it total variation diminishing (TVD) are also discussed. Results that demonstrate the high-order accuracy of both schemes are included in the paper. In particular, a Ringleb-flow computation demonstrates the high-order accuracy and the stability of the boundary and near-boundary procedures. A second computation of supersonic flow over a cylinder demonstrates the shock-capturing capability of the TVD methodology. An important contribution of this paper is the clear demonstration that higher order accuracy leads to increased computational efficiency.

  5. Analysis and accurate numerical solutions of the integral equation derived from the linearized BGKW equation for the steady Couette flow

    NASA Astrophysics Data System (ADS)

    Jiang, Shidong; Luo, Li-Shi

    2016-07-01

    The integral equation for the flow velocity u(x; k) in the steady Couette flow derived from the linearized Bhatnagar-Gross-Krook-Welander kinetic equation is studied in detail both theoretically and numerically in a wide range of the Knudsen number k between 0.003 and 100.0. First, it is shown that the integral equation is a Fredholm equation of the second kind in which the norm of the compact integral operator is less than 1 on L^p for any 1 ≤ p ≤ ∞, and thus there exists a unique solution to the integral equation via the Neumann series. Second, it is shown that the solution is logarithmically singular at the endpoints. More precisely, if x = 0 is an endpoint, then the solution can be expanded as a double power series of the form ∑_{n=0}^∞ ∑_{m=0}^∞ c_{n,m} x^n (x ln x)^m about x = 0 on a small interval x ∈ (0, a) for some a > 0. And third, a high-order adaptive numerical algorithm is designed to compute the solution numerically to high precision. The solutions for the flow velocity u(x; k), the stress P_xy(k), and the half-channel mass flow rate Q(k) are obtained in a wide range of the Knudsen number 0.003 ≤ k ≤ 100.0; these solutions are accurate to at least twelve significant digits, and thus they can be used as benchmark solutions.
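
    Because the integral operator is a contraction, a Fredholm equation of the second kind of this type can in principle be solved by simple Neumann-series iteration on a Nyström grid. The sketch below demonstrates that generic idea with a made-up smooth kernel; it does not use the BGKW kernel and has none of the endpoint-singularity handling that the paper's high-order adaptive algorithm provides.

    ```python
    import numpy as np

    def neumann_nystrom(kernel, f, a, b, n=200, tol=1e-12, max_iter=500):
        """Solve u(x) = f(x) + int_a^b K(x, y) u(y) dy by Neumann-series iteration
        on a trapezoidal Nystrom grid; converges when the operator norm is < 1."""
        x = np.linspace(a, b, n)
        w = np.full(n, (b - a) / (n - 1))
        w[0] *= 0.5
        w[-1] *= 0.5                                  # trapezoidal quadrature weights
        K = kernel(x[:, None], x[None, :]) * w[None, :]
        fx = f(x)
        u = fx.copy()
        for _ in range(max_iter):
            u_new = fx + K @ u
            if np.max(np.abs(u_new - u)) < tol:
                return x, u_new
            u = u_new
        return x, u

    # Example with a smooth contraction kernel (illustrative only)
    x, u = neumann_nystrom(lambda s, t: 0.3 * np.exp(-(s - t)**2),
                           lambda s: np.ones_like(s), 0.0, 1.0)
    print(u[:5])
    ```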

  6. Numerical methods for analyzing electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Lee, S. W.; Lo, Y. T.; Chuang, S. L.; Lee, C. S.

    1985-01-01

    Numerical methods to analyze electromagnetic scattering are presented. The dispersions and attenuations of the normal modes in a circular waveguide coated with lossy material were completely analyzed. The radar cross section (RCS) from a circular waveguide coated with lossy material was calculated. The following is observed: (1) the interior irradiation contributes to the RCS much more than does the rim diffraction; (2) at low frequency, the RCS from the circular waveguide terminated by a perfect electric conductor (PEC) can be reduced by more than 13 dB with a coating thickness of less than 1% of the radius, using the best lossy material available, in a cylinder six radii long; (3) at high frequency, a modal separation between the highly attenuated and the lowly attenuated modes is evident if the coating material is too lossy; however, a large RCS reduction can be achieved for a small incident angle with a thin layer of coating. It is found that the waveguide coated with a lossy magnetic material can be used as a substitute for a corrugated waveguide to produce a circularly polarized radiation yield.

  7. Numerical solution of a diffusion problem by exponentially fitted finite difference methods.

    PubMed

    D'Ambrosio, Raffaele; Paternoster, Beatrice

    2014-01-01

    This paper is focused on the accurate and efficient solution of partial differential equations modelling a diffusion problem by means of exponentially fitted finite difference numerical methods. After constructing and analysing special purpose finite differences for the approximation of second order partial derivatives, we employed them in the numerical solution of a diffusion equation with mixed boundary conditions. Numerical experiments reveal that a special purpose integration, both in space and in time, is more accurate and efficient than that obtained by employing a general purpose solver. PMID:26034665
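
    To give a flavour of what "exponentially fitted" means in a finite difference setting, the sketch below implements the classical Il'in-type fitted-factor scheme for a 1D constant-coefficient convection-diffusion model problem; it is a standard textbook example of exponential fitting, not the scheme constructed in the paper, and the function name and parameters are assumptions.

    ```python
    import numpy as np

    def fitted_convection_diffusion(eps, b, f, n):
        """Il'in-type exponentially fitted scheme for
            -eps u'' + b u' = f  on (0, 1),  u(0) = u(1) = 0,
        with constant coefficients; the fitting factor sigma makes the scheme
        nodally exact for the constant-coefficient problem."""
        h = 1.0 / n
        x = np.linspace(0.0, 1.0, n + 1)
        rho = b * h / (2.0 * eps)
        sigma = rho / np.tanh(rho)                 # exponential fitting factor
        # interior stencil: -eps*sigma*(u_{i-1} - 2 u_i + u_{i+1})/h^2
        #                   + b*(u_{i+1} - u_{i-1})/(2h) = f(x_i)
        main = 2.0 * eps * sigma / h**2 * np.ones(n - 1)
        lower = (-eps * sigma / h**2 - b / (2.0 * h)) * np.ones(n - 2)
        upper = (-eps * sigma / h**2 + b / (2.0 * h)) * np.ones(n - 2)
        A = np.diag(main) + np.diag(lower, -1) + np.diag(upper, 1)
        u = np.zeros(n + 1)
        u[1:-1] = np.linalg.solve(A, f(x[1:-1]))
        return x, u

    # Example: boundary layer at x = 1 captured without oscillations on a coarse grid
    x, u = fitted_convection_diffusion(eps=1e-3, b=1.0, f=lambda s: np.ones_like(s), n=20)
    ```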

  8. Voronoi-cell finite difference method for accurate electronic structure calculation of polyatomic molecules on unstructured grids

    SciTech Connect

    Son, Sang-Kil

    2011-03-01

    We introduce a new numerical grid-based method on unstructured grids in three-dimensional real space to investigate the electronic structure of polyatomic molecules. The Voronoi-cell finite difference (VFD) method realizes a discrete Laplacian operator based on Voronoi cells and their natural neighbors, featuring high adaptivity and simplicity. To resolve the multicenter Coulomb singularity in all-electron calculations of polyatomic molecules, this method utilizes highly adaptive molecular grids which consist of spherical atomic grids. It provides accurate and efficient solutions for the Schroedinger equation and the Poisson equation with the all-electron Coulomb potentials regardless of the coordinate system and the molecular symmetry. As numerical examples, we assess the accuracy of the VFD method for the electronic structures of one-electron polyatomic systems, and apply the method to density-functional theory calculations for many-electron polyatomic molecules.

  9. An accurate and efficient method for prediction of the long-term evolution of space debris in the geosynchronous region

    NASA Astrophysics Data System (ADS)

    McNamara, Roger P.; Eagle, C. D.

    1992-08-01

    The Planetary Observer High Accuracy Orbit Prediction Program (POHOP), an existing numerical integrator, was modified with the solar and lunar formulae developed by T.C. Van Flandern and K.F. Pulkkinen to provide the accuracy required to evaluate long-term orbit characteristics of objects in the geosynchronous region. The orbit of a 1000 kg class spacecraft is numerically integrated over 50 years using both the original and the more accurate solar and lunar ephemerides. Results of this study demonstrate that, over the long term, for an object located in the geosynchronous region, the more accurate solar and lunar ephemerides produce positions that differ significantly from those obtained with the current POHOP ephemeris.

  10. Estimation method of point spread function based on Kalman filter for accurately evaluating real optical properties of photonic crystal fibers.

    PubMed

    Shen, Yan; Lou, Shuqin; Wang, Xin

    2014-03-20

    The evaluation accuracy of real optical properties of photonic crystal fibers (PCFs) is determined by the accurate extraction of air hole edges from microscope images of cross sections of practical PCFs. A novel estimation method of point spread function (PSF) based on Kalman filter is presented to rebuild the micrograph image of the PCF cross-section and thus evaluate real optical properties for practical PCFs. Through tests on both artificially degraded images and microscope images of cross sections of practical PCFs, we prove that the proposed method can achieve more accurate PSF estimation and lower PSF variance than the traditional Bayesian estimation method, and thus also reduce the defocus effect. With this method, we rebuild the microscope images of two kinds of commercial PCFs produced by Crystal Fiber and analyze the real optical properties of these PCFs. Numerical results are in accord with the product parameters. PMID:24663461

  11. Fast Numerical Methods for the Design of Layered Photonic Structures with Rough Interfaces

    NASA Technical Reports Server (NTRS)

    Komarevskiy, Nikolay; Braginsky, Leonid; Shklover, Valery; Hafner, Christian; Lawson, John

    2011-01-01

    Modified boundary conditions (MBC) and a multilayer approach (MA) are proposed as fast and efficient numerical methods for the design of 1D photonic structures with rough interfaces. These methods are applicable to structures composed of materials with an arbitrary permittivity tensor. MBC and MA are numerically validated on different types of interface roughness and permittivities of the constituent materials. The proposed methods can be combined with the 4x4 scattering matrix method as a field solver and an evolutionary strategy as an optimizer. The resulting optimization procedure is fast, accurate, numerically stable, and can be used to design structures for various applications.

  12. Teaching Thermal Hydraulics & Numerical Methods: An Introductory Control Volume Primer

    SciTech Connect

    D. S. Lucas

    2004-10-01

    A graduate level course for Thermal Hydraulics (T/H) was taught through Idaho State University in the spring of 2004. A numerical approach was taken for the content of this course since the students were employed at the Idaho National Laboratory and had been users of T/H codes. The majority of the students had expressed an interest in learning about the Courant limit, mass error, and semi-implicit and implicit numerical integration schemes in the context of a computer code. Since no introductory text was found, the author developed notes based on his own research and on courses taught for Westinghouse on the subject. The course started with a primer on control volume methods and the construction of a Homogeneous Equilibrium Model (HEM) T/H code. The primer was valuable for giving the students the basics behind such codes and their evolution to more complex codes for Thermal Hydraulics and Computational Fluid Dynamics (CFD). The course covered additional material including the Finite Element Method and non-equilibrium T/H. The control volume primer and the construction of a three-equation (mass, momentum and energy) HEM code are the subject of this paper. The Fortran version of the code covered in this paper is elementary compared to its descendants. The steam tables used are less accurate than the available commercial version, which is written in C and coupled to a Graphical User Interface (GUI). The Fortran version and input files can be downloaded at www.microfusionlab.com.

  13. Method for the numerical integration of equations of perturbed satellite motion in problems of space geodesy

    NASA Astrophysics Data System (ADS)

    Plakhov, Iu. V.; Mytsenko, A. V.; Shel'Pov, V. A.

    A numerical integration method is developed that is more accurate than Everhart's (1974) implicit single-sequence approach for integrating orbits. This method can be used to solve problems of space geodesy based on the use of highly precise laser observations.

  14. Accurate and efficient Nyström volume integral equation method for the Maxwell equations for multiple 3-D scatterers

    NASA Astrophysics Data System (ADS)

    Chen, Duan; Cai, Wei; Zinser, Brian; Cho, Min Hyung

    2016-09-01

    In this paper, we develop an accurate and efficient Nyström volume integral equation (VIE) method for the Maxwell equations for a large number of 3-D scatterers. The Cauchy principal values that arise from the VIE are computed accurately using a finite size exclusion volume together with explicit correction integrals consisting of removable singularities. Also, the hyper-singular integrals are computed using interpolated quadrature formulae with tensor-product quadrature nodes for cubes, spheres, and cylinders, which are frequently encountered in the design of meta-materials. The resulting Nyström VIE method is shown to have high accuracy with a small number of collocation points and demonstrates p-convergence for computing the electromagnetic scattering of these objects. Numerical calculations of multiple scatterers of cubic, spherical, and cylindrical shapes validate the efficiency and accuracy of the proposed method.

  15. Accurate gradient approximation for complex interface problems in 3D by an improved coupling interface method

    NASA Astrophysics Data System (ADS)

    Shu, Yu-Chen; Chern, I.-Liang; Chang, Chien C.

    2014-10-01

    Most elliptic interface solvers become complicated for complex interface problems at those “exceptional points” where there are not enough neighboring interior points for high order interpolation. Such complications increase especially in three dimensions. Usually, the solvers are thus reduced to low order accuracy. In this paper, we classify these exceptional points and propose two recipes to maintain order of accuracy there, aiming at improving the previous coupling interface method [26]. Yet the idea is also applicable to other interface solvers. The main idea is to have at least first order approximations for second order derivatives at those exceptional points. Recipe 1 is to use the finite difference approximation for the second order derivatives at a nearby interior grid point, whenever this is possible. Recipe 2 is to flip domain signatures and introduce a ghost state so that a second-order method can be applied. This ghost state is a smooth extension of the solution at the exceptional point from the other side of the interface. The original state is recovered by a post-processing using nearby states and jump conditions. The choice of recipes is determined by a classification scheme of the exceptional points. The method renders the solution and its gradient uniformly second-order accurate in the entire computed domain. Numerical examples are provided to illustrate the second order accuracy of the presently proposed method in approximating the gradients of the original states for some complex interfaces which we had previously tested in two and three dimensions, and for a real molecule (1D63), which has a double-helix shape and is composed of hundreds of atoms.

  16. Accurate gradient approximation for complex interface problems in 3D by an improved coupling interface method

    SciTech Connect

    Shu, Yu-Chen; Chern, I-Liang; Chang, Chien C.

    2014-10-15

    Most elliptic interface solvers become complicated for complex interface problems at those “exceptional points” where there are not enough neighboring interior points for high order interpolation. Such complications increase especially in three dimensions. Usually, the solvers are thus reduced to low order accuracy. In this paper, we classify these exceptional points and propose two recipes to maintain order of accuracy there, aiming at improving the previous coupling interface method [26]. Yet the idea is also applicable to other interface solvers. The main idea is to have at least first order approximations for second order derivatives at those exceptional points. Recipe 1 is to use the finite difference approximation for the second order derivatives at a nearby interior grid point, whenever this is possible. Recipe 2 is to flip domain signatures and introduce a ghost state so that a second-order method can be applied. This ghost state is a smooth extension of the solution at the exceptional point from the other side of the interface. The original state is recovered by a post-processing using nearby states and jump conditions. The choice of recipes is determined by a classification scheme of the exceptional points. The method renders the solution and its gradient uniformly second-order accurate in the entire computed domain. Numerical examples are provided to illustrate the second order accuracy of the presently proposed method in approximating the gradients of the original states for some complex interfaces which we had previously tested in two and three dimensions, and for a real molecule (1D63), which has a double-helix shape and is composed of hundreds of atoms.

  17. An accurate and efficient acoustic eigensolver based on a fast multipole BEM and a contour integral method

    NASA Astrophysics Data System (ADS)

    Zheng, Chang-Jun; Gao, Hai-Feng; Du, Lei; Chen, Hai-Bo; Zhang, Chuanzeng

    2016-01-01

    An accurate numerical solver is developed in this paper for eigenproblems governed by the Helmholtz equation and formulated through the boundary element method. A contour integral method is used to convert the nonlinear eigenproblem into an ordinary eigenproblem, so that eigenvalues can be extracted accurately by solving a set of standard boundary element systems of equations. In order to accelerate the solution procedure, the parameters affecting the accuracy and efficiency of the method are studied and two contour paths are compared. Moreover, a wideband fast multipole method is implemented with a block IDR(s) solver to reduce the overall solution cost of the boundary element systems of equations with multiple right-hand sides. The Burton-Miller formulation is employed to identify the fictitious eigenfrequencies of the interior acoustic problems with multiply connected domains. The actual effect of the Burton-Miller formulation on tackling the fictitious eigenfrequency problem is investigated and the optimal choice of the coupling parameter as α = i / k is confirmed through exterior sphere examples. Furthermore, the numerical eigenvalues obtained by the developed method are compared with the results obtained by the finite element method to show the accuracy and efficiency of the developed method.
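
    For illustration of the contour-integral idea, the sketch below implements a generic Beyn-type contour eigensolver for a nonlinear eigenproblem T(z)v = 0: the resolvent is integrated over a circular contour with the trapezoid rule, and a small reduced eigenproblem delivers the eigenvalues enclosed by the contour. The boundary element matrices, fast multipole acceleration, and Burton-Miller coupling of the paper are replaced here by a tiny analytic matrix function purely as a stand-in (Python):

        import numpy as np

        def contour_eigenvalues(T, center, radius, m=8, n_quad=64, tol=1e-8):
            """Beyn-type contour integral method for T(z) v = 0 inside a circular contour."""
            n = T(center).shape[0]
            V = np.random.default_rng(0).standard_normal((n, m))
            A0 = np.zeros((n, m), dtype=complex)
            A1 = np.zeros((n, m), dtype=complex)
            for k in range(n_quad):                     # trapezoid rule on the circle
                z = center + radius * np.exp(2j * np.pi * k / n_quad)
                dz = 1j * (z - center)                  # dz/dθ
                X = np.linalg.solve(T(z), V)
                A0 += X * dz
                A1 += z * X * dz
            A0 /= 1j * n_quad                           # ≈ (1/2πi) ∮ T(z)^{-1} V dz
            A1 /= 1j * n_quad                           # ≈ (1/2πi) ∮ z T(z)^{-1} V dz
            U, s, Wh = np.linalg.svd(A0, full_matrices=False)
            r = int(np.sum(s > tol * s[0]))             # number of eigenvalues inside
            B = U[:, :r].conj().T @ A1 @ Wh[:r].conj().T / s[:r]
            return np.linalg.eigvals(B)

        # Toy stand-in: T(z) = A - z I, so the exact eigenvalues are those of A.
        A = np.diag([1.0, 2.0, 5.0])
        vals = contour_eigenvalues(lambda z: A - z * np.eye(3), center=1.5, radius=1.0)
        print(np.sort_complex(vals))                    # ≈ [1, 2]; the eigenvalue 5 lies outside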

  18. Numerical parameter constraints for accurate PIC-DSMC simulation of breakdown from arc initiation to stable arcs

    NASA Astrophysics Data System (ADS)

    Moore, Christopher; Hopkins, Matthew; Moore, Stan; Boerner, Jeremiah; Cartwright, Keith

    2015-09-01

    Simulation of breakdown is important for understanding and designing a variety of applications, such as mitigating undesirable discharge events. Such simulations need to be accurate from early-time arc initiation through late-time stable arc behavior. Here we examine constraints on the timestep and mesh size required for arc simulations using the particle-in-cell (PIC) method with direct simulation Monte Carlo (DSMC) collisions. Accurate simulation of electron avalanche across a fixed voltage drop and constant neutral density (reduced field of 1000 Td) was found to require a timestep ~ 1/100 of the mean time between collisions and a mesh size ~ 1/25 the mean free path. These constraints are much smaller than the typical PIC-DSMC requirements for timestep and mesh size. Both constraints are related to the fact that charged particles are accelerated by the external field. Thus gradients in the electron energy distribution function can exist at scales smaller than the mean free path, and these must be resolved by the mesh size for accurate collision rates. Additionally, the timestep must be small enough that the particle energy change due to the fields is small, in order to capture gradients in the cross sections versus energy. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.

  19. Simple, fast and accurate eight points amplitude estimation method of sinusoidal signals for DSP based instrumentation

    NASA Astrophysics Data System (ADS)

    Vizireanu, D. N.; Halunga, S. V.

    2012-04-01

    A simple, fast and accurate amplitude estimation algorithm of sinusoidal signals for DSP based instrumentation is proposed. It is shown that eight samples, used in two steps, are sufficient. A practical analytical formula for amplitude estimation is obtained. Numerical results are presented. Simulations have been performed when the sampled signal is affected by white Gaussian noise and when the samples are quantized on a given number of bits.
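
    The specific eight-sample, two-step formula of the paper is not reproduced here; the sketch below only illustrates the general idea of estimating the amplitude of a sinusoid from a handful of samples when the sampling rate and signal frequency are known, using an in-phase/quadrature projection over an integer number of periods (Python, with made-up signal parameters):

        import numpy as np

        # Illustrative only: amplitude of A·sin(2πft + φ) from a few samples at a known
        # frequency, via an in-phase/quadrature projection over an integer number of periods.
        # This is NOT the eight-point formula derived in the paper.
        def amplitude_estimate(x, f, fs):
            n = np.arange(len(x))
            return 2.0 * np.abs(np.sum(x * np.exp(-2j * np.pi * f * n / fs))) / len(x)

        fs, f, A, phi = 8000.0, 1000.0, 1.7, 0.4
        n = np.arange(8)                                # eight samples = one full period here
        x = A * np.sin(2 * np.pi * f * n / fs + phi)
        x += 0.01 * np.random.default_rng(1).standard_normal(8)   # noise/quantization stand-in
        print(amplitude_estimate(x, f, fs))             # ≈ 1.7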

  20. Numerical performance of projection methods in finite element consolidation models

    NASA Astrophysics Data System (ADS)

    Gambolati, Giuseppe; Pini, Giorgio; Ferronato, Massimiliano

    2001-12-01

    Projection, or conjugate-gradient-like, methods are becoming increasingly popular for the efficient solution of large sparse sets of unsymmetric indefinite equations arising from the numerical integration of (initial) boundary value problems. One such problem is soil consolidation, coupling a flow and a structural model, typically solved by finite elements (FE) in space and a marching scheme in time (e.g. the Crank-Nicolson scheme). The attraction of a projection method stems from a number of factors, including the ease of implementation, the requirement of limited core memory and the low computational cost if a cheap and effective matrix preconditioner is available. In the present paper, biconjugate gradient stabilized (Bi-CGSTAB) is used to solve FE consolidation equations in 2-D and 3-D settings with variable time integration steps. Three different nodal orderings are selected along with the preconditioner ILUT based on incomplete triangular factorization and variable fill-in. The overall cost of the solver is made up of the preconditioning cost plus the cost to converge, which is in turn related to the number of iterations and the elementary operations required by each iteration. The results show that nodal ordering affects the performance of Bi-CGSTAB. For normally conditioned consolidation problems, Bi-CGSTAB with the best ILUT preconditioner may converge in a number of iterations up to two orders of magnitude smaller than the size of the FE model and proves to be an accurate, cost-effective and robust alternative to direct methods.
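
    The solver combination described above can be sketched in a few lines with SciPy's sparse tools: an incomplete LU factorization with a drop tolerance and limited fill-in (playing the role of ILUT) is wrapped as a preconditioner for Bi-CGSTAB. The matrix below is a generic unsymmetric sparse test system, not an FE consolidation matrix (Python):

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        # Generic unsymmetric sparse system standing in for an FE consolidation matrix.
        n = 5000
        A = sp.diags([-1.0, 4.0, -2.0], [-1, 0, 1], shape=(n, n), format="csc")
        b = np.ones(n)

        # Incomplete LU with a drop tolerance and limited fill-in (ILUT-like preconditioner).
        ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
        M = spla.LinearOperator((n, n), ilu.solve)

        x, info = spla.bicgstab(A, b, M=M)
        print(info, np.linalg.norm(A @ x - b))          # info == 0 means convergence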

  1. A numerical method for solving systems of linear ordinary differential equations with rapidly oscillating solutions

    NASA Technical Reports Server (NTRS)

    Bernstein, Ira B.; Brookshaw, Leigh; Fox, Peter A.

    1992-01-01

    The present numerical method for the accurate and efficient solution of systems of linear ordinary differential equations proceeds by numerically developing a set of basis solutions characterized by slowly varying dependent variables. The solutions thus obtained are shown to have a computational overhead largely independent of the small size of the scale length which characterizes the solutions; in many cases, the technique obviates series solutions near singular points, and its known sources of error can be easily controlled without a substantial increase in computational time.

  2. Fast, accurate and easy-to-pipeline methods for amplicon sequence processing

    NASA Astrophysics Data System (ADS)

    Antonielli, Livio; Sessitsch, Angela

    2016-04-01

    Next generation sequencing (NGS) technologies have for years been established as an essential resource in microbiology. While on the one hand metagenomic studies can benefit from the continuously increasing throughput of the Illumina (Solexa) technology, on the other hand the spread of third generation sequencing technologies (PacBio, Oxford Nanopore) is taking whole genome sequencing beyond the assembly of fragmented draft genomes, making it now possible to finish bacterial genomes even without short read correction. Besides (meta)genomic analysis, next-gen amplicon sequencing is still fundamental for microbial studies. Amplicon sequencing of the 16S rRNA gene and ITS (Internal Transcribed Spacer) remains a well-established and widespread method for a multitude of purposes concerning the identification and comparison of archaeal/bacterial (16S rRNA gene) and fungal (ITS) communities occurring in diverse environments. Numerous pipelines have been developed to process NGS-derived amplicon sequences, among which Mothur, QIIME and USEARCH are the best-known and most cited ones. The entire process from initial raw sequence data through read error correction, paired-end read assembly, primer stripping, quality filtering, clustering, OTU taxonomic classification and BIOM table rarefaction, as well as alternative "normalization" methods, will be addressed. An effective and accurate strategy will be presented using state-of-the-art bioinformatic tools, and the example of a straightforward one-script pipeline for 16S rRNA gene or ITS MiSeq amplicon sequencing will be provided. Finally, instructions on how to automatically retrieve nucleotide sequences from NCBI and therefore apply the pipeline to targets other than the 16S rRNA gene (Greengenes, SILVA) and ITS (UNITE) will be discussed.

  3. High-order accurate monotone difference schemes for solving gasdynamic problems by Godunov's method with antidiffusion

    NASA Astrophysics Data System (ADS)

    Moiseev, N. Ya.

    2011-04-01

    An approach to the construction of high-order accurate monotone difference schemes for solving gasdynamic problems by Godunov's method with antidiffusion is proposed. Godunov's theorem on monotone schemes is used to construct a new antidiffusion flux limiter in high-order accurate difference schemes as applied to linear advection equations with constant coefficients. The efficiency of the approach is demonstrated by solving linear advection equations with constant coefficients and one-dimensional gasdynamic equations.
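
    The flux-limiting idea can be illustrated for the constant-coefficient advection equation u_t + a u_x = 0: a limited antidiffusive correction is added to the first-order upwind (Godunov) flux so that the scheme approaches Lax-Wendroff accuracy in smooth regions while remaining monotone. The sketch below uses the classical minmod limiter, not the new limiter constructed in the paper (Python):

        import numpy as np

        # Linear advection u_t + a u_x = 0, a > 0, periodic grid: first-order upwind
        # (Godunov) flux plus a minmod-limited antidiffusive correction.
        def minmod(p, q):
            return np.where(p * q > 0.0, np.sign(p) * np.minimum(np.abs(p), np.abs(q)), 0.0)

        nx, a, cfl = 400, 1.0, 0.5
        dx = 1.0 / nx
        dt = cfl * dx / a
        x = (np.arange(nx) + 0.5) * dx
        u = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)  # square pulse

        for _ in range(int(0.25 / dt)):
            du = np.roll(u, -1) - u                     # u_{i+1} - u_i
            corr = 0.5 * a * (1.0 - cfl) * minmod(du, np.roll(du, 1))
            flux = a * u + corr                         # numerical flux at interface i+1/2
            u = u - dt / dx * (flux - np.roll(flux, 1))
        print(u.min(), u.max())                         # remains within [0, 1] (no overshoots)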

  4. Finite element methods in numerical relativity.

    NASA Astrophysics Data System (ADS)

    Mann, P. J.

    The finite element method is very successful in Newtonian fluid simulations, and can be extended to relativistic fluid flows. This paper describes the general method, and then outlines some preliminary results for spherically symmetric geometries. The mixed finite element - finite difference scheme is introduced, and used for the description of spherically symmetric collapse. Baker's (Newtonian) shock modelling method and Miller's moving finite element method are also mentioned. Collapse in double-null coordinates requires non-constant time slicing, so the full finite element method in space and time is described.

  5. Method for accurate growth of vertical-cavity surface-emitting lasers

    DOEpatents

    Chalmers, S.A.; Killeen, K.P.; Lear, K.L.

    1995-03-14

    The authors report a method for accurate growth of vertical-cavity surface-emitting lasers (VCSELs). The method uses a single reflectivity spectrum measurement to determine the structure of the partially completed VCSEL at a critical point of growth. This information, along with the extracted growth rates, allows imprecisions in growth parameters to be compensated for during growth of the remaining structure, which can then be completed with very accurate critical dimensions. Using this method, they can now routinely grow lasing VCSELs with Fabry-Perot cavity resonance wavelengths controlled to within 0.5%. 4 figs.

  6. Method for accurate growth of vertical-cavity surface-emitting lasers

    DOEpatents

    Chalmers, Scott A.; Killeen, Kevin P.; Lear, Kevin L.

    1995-01-01

    We report a method for accurate growth of vertical-cavity surface-emitting lasers (VCSELs). The method uses a single reflectivity spectrum measurement to determine the structure of the partially completed VCSEL at a critical point of growth. This information, along with the extracted growth rates, allows imprecisions in growth parameters to be compensated for during growth of the remaining structure, which can then be completed with very accurate critical dimensions. Using this method, we can now routinely grow lasing VCSELs with Fabry-Perot cavity resonance wavelengths controlled to within 0.5%.

  7. Numerical matrix method for quantum periodic potentials

    NASA Astrophysics Data System (ADS)

    Le Vot, Felipe; Meléndez, Juan J.; Yuste, Santos B.

    2016-06-01

    A numerical matrix methodology is applied to quantum problems with periodic potentials. The procedure consists essentially in replacing the true potential by an alternative one, restricted by an infinite square well, and in expressing the wave functions as finite superpositions of eigenfunctions of the infinite well. A matrix eigenvalue equation then yields the energy levels of the periodic potential within an acceptable accuracy. The methodology has been successfully used to deal with problems based on the well-known Kronig-Penney (KP) model. Besides the original model, these problems are a dimerized KP solid, a KP solid containing a surface, and a KP solid under an external field. A short list of additional problems that can be solved with this procedure is presented.
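
    The recipe described above can be sketched compactly: confine a Kronig-Penney-like periodic potential to a large infinite square well, expand in the well's sine eigenfunctions, build the Hamiltonian matrix by quadrature, and diagonalize. The parameters and units (ħ = m = 1) below are illustrative, not those of the paper (Python):

        import numpy as np

        # Infinite well of width L on [0, L]; basis φ_k(x) = sqrt(2/L) sin(kπx/L), k = 1..N.
        # H_{kl} = (kπ/L)²/2 δ_{kl} + ∫ φ_k V φ_l dx (ħ = m = 1), integrals by simple quadrature.
        L, N, M = 50.0, 200, 2000
        x = np.linspace(0.0, L, M)
        V = np.where((x % 5.0) < 1.0, 8.0, 0.0)         # periodic array of square barriers

        k = np.arange(1, N + 1)
        phi = np.sqrt(2.0 / L) * np.sin(np.outer(k, x) * np.pi / L)
        T = np.diag((k * np.pi / L) ** 2 / 2.0)         # kinetic-energy diagonal
        Vmat = phi @ (V[None, :] * phi).T * (x[1] - x[0])
        E = np.linalg.eigvalsh(T + Vmat)
        print(E[:12])                                   # low levels cluster into band-like groups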

  8. Method for numerical simulations of metastable states

    SciTech Connect

    Heller, U.M.; Seiberg, N.

    1983-06-15

    We present a numerical simulation of metastable states near a first-order phase transition, using the example of a U(1) lattice gauge theory with a generalized action. In order to make measurements in these states possible, their decay has to be prevented. We achieve this by using a microcanonical simulation for a finite system. We then obtain the coupling constant (inverse temperature) as a function of the action density. It turns out to be nonmonotonic and hence not uniquely invertible. From it we derive the effective potential for the action density. This effective potential is not always convex, a property that seems to be in contradiction with the standard lore about its convexity. This apparent "paradox" is resolved in a discussion of different definitions of the effective potential.

  9. Interpolation Method Needed for Numerical Uncertainty

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.; Ilie, Marcel; Schallhorn, Paul A.

    2014-01-01

    Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. One method to approximate the errors in CFD is Richardson extrapolation, which is based on progressive grid refinement. To estimate the errors, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson extrapolation or another uncertainty method to approximate errors.
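
    The calculation behind Richardson extrapolation for a grid-convergence study can be written in a few lines: from a quantity computed on three systematically refined grids one estimates the observed order of convergence and an extrapolated, nearly grid-independent value. The sketch below assumes a constant refinement ratio and monotone convergence; the interpolation step that the paper investigates (transferring solutions between grids) is not addressed here (Python):

        import numpy as np

        # Richardson extrapolation from three grid levels with constant refinement ratio r.
        # f1: fine-grid, f2: medium-grid, f3: coarse-grid value of some scalar quantity.
        def richardson(f1, f2, f3, r):
            p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)   # observed order of convergence
            f_exact = f1 + (f1 - f2) / (r**p - 1.0)          # extrapolated value
            return p, f_exact, abs(f_exact - f1)             # order, estimate, fine-grid error

        # Example: a quantity behaving like f(h) = 1 + 0.3 h², sampled on grids h, 2h, 4h.
        h = 0.01
        print(richardson(1 + 0.3 * h**2, 1 + 0.3 * (2 * h)**2, 1 + 0.3 * (4 * h)**2, r=2.0))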

  10. Numerical methods in Markov chain modeling

    NASA Technical Reports Server (NTRS)

    Philippe, Bernard; Saad, Youcef; Stewart, William J.

    1989-01-01

    Several methods for computing stationary probability distributions of Markov chains are described and compared. The main linear algebra problem consists of computing an eigenvector of a sparse, usually nonsymmetric, matrix associated with a known eigenvalue. It can also be cast as a problem of solving a homogeneous singular linear system. Several methods based on combinations of Krylov subspace techniques are presented. The performance of these methods on some realistic problems is compared.
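
    The core linear-algebra problem named above can be illustrated with a tiny dense example: the stationary distribution is a left eigenvector of the transition matrix for the known eigenvalue 1, or equivalently the limit of repeated multiplication by the transition matrix. A Krylov-subspace code would replace the dense eigensolver below with, for example, Arnoldi iterations on the sparse matrix; the chain itself is made up (Python):

        import numpy as np

        # Stationary distribution π of a Markov chain: π P = π with π ≥ 0 and Σπ = 1,
        # i.e. a left eigenvector of P (an eigenvector of P^T) for the known eigenvalue 1.
        P = np.array([[0.9, 0.1, 0.0],
                      [0.2, 0.7, 0.1],
                      [0.1, 0.0, 0.9]])

        w, V = np.linalg.eig(P.T)                       # dense route, fine for tiny chains
        pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
        pi /= pi.sum()

        q = np.full(3, 1.0 / 3.0)                       # iterative route (power iteration),
        for _ in range(5000):                           # the kind of kernel a sparse code scales up
            q = q @ P
        print(pi, q)                                    # both ≈ the stationary distribution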

  11. Status and future prospects of using numerical methods to study complex flows at High Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Maccormack, R. W.

    1978-01-01

    The calculation of flow fields past aircraft configurations at flight Reynolds numbers is considered. Progress in devising accurate and efficient numerical methods, in understanding and modeling the physics of turbulence, and in developing reliable and powerful computer hardware is discussed. Emphasis is placed on efficient solutions to the Navier-Stokes equations.

  12. Numerical Methods for Two-Dimensional Stem Cell Tissue Growth.

    PubMed

    Ovadia, Jeremy; Nie, Qing

    2014-01-01

    Growth of developing and regenerative biological tissues of different cell types is usually driven by stem cells and their local environment. Here, we present a computational framework for continuum tissue growth models consisting of stem cells, cell lineages, and diffusive molecules that regulate proliferation and differentiation through feedback. To deal with the moving boundaries of the models in both open and closed geometries (through polar coordinates) in two dimensions, we transform the dynamic domains and governing equations to fixed domains, followed by solving for the transformation functions to track the interface explicitly. Clustering grid points in local regions for better efficiency and accuracy can be achieved by appropriate choices of the transformation. The equations resulting from the incompressibility of the tissue are approximated by high-order finite difference schemes and are solved using multigrid algorithms. The numerical tests demonstrate an overall spatiotemporal second-order accuracy of the methods and their capability in capturing large deformations of the tissue boundaries. The methods are applied to two biological systems: stratified epithelia, for studying the effects of two different types of stem cell niches, and the scaling of a morphogen gradient with the size of the Drosophila imaginal wing disc during growth. Direct simulations of both systems suggest that the computational framework is robust and accurate, and that it can incorporate various biological processes critical to stem cell dynamics and tissue growth. PMID:24415847

  13. Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy

    NASA Astrophysics Data System (ADS)

    Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui

    2014-06-01

    The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical accurate radiotherapy is hindered by its slow convergence and long computation time. In MC dose calculation research, the main task is to speed up computation while maintaining high precision. The purpose of this paper is to enhance the calculation speed of the MC method for electron-photon transport with high precision and ultimately to reduce the accurate radiotherapy dose calculation time on an ordinary computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by the FDS Team, a fast MC method for electron-photon coupled transport was presented with focus on two aspects: first, by simplifying and optimizing the physical model of electron-photon transport, the calculation speed was increased with a slight reduction of calculation accuracy; second, a variety of MC acceleration methods were applied, for example, making use of information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying proper variance reduction techniques to accelerate the MC convergence rate. The fast MC method was tested on many simple physical models and clinical cases, including nasopharyngeal carcinoma, peripheral lung tumor, and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical accurate radiotherapy dose verification. Later, the method will be applied to the Accurate/Advanced Radiation Therapy System (ARTS) as an MC dose verification module.

  14. A Time-Accurate Upwind Unstructured Finite Volume Method for Compressible Flow with Cure of Pathological Behaviors

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Jorgenson, Philip C. E.

    2007-01-01

    A time-accurate, upwind, finite volume method for computing compressible flows on unstructured grids is presented. The method is second order accurate in space and time and yields high resolution in the presence of discontinuities. For efficiency, the Roe approximate Riemann solver with an entropy correction is employed. In the basic Euler/Navier-Stokes scheme, many concepts of high order upwind schemes are adopted: the surface flux integrals are carefully treated, a Cauchy-Kowalewski time-stepping scheme is used in the time-marching stage, and a multidimensional limiter is applied in the reconstruction stage. However even with these up-to-date improvements, the basic upwind scheme is still plagued by the so-called "pathological behaviors," e.g., the carbuncle phenomenon, the expansion shock, etc. A solution to these limitations is presented which uses a very simple dissipation model while still preserving second order accuracy. This scheme is referred to as the enhanced time-accurate upwind (ETAU) scheme in this paper. The unstructured grid capability renders flexibility for use in complex geometry; and the present ETAU Euler/Navier-Stokes scheme is capable of handling a broad spectrum of flow regimes from high supersonic to subsonic at very low Mach number, appropriate for both CFD (computational fluid dynamics) and CAA (computational aeroacoustics). Numerous examples are included to demonstrate the robustness of the methods.

  15. Modelling asteroid brightness variations. I - Numerical methods

    NASA Technical Reports Server (NTRS)

    Karttunen, H.

    1989-01-01

    A method for generating lightcurves of asteroid models is presented. The effects of the shape of the asteroid and the scattering law of a surface element are distinctly separable, being described by chosen functions that can easily be changed. The shape is specified by means of two functions that yield the length of the radius vector and the normal vector of the surface at a given point. The general shape must be convex, but spherical concavities producing macroscopic shadowing can also be modeled.

  16. An accurate method of extracting fat droplets in liver images for quantitative evaluation

    NASA Astrophysics Data System (ADS)

    Ishikawa, Masahiro; Kobayashi, Naoki; Komagata, Hideki; Shinoda, Kazuma; Yamaguchi, Masahiro; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie

    2015-03-01

    Steatosis in liver pathological tissue images is a promising indicator of nonalcoholic fatty liver disease (NAFLD) and the possible risk of hepatocellular carcinoma (HCC). The resulting values are also important for ensuring the automatic and accurate classification of HCC images, because the presence of many fat droplets is likely to create errors in quantifying the morphological features used in the process. In this study we propose a method that can automatically detect and exclude regions with many fat droplets by using feature values of color, shape, and the arrangement of cell nuclei. We implement the method and confirm that it can accurately detect fat droplets and quantify the fat droplet ratio of actual images. This investigation also clarifies the effective characteristics that contribute to accurate detection.

  17. Numerical methods for determining interstitial oxygen in silicon

    SciTech Connect

    Stevenson, J.O.; Medernach, J.W.

    1995-01-01

    The interstitial oxygen (O{sub i}) concentration in Czochralski silicon and the subsequent SiO{sub x} precipitation are important parameters for integrated circuit fabrication. Uncontrolled SiO{sub x} precipitation during processing can create detrimental mechanical and electrical effects that contribute to poor performance. An inability to consistently and accurately measure the initial O{sub i} concentration in heavily doped silicon has led to contradictory results regarding the effects of dopant type and concentration on SiO{sub x} precipitation. The authors have developed a software package for reliably determining and comparing O{sub i} in heavily doped silicon. The SiFTIR{copyright} code implements three independent oxygen analysis methods in a single integrated package. Routine oxygen measurements are desirable over a wide range of silicon resistivities, but there has been confusion concerning which of the three numerical methods is most suitable for the low resistivity portion of the continuum. A major strength of the software is an ability to rapidly produce results for all three methods using only a single Fourier Transform Infrared Spectroscopy (FTIR) spectrum as input. This ability to perform three analyses on a single data set allows a detailed comparison of the three methods across the entire range of resistivities in question. Integrated circuit manufacturers could use the enabling technology provided by SiFTIR{copyright} to monitor O{sub i} content. Early detection of O{sub i} using this diagnostic could be beneficial in controlling SiO{sub x} precipitation during integrated circuit processing.

  18. A numerical method for power plant simulations

    SciTech Connect

    Carcasci, C.; Facchini, B.

    1996-03-01

    This paper describes a highly flexible computerized method of calculating operating data in a power cycle. The computerized method presented here permits the study of steam, gas and combined plants. Its flexibility is not restricted by any defined cycle scheme. A power plant consists of simple elements (turbine, compressor, combustor chamber, pump, etc.). Each power plant component is represented by its typical equations relating to fundamental mechanical and thermodynamic laws, so a power plant system is represented by algebraic equations, which are the typical equations of components, continuity equations, and data concerning plant conditions. This equation system is not linear, but can be reduced to a linear equation system with variable coefficients. The solution is simultaneous for each component and it is determined by an iterative process. An example of a simple gas turbine cycle demonstrates the applied technique. This paper also presents the user interface based on MS-Windows. The input data, the results, and any characteristic parameters of a complex cycle scheme are also shown.

  19. The U.S. Department of Agriculture Automated Multiple-Pass Method accurately assesses sodium intakes

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Accurate and practical methods to monitor sodium intake of the U.S. population are critical given current sodium reduction strategies. While the gold standard for estimating sodium intake is the 24 hour urine collection, few studies have used this biomarker to evaluate the accuracy of a dietary ins...

  20. Numerical methods for analyzing electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Lee, S. W.; Lo, Y. T.; Chuang, S. L.; Lee, C. S.

    1985-01-01

    Attenuation properties of the normal modes in an overmoded waveguide coated with a lossy material were analyzed. It is found that the low-order modes can be significantly attenuated, even with a thin layer of coating, if the coating material is not too lossy. A thinner layer of coating is required for large attenuation of the low-order modes if the coating material is magnetic rather than dielectric. The Radar Cross Section (RCS) from an uncoated circular guide terminated by a perfect electric conductor was calculated and compared with available experimental data. It is confirmed that the interior irradiation contributes to the RCS. The equivalent-current method based on the geometrical theory of diffraction (GTD) was chosen for the calculation of the contribution from the rim diffraction. The RCS reduction from a coated circular guide terminated by a PEC and planned schemes for the experiments are included. The waveguide coated with a lossy magnetic material is suggested as a substitute for the corrugated waveguide.

  1. Accurate determination of specific heat at high temperatures using the flash diffusivity method

    NASA Technical Reports Server (NTRS)

    Vandersande, J. W.; Zoltan, A.; Wood, C.

    1989-01-01

    The flash diffusivity method of Parker et al. (1961) was used to accurately measure the specific heat of test samples simultaneously with the thermal diffusivity, thus obtaining the thermal conductivity of these materials directly. The accuracy of data obtained on two types of materials (n-type silicon-germanium alloys and niobium) was ±3 percent. It is shown that the method is applicable up to at least 1300 K.
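
    The standard flash-method relations can be written down compactly: the thermal diffusivity follows from the half-rise time of the rear-face temperature, the specific heat from the absorbed pulse energy and the maximum temperature rise, and the conductivity is their product with density. Every number in the sketch below is invented for illustration; in practice the absorbed energy is usually determined by comparison with a reference sample (Python):

        import math

        # Flash-method relations (Parker et al., 1961); all numbers below are illustrative only.
        L      = 2.0e-3    # sample thickness, m
        t_half = 0.050     # time for the rear face to reach half its maximum rise, s
        Q      = 3200.0    # absorbed pulse energy per unit area, J/m^2
        m_area = 4.6       # sample mass per unit area (density x thickness), kg/m^2
        dT_max = 1.0       # maximum rear-face temperature rise, K
        rho    = 2300.0    # density, kg/m^3

        alpha = 1.38 * L**2 / (math.pi**2 * t_half)     # thermal diffusivity, m^2/s
        cp    = Q / (m_area * dT_max)                   # specific heat, J/(kg K)
        kappa = alpha * rho * cp                        # thermal conductivity, W/(m K)
        print(alpha, cp, kappa)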

  2. An Effective Method to Accurately Calculate the Phase Space Factors for β⁻β⁻ Decay

    DOE PAGES Beta

    Neacsu, Andrei; Horoi, Mihai

    2016-01-01

    Accurate calculations of the electron phase space factors are necessary for reliable predictions of double-beta decay rates and for the analysis of the associated electron angular and energy distributions. We present an effective method to calculate these phase space factors that takes into account the distorted Coulomb field of the daughter nucleus, yet it allows one to easily calculate the phase space factors with good accuracy relative to the most exact methods available in the recent literature.

  3. A second-order accurate kinetic-theory-based method for inviscid compressible flows

    NASA Technical Reports Server (NTRS)

    Deshpande, Suresh M.

    1986-01-01

    An upwind method for the numerical solution of the Euler equations is presented. This method, called the kinetic numerical method (KNM), is based on the fact that the Euler equations are moments of the Boltzmann equation of the kinetic theory of gases when the distribution function is Maxwellian. The KNM consists of two phases, the convection phase and the collision phase. The method is unconditionally stable and explicit. It is highly vectorizable and can be easily made total variation diminishing for the distribution function by a suitable choice of the interpolation strategy. The method is applied to a one-dimensional shock-propagation problem and to a two-dimensional shock-reflection problem.

  4. Stable and accurate hybrid finite volume methods based on pure convexity arguments for hyperbolic systems of conservation law

    NASA Astrophysics Data System (ADS)

    De Vuyst, Florian

    2004-01-01

    This exploratory work presents first results of a novel approach for the numerical approximation of solutions of hyperbolic systems of conservation laws. The objective is to define stable and "reasonably" accurate numerical schemes while being free from any upwind process and from any computation of derivatives or mean Jacobian matrices. That means that we only want to perform flux evaluations. This would be useful for "complicated" systems like those of two-phase models, where solutions of Riemann problems are hard, or even impossible, to compute. For Riemann or Roe-like solvers, each fluid model needs its own computation of the Jacobian matrix of the flux, and the hyperbolicity property, which can be conditional for some of these models, means that the matrices are not R-diagonalizable everywhere in the admissible state space. In this paper, we instead propose numerical schemes where stability is obtained using convexity considerations. A certain rate of accuracy is also expected. For that, we propose to build numerical hybrid fluxes that are convex combinations of the second-order Lax-Wendroff scheme flux and the first-order modified Lax-Friedrichs scheme flux, with an "optimal" combination rate that ensures both minimal numerical dissipation and good accuracy. The resulting scheme is a central-scheme-like method. We will also need and propose a definition of local dissipation by convexity for hyperbolic or elliptic-hyperbolic systems. This convexity argument allows us to overcome the difficulty of the nonexistence of classical entropy-flux pairs for certain systems. We emphasize the systematic nature of the method, which can be quickly implemented or adapted to any kind of system, with general analytical or data-tabulated equations of state. The numerical results presented in the paper are not superior to many existing state-of-the-art numerical methods for conservation laws, such as ENO, MUSCL or the central schemes of Tadmor and coworkers. The interest is rather
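
    The building blocks named above are easy to write down for a scalar conservation law. The sketch below evaluates Lax-Wendroff and Lax-Friedrichs-type interface fluxes for Burgers' equation and blends them with a fixed convex weight θ ∈ [0, 1]; the paper's actual contribution, choosing the blend locally from a convexity/dissipation criterion, is not reproduced here (Python):

        import numpy as np

        # Burgers' equation u_t + (u²/2)_x = 0 on a periodic grid. Hybrid interface flux
        # F = θ·F_LW + (1-θ)·F_LxF with a FIXED blend θ; the paper instead chooses the
        # blend locally from a convexity/dissipation criterion.
        def hybrid_step(u, dx, dt, theta):
            f = 0.5 * u**2
            uR, fR = np.roll(u, -1), np.roll(f, -1)
            a = 0.5 * (u + uR)                                        # local wave speed
            F_lw = 0.5 * (f + fR) - 0.5 * dt / dx * a * (fR - f)      # Lax-Wendroff flux
            F_lxf = 0.5 * (f + fR) - 0.5 * dx / dt * (uR - u)         # Lax-Friedrichs-type flux
            F = theta * F_lw + (1.0 - theta) * F_lxf
            return u - dt / dx * (F - np.roll(F, 1))

        nx = 400
        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        u = 1.5 + np.sin(2 * np.pi * x)
        dx = 1.0 / nx
        for _ in range(200):
            dt = 0.4 * dx / np.max(np.abs(u))
            u = hybrid_step(u, dx, dt, theta=0.8)
        print(u.min(), u.max())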

  5. Highly Accurate Beam Torsion Solutions Using the p-Version Finite Element Method

    NASA Technical Reports Server (NTRS)

    Smith, James P.

    1996-01-01

    A new treatment of the classical beam torsion boundary value problem is applied. Using the p-version finite element method with shape functions based on Legendre polynomials, torsion solutions for generic cross-sections comprised of isotropic materials are developed. Element shape functions for quadrilateral and triangular elements are discussed, and numerical examples are provided.

  6. A calibration-independent method for accurate complex permittivity determination of liquid materials

    SciTech Connect

    Hasar, U. C.

    2008-08-15

    This note presents a calibration-independent method for accurate complex permittivity determination of liquid materials. There are two main advantages of the proposed method over those in the literature, which require measurements of two cells with different lengths loaded by the same liquid material. First, it eliminates any inhomogeneity or impurity present in the second sample and decreases the uncertainty in sample thickness. Second, it removes the undesired impacts of measurement plane deterioration on measurements of liquid materials. For validation of the proposed method, we measure the complex permittivity of distilled water and compare its extracted permittivity with the theoretical datum obtained from the Debye equation.

  7. Formation of accurate 1-nm gaps using the electromigration method during metal deposition

    NASA Astrophysics Data System (ADS)

    Naitoh, Yasuhisa; Wei, Qingshuo; Mukaida, Masakazu; Ishida, Takao

    2016-03-01

    We investigate the origin of fabricated nanogap width variations using the electromigration method during metal deposition. This method also facilitates improved control over the nanogap width. A large suppression in the variation is achieved by sample annealing at 373 K during the application of bias voltages for electromigration, which indicates that the variation is caused by structural changes. This electromigration method during metal deposition for the fabrication of an accurate 1-nm gap electrode is useful for single-molecule-sized electronics. Furthermore, it opens the door for future research on integrated sub-1-nm-sized nanogap devices.

  8. A fast and accurate method to predict 2D and 3D aerodynamic boundary layer flows

    NASA Astrophysics Data System (ADS)

    Bijleveld, H. A.; Veldman, A. E. P.

    2014-12-01

    A quasi-simultaneous interaction method is applied to predict 2D and 3D aerodynamic flows. This method is suitable for offshore wind turbine design software, as it is a very accurate and computationally reasonably cheap method. This study shows the results for a NACA 0012 airfoil. The two applied solvers converge to the experimental values when the grid is refined. We also show that in separation the eigenvalues remain positive, thus avoiding the Goldstein singularity at separation. In 3D we show a flow over a dent in which separation occurs. A rotating flat plate is used to show the applicability of the method to rotating flows. The demonstrated capabilities indicate that the quasi-simultaneous interaction method is suitable for design methods for offshore wind turbine blades.

  9. A Novel Numerical Method for Fuzzy Boundary Value Problems

    NASA Astrophysics Data System (ADS)

    Can, E.; Bayrak, M. A.; Hicdurmaz

    2016-05-01

    In the present paper, a new numerical method is proposed for solving fuzzy differential equations, which are utilized for modeling problems in science and engineering. The fuzzy approach is selected due to its important applications in processing uncertainty or subjective information in mathematical models of physical problems. A second-order fuzzy linear boundary value problem is considered in particular due to its important applications in physics. Moreover, numerical experiments are presented to show the effectiveness of the proposed numerical method on specific physical problems such as heat conduction in an infinite plate and a fin.

  10. Introducing GAMER: A fast and accurate method for ray-tracing galaxies using procedural noise

    SciTech Connect

    Groeneboom, N. E.; Dahle, H.

    2014-03-10

    We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized in different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than using analytical expressions alone. The method is fully parallelized and creates accurate high- and low-resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to implement into an existing code.

  11. Accurate determination of relative metatarsal protrusion with a small intermetatarsal angle: a novel simplified method.

    PubMed

    Osher, Lawrence; Blazer, Marie Mantini; Buck, Stacie; Biernacki, Tomasz

    2014-01-01

    Several published studies have explained in detail how to measure relative metatarsal protrusion on the plain film anteroposterior pedal radiograph. These studies have demonstrated the utility of relative metatarsal protrusion measurement in that it correlates with distal forefoot deformity or pathologic features. The method currently preferred by practitioners in podiatric medicine and surgery often presents one with the daunting challenge of obtaining an accurate measurement when the intermetatarsal 1-2 angle is small. The present study illustrates a novel mathematical solution to this problem that is simple to master, relatively quick to perform, and yields accurate results. Our method was tested and proven by 4 trained observers with varying degrees of clinical skill who independently measured the same 10 radiographs. PMID:24933656

  12. An accurate and practical method for inference of weak gravitational lensing from galaxy images

    NASA Astrophysics Data System (ADS)

    Bernstein, Gary M.; Armstrong, Robert; Krawiec, Christina; March, Marisa C.

    2016-07-01

    We demonstrate highly accurate recovery of weak gravitational lensing shear using an implementation of the Bayesian Fourier Domain (BFD) method proposed by Bernstein & Armstrong, extended to correct for selection biases. The BFD formalism is rigorously correct for Nyquist-sampled, background-limited, uncrowded images of background galaxies. BFD does not assign shapes to galaxies, instead compressing the pixel data D into a vector of moments M, such that we have an analytic expression for the probability P(M|g) of obtaining the observations with gravitational lensing distortion g along the line of sight. We implement an algorithm for conducting BFD's integrations over the population of unlensed source galaxies which measures ≈10 galaxies s⁻¹ core⁻¹ with good scaling properties. Initial tests of this code on ≈10⁹ simulated lensed galaxy images recover the simulated shear to a fractional accuracy of m = (2.1 ± 0.4) × 10⁻³, substantially more accurate than has been demonstrated previously for any generally applicable method. Deep sky exposures generate a sufficiently accurate approximation to the noiseless, unlensed galaxy population distribution assumed as input to BFD. Potential extensions of the method include simultaneous measurement of magnification and shear; multiple-exposure, multiband observations; and joint inference of photometric redshifts and lensing tomography.

  13. An accurate and practical method for inference of weak gravitational lensing from galaxy images

    NASA Astrophysics Data System (ADS)

    Bernstein, Gary M.; Armstrong, Robert; Krawiec, Christina; March, Marisa C.

    2016-04-01

    We demonstrate highly accurate recovery of weak gravitational lensing shear using an implementation of the Bayesian Fourier Domain (BFD) method proposed by Bernstein & Armstrong (2014, BA14), extended to correct for selection biases. The BFD formalism is rigorously correct for Nyquist-sampled, background-limited, uncrowded images of background galaxies. BFD does not assign shapes to galaxies, instead compressing the pixel data D into a vector of moments M, such that we have an analytic expression for the probability P(M|g) of obtaining the observations with gravitational lensing distortion g along the line of sight. We implement an algorithm for conducting BFD's integrations over the population of unlensed source galaxies which measures ≈10 galaxies/second/core with good scaling properties. Initial tests of this code on ≈10⁹ simulated lensed galaxy images recover the simulated shear to a fractional accuracy of m = (2.1 ± 0.4) × 10⁻³, substantially more accurate than has been demonstrated previously for any generally applicable method. Deep sky exposures generate a sufficiently accurate approximation to the noiseless, unlensed galaxy population distribution assumed as input to BFD. Potential extensions of the method include simultaneous measurement of magnification and shear; multiple-exposure, multi-band observations; and joint inference of photometric redshifts and lensing tomography.

  14. Compensation method for obtaining accurate, sub-micrometer displacement measurements of immersed specimens using electronic speckle interferometry

    PubMed Central

    Fazio, Massimo A.; Bruno, Luigi; Reynaud, Juan F.; Poggialini, Andrea; Downs, J. Crawford

    2012-01-01

    We proposed and validated a compensation method that accounts for the optical distortion inherent in measuring displacements on specimens immersed in aqueous solution. A spherically-shaped rubber specimen was mounted and pressurized on a custom apparatus, with the resulting surface displacements recorded using electronic speckle pattern interferometry (ESPI). Point-to-point light direction computation is achieved by a ray-tracing strategy coupled with customized B-spline-based analytical representation of the specimen shape. The compensation method reduced the mean magnitude of the displacement error induced by the optical distortion from 35% to 3%, and ESPI displacement measurement repeatability showed a mean variance of 16 nm at the 95% confidence level for immersed specimens. The ESPI interferometer and numerical data analysis procedure presented herein provide reliable, accurate, and repeatable measurement of sub-micrometer deformations obtained from pressurization tests of spherically-shaped specimens immersed in aqueous salt solution. This method can be used to quantify small deformations in biological tissue samples under load, while maintaining the hydration necessary to ensure accurate material property assessment. PMID:22435090

  15. Asymptotic-induced numerical methods for conservation laws

    NASA Technical Reports Server (NTRS)

    Garbey, Marc; Scroggs, Jeffrey S.

    1990-01-01

    Asymptotic-induced methods are presented for the numerical solution of hyperbolic conservation laws with or without viscosity. The methods consist of multiple stages. The first stage is to obtain a first approximation by using a first-order method, such as the Godunov scheme. Subsequent stages of the method involve solving internal-layer problems identified by using techniques derived via asymptotics. Finally, a residual correction increases the accuracy of the scheme. The method is derived and justified with singular perturbation techniques.

  16. Advanced numerical methods for three dimensional two-phase flow calculations

    SciTech Connect

    Toumi, I.; Caruge, D.

    1997-07-01

    This paper is devoted to new numerical methods developed for both one and three dimensional two-phase flow calculations. These methods are finite volume numerical methods and are based on the use of approximate Riemann solver concepts to define convective fluxes versus mean cell quantities. The first part of the paper presents the numerical method for a one dimensional hyperbolic two-fluid model including differential terms such as added mass and interface pressure. This numerical solution scheme makes use of the Riemann problem solution to define backward and forward differencing to approximate spatial derivatives. The construction of this approximate Riemann solver uses an extension of Roe's method that has been successfully used to solve gas dynamic equations. As long as the two-fluid model is hyperbolic, this numerical method seems very efficient for the numerical solution of two-phase flow problems. The scheme was applied both to shock tube problems and to standard tests for two-fluid computer codes. The second part describes the numerical method in the three dimensional case. The authors also discuss some improvements performed to obtain a fully implicit solution method that provides fast running steady state calculations. Such a scheme is now implemented in a thermal-hydraulic computer code devoted to 3-D steady-state and transient computations. Some results obtained for Pressurised Water Reactors concerning upper plenum calculations and a steady state flow in the core with rod bow effect evaluation are presented. In practice these new numerical methods have proved to be stable on non-staggered grids and capable of generating accurate non-oscillating solutions for two-phase flow calculations.

  17. Comparison of methods for accurate end-point detection of potentiometric titrations

    NASA Astrophysics Data System (ADS)

    Villela, R. L. A.; Borges, P. P.; Vyskočil, L.

    2015-01-01

    Detection of the end point in potentiometric titrations has wide application in experiments that demand very low measurement uncertainties, mainly for certifying reference materials. Simulations of experimental coulometric titration data and consequent error analysis of the end-point values were conducted using a programming code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second-derivative technique currently used for end-point detection in potentiometric titrations. The performance of the methods will be compared and presented in this paper.
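
    To illustrate the two approaches being compared, the sketch below locates the end point of a simulated potentiometric titration curve both with the traditional second-derivative criterion and with a Levenberg-Marquardt sigmoid fit (SciPy's method='lm'); the sigmoid model and the simulated data are assumptions chosen only for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        def sigmoid(v, e0, de, veq, s):
            # Idealized titration curve: potential vs. titrant volume around the end point veq.
            return e0 + de / (1.0 + np.exp(-(v - veq) / s))

        v = np.linspace(0.0, 20.0, 81)
        emf = sigmoid(v, 200.0, 300.0, 10.37, 0.4) + np.random.normal(0.0, 1.0, v.size)

        # (a) Traditional criterion: the end point sits where the second derivative changes
        #     sign, i.e. at the maximum of the first derivative of the curve.
        veq_derivative = v[np.argmax(np.gradient(emf, v))]

        # (b) Levenberg-Marquardt fit of the full sigmoid.
        p0 = [emf.min(), np.ptp(emf), v.mean(), 1.0]
        popt, _ = curve_fit(sigmoid, v, emf, p0=p0, method="lm")
        veq_fit = popt[2]

        print(f"second derivative: {veq_derivative:.2f} mL, LM fit: {veq_fit:.2f} mL")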

  18. A method to accurately estimate the muscular torques of human wearing exoskeletons by torque sensors.

    PubMed

    Hwang, Beomsoo; Jeon, Doyoung

    2015-01-01

    In exoskeletal robots, the quantification of the user's muscular effort is important to recognize the user's motion intentions and evaluate motor abilities. In this paper, we attempt to estimate users' muscular efforts accurately using joint torque sensors, whose measurements contain the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is therefore important to extract the dynamic effects of the user's limb accurately from the measured torque. The user's limb dynamics are formulated, and a convenient method of identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated with 10 healthy participants during body weight-supported gait training. The experimental results show that the torque sensors are able to estimate the muscular torque accurately under both relaxed and activated muscle conditions. PMID:25860074
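
    The central idea, subtracting the modeled passive limb dynamics from the torque-sensor reading, can be sketched for a single joint as below; the one-degree-of-freedom model and the segment parameters are illustrative assumptions, not the EXOwheel identification procedure.

        import numpy as np

        def muscular_torque(tau_measured, q, ddq, inertia, mass, com, g=9.81):
            """Active muscular torque at one joint, estimated by removing the limb's
            passive dynamics from the sensor reading (1-DOF sketch):
                tau_muscle = tau_measured - (I * ddq + m * g * lc * sin(q))
            Coriolis terms vanish for a single joint; they must be added for multi-DOF limbs."""
            tau_passive = inertia * ddq + mass * g * com * np.sin(q)
            return tau_measured - tau_passive

        # Illustrative numbers for a shank-foot segment (assumed, not identified from subjects).
        tau_sensor = 18.0              # N*m read by the joint torque sensor
        q, ddq = 0.3, 4.0              # joint angle (rad) and angular acceleration (rad/s^2)
        print(muscular_torque(tau_sensor, q, ddq, inertia=0.35, mass=4.5, com=0.25))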

  19. A Novel Method for Accurate Operon Predictions in All Sequenced Prokaryotes

    SciTech Connect

    Price, Morgan N.; Huang, Katherine H.; Alm, Eric J.; Arkin, Adam P.

    2004-12-01

    We combine comparative genomic measures and the distance separating adjacent genes to predict operons in 124 completely sequenced prokaryotic genomes. Our method automatically tailors itself to each genome using sequence information alone, and thus can be applied to any prokaryote. For Escherichia coli K12 and Bacillus subtilis, our method is 85 and 83% accurate, respectively, which is similar to the accuracy of methods that use the same features but are trained on experimentally characterized transcripts. In Halobacterium NRC-1 and in Helicobacter pylori, our method correctly infers that genes in operons are separated by shorter distances than they are in E. coli, and its predictions using distance alone are more accurate than distance-only predictions trained on a database of E. coli transcripts. We use microarray data from six phylogenetically diverse prokaryotes to show that combining intergenic distance with comparative genomic measures further improves accuracy and that our method is broadly effective. Finally, we survey operon structure across 124 genomes, and find several surprises: H. pylori has many operons, contrary to previous reports; Bacillus anthracis has an unusual number of pseudogenes within conserved operons; and Synechocystis PCC6803 has many operons even though it has unusually wide spacings between conserved adjacent genes.

  20. Accurate Time/Frequency Transfer Method Using Bi-Directional WDM Transmission

    NASA Technical Reports Server (NTRS)

    Imaoka, Atsushi; Kihara, Masami

    1996-01-01

    An accurate time transfer method is proposed using bi-directional wavelength division multiplexing (WDM) signal transmission along a single optical fiber. This method will be used in digital telecommunication networks and will yield a time synchronization accuracy of better than 1 ns for long transmission lines over several tens of kilometers. The method can accurately measure the difference in delay between the two wavelength signals, caused by the chromatic dispersion of the fiber, that arises in conventional simple bi-directional dual-wavelength frequency transfer methods. We describe the characteristics of this difference in delay and then show that delay measurement accuracy below 0.1 ns can be obtained by transmitting 156 Mb/s time reference signals at 1.31 micrometers and 1.55 micrometers along a 50 km fiber using the proposed method. The sub-nanosecond delay measurement using the simple bi-directional dual-wavelength transmission along a 100 km fiber with a wavelength spacing of 1 nm in the 1.55 micrometer range is also shown.

  1. Parallel processing numerical method for confined vortex dynamics and applications

    NASA Astrophysics Data System (ADS)

    Bistrian, Diana Alina

    2013-10-01

    This paper explores a combined analytical and numerical technique to investigate the hydrodynamic instability of confined swirling flows, with application to vortex rope dynamics in a Francis turbine diffuser under sophisticated boundary constraints. We present a new approach based on the method of orthogonal decomposition in the Hilbert space, implemented with a spectral descriptor scheme in discrete space. A parallel implementation of the numerical scheme is conducted, reducing the computational time compared to other techniques.

  2. Collocation Method for Numerical Solution of Coupled Nonlinear Schroedinger Equation

    SciTech Connect

    Ismail, M. S.

    2010-09-30

    The coupled nonlinear Schroedinger equation models several interesting physical phenomena and serves as a model equation for optical fibers with linear birefringence. In this paper we use a collocation method to solve this equation, and we test the method for stability and accuracy. Numerical tests using a single soliton and the interaction of three solitons are used to test the resulting scheme.

  3. Investigating Convergence Patterns for Numerical Methods Using Data Analysis

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2013-01-01

    The article investigates the patterns that arise in the convergence of numerical methods, particularly the patterns in the errors of successive iterations, using data analysis and curve fitting methods. In particular, the results obtained are used to convey a deeper level of understanding of the concepts of linear, quadratic, and cubic…
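
    A simple way to expose such patterns is to estimate the order of convergence from three consecutive errors, p ≈ ln(e_{n+1}/e_n) / ln(e_n/e_{n-1}); the short sketch below applies this to Newton's method, for which the estimates should sit near 2 (quadratic convergence). The example problem is an assumption chosen for illustration.

        import numpy as np

        def convergence_order(errors):
            # p ~ ln(e_{n+1}/e_n) / ln(e_n/e_{n-1}) from consecutive iteration errors.
            e = np.asarray(errors, dtype=float)
            return np.log(e[2:] / e[1:-1]) / np.log(e[1:-1] / e[:-2])

        # Newton iterations for x^2 - 2 = 0 starting from x = 2 (illustrative data).
        root, x, errs = np.sqrt(2.0), 2.0, []
        for _ in range(4):
            x = x - (x * x - 2.0) / (2.0 * x)
            errs.append(abs(x - root))
        print(convergence_order(errs))     # values close to 2 indicate quadratic convergence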

  4. A numerical method for solving singular DEs

    SciTech Connect

    Mahaver, W.T.

    1996-12-31

    A numerical method is developed for solving singular differential equations using steepest descent based on weighted Sobolev gradients. The method is demonstrated on a variety of first and second order problems, including linear constrained, unconstrained, and partially constrained first order problems, a nonlinear first order problem with irregular singularity, and two second order variational problems.

  5. Accurate Wind Characterization in Complex Terrain Using the Immersed Boundary Method

    SciTech Connect

    Lundquist, K A; Chow, F K; Lundquist, J K; Kosovic, B

    2009-09-30

    This paper describes an immersed boundary method (IBM) that facilitates the explicit resolution of complex terrain within the Weather Research and Forecasting (WRF) model. Two different interpolation methods, trilinear and inverse distance weighting, are used at the core of the IBM algorithm. Functional aspects of the algorithm's implementation and the accuracy of results are considered. Simulations of flow over a three-dimensional hill with shallow terrain slopes are performed with WRF's native terrain-following coordinate and with both IB methods. Comparisons of flow fields from the three simulations show excellent agreement, indicating that both IB methods produce accurate results. However, when ease of implementation is considered, inverse distance weighting is superior. Furthermore, inverse distance weighting is shown to be more adept at handling highly complex urban terrain, where the trilinear interpolation algorithm breaks down. This capability is demonstrated by using the inverse distance weighting core of the IBM to model atmospheric flow in downtown Oklahoma City.
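
    One of the two interpolation kernels named above, inverse distance weighting, is easy to sketch in isolation; the fragment below is a generic IDW interpolator, with the power parameter and sample data as assumptions rather than values from the WRF implementation.

        import numpy as np

        def idw_interpolate(points, values, query, power=2.0, eps=1e-12):
            """Inverse-distance-weighted estimate at 'query' from scattered samples.
            points: (n, d) neighbour coordinates, values: (n,), query: (d,)."""
            d = np.linalg.norm(points - query, axis=1)
            if np.any(d < eps):                    # query coincides with a sample point
                return float(values[np.argmin(d)])
            w = 1.0 / d**power
            return float(np.sum(w * values) / np.sum(w))

        # Illustrative use: wind speed sampled at four grid nodes near an immersed-boundary point.
        nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        u = np.array([2.0, 2.5, 3.0, 3.5])
        print(idw_interpolate(nodes, u, np.array([0.3, 0.4])))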

  6. Accurate force fields and methods for modelling organic molecular crystals at finite temperatures.

    PubMed

    Nyman, Jonas; Pundyke, Orla Sheehan; Day, Graeme M

    2016-06-21

    We present an assessment of the performance of several force fields for modelling intermolecular interactions in organic molecular crystals using the X23 benchmark set. The performance of the force fields is compared to several popular dispersion corrected density functional methods. In addition, we present our implementation of lattice vibrational free energy calculations in the quasi-harmonic approximation, using several methods to account for phonon dispersion. This allows us to also benchmark the force fields' reproduction of finite temperature crystal structures. The results demonstrate that anisotropic atom-atom multipole-based force fields can be as accurate as several popular DFT-D methods, but have errors 2-3 times larger than the current best DFT-D methods. The largest error in the examined force fields is a systematic underestimation of the (absolute) lattice energy. PMID:27230942

  7. Numerical solution of optimal control problems using multiple-interval integral Gegenbauer pseudospectral methods

    NASA Astrophysics Data System (ADS)

    Tang, Xiaojun

    2016-04-01

    The main purpose of this work is to provide multiple-interval integral Gegenbauer pseudospectral methods for solving optimal control problems. The latest developed single-interval integral Gauss/(flipped Radau) pseudospectral methods can be viewed as special cases of the proposed methods. We present an exact and efficient approach to compute the mesh pseudospectral integration matrices for the Gegenbauer-Gauss and flipped Gegenbauer-Gauss-Radau points. Numerical results on benchmark optimal control problems confirm the ability of the proposed methods to obtain highly accurate solutions.

  8. Compression-based distance (CBD): a simple, rapid, and accurate method for microbiota composition comparison

    PubMed Central

    2013-01-01

    Background Perturbations in intestinal microbiota composition have been associated with a variety of gastrointestinal tract-related diseases. The alleviation of symptoms has been achieved using treatments that alter the gastrointestinal tract microbiota toward that of healthy individuals. Identifying differences in microbiota composition through the use of 16S rRNA gene hypervariable tag sequencing has profound health implications. Current computational methods for comparing microbial communities are usually based on multiple alignments and phylogenetic inference, making them time consuming and requiring exceptional expertise and computational resources. As sequencing data rapidly grows in size, simpler analysis methods are needed to meet the growing computational burdens of microbiota comparisons. Thus, we have developed a simple, rapid, and accurate method, independent of multiple alignments and phylogenetic inference, to support microbiota comparisons. Results We create a metric, called compression-based distance (CBD) for quantifying the degree of similarity between microbial communities. CBD uses the repetitive nature of hypervariable tag datasets and well-established compression algorithms to approximate the total information shared between two datasets. Three published microbiota datasets were used as test cases for CBD as an applicable tool. Our study revealed that CBD recaptured 100% of the statistically significant conclusions reported in the previous studies, while achieving a decrease in computational time required when compared to similar tools without expert user intervention. Conclusion CBD provides a simple, rapid, and accurate method for assessing distances between gastrointestinal tract microbiota 16S hypervariable tag datasets. PMID:23617892
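
    The paper defines its own CBD metric, so the sketch below instead uses the closely related and widely known normalized compression distance (NCD) with gzip, purely to illustrate the idea of approximating shared information through compressed lengths; the formula and the toy sequences are assumptions, not the authors' exact definition.

        import gzip

        def clen(data: bytes) -> int:
            # Compressed length as a practical stand-in for information content.
            return len(gzip.compress(data, compresslevel=9))

        def compression_distance(x: bytes, y: bytes) -> float:
            # Normalized compression distance, a close relative of the CBD metric:
            #   NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
            cx, cy, cxy = clen(x), clen(y), clen(x + y)
            return (cxy - min(cx, cy)) / max(cx, cy)

        # Toy 16S tag "datasets" (sequences are made up for illustration).
        sample_a = b"ACGTACGTTTGACCGT" * 200
        sample_b = b"ACGTACGTTTGACCGA" * 200
        sample_c = b"GGGTTTCCAAAGTCCA" * 200
        print(compression_distance(sample_a, sample_b))   # similar communities -> smaller value
        print(compression_distance(sample_a, sample_c))   # dissimilar communities -> larger value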

  9. Accurate prediction of protein–protein interactions from sequence alignments using a Bayesian method

    PubMed Central

    Burger, Lukas; van Nimwegen, Erik

    2008-01-01

    Accurate and large-scale prediction of protein–protein interactions directly from amino-acid sequences is one of the great challenges in computational biology. Here we present a new Bayesian network method that predicts interaction partners using only multiple alignments of amino-acid sequences of interacting protein domains, without tunable parameters, and without the need for any training examples. We first apply the method to bacterial two-component systems and comprehensively reconstruct two-component signaling networks across all sequenced bacteria. Comparisons of our predictions with known interactions show that our method infers interaction partners genome-wide with high accuracy. To demonstrate the general applicability of our method we show that it also accurately predicts interaction partners in a recent dataset of polyketide synthases. Analysis of the predicted genome-wide two-component signaling networks shows that cognates (interacting kinase/regulator pairs, which lie adjacent on the genome) and orphans (which lie isolated) form two relatively independent components of the signaling network in each genome. In addition, while most genes are predicted to have only a small number of interaction partners, we find that 10% of orphans form a separate class of ‘hub' nodes that distribute and integrate signals to and from up to tens of different interaction partners. PMID:18277381

  10. A new numerical method of total solar eclipse photography processing

    NASA Astrophysics Data System (ADS)

    Druckmüller, M.; Rušin, V.; Minarovjech, M.

    2006-10-01

    A new numerical method of image processing suitable for visualization of corona images taken during total solar eclipses is presented. This method allows us to study both small- and large-scale coronal structures that remain invisible on original images because of their very high dynamic range of the coronal brightness. The method is based on the use of adaptive filters inspired by human vision and the sensitivity of resulting images is thus very close to that of the human eye during an eclipse. A high precision alignment method for white-light corona images is also discussed. The proposed method highly improves a widely used unsharp masking method employing a radially blurred mask. The results of these numerical image processing techniques are illustrated by a series of images taken during eclipses of the last decade. The method minimizes the risk of processing artifacts.
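
    For orientation, the widely used technique the paper improves upon, unsharp masking with a radially blurred mask, can be sketched by subtracting the azimuthally averaged brightness profile from the image; the synthetic corona, the binning, and the mask strength below are assumptions, and the paper's adaptive, vision-inspired filters are considerably more sophisticated than this.

        import numpy as np

        def radial_unsharp_mask(img, cx, cy, strength=0.9, nbins=400):
            """Subtract a radially blurred (azimuthally averaged) mask from a corona image,
            suppressing the steep radial brightness gradient and revealing fine structure."""
            y, x = np.indices(img.shape)
            r = np.hypot(x - cx, y - cy)
            bins = np.linspace(0.0, r.max(), nbins + 1)
            idx = np.digitize(r.ravel(), bins) - 1
            sums = np.bincount(idx, weights=img.ravel(), minlength=nbins + 1)
            counts = np.bincount(idx, minlength=nbins + 1)
            profile = sums / np.maximum(counts, 1)
            mask = profile[idx].reshape(img.shape)         # the radially blurred mask
            return img - strength * mask

        # Synthetic corona: brightness falling off with radius plus faint streamer structure.
        yy, xx = np.mgrid[0:512, 0:512]
        rr = np.hypot(xx - 256.0, yy - 256.0) + 1.0
        corona = 1e4 / rr**2 * (1.0 + 0.05 * np.cos(6 * np.arctan2(yy - 256.0, xx - 256.0)))
        enhanced = radial_unsharp_mask(corona, 256.0, 256.0)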

  11. 25 Years of Self-organized Criticality: Numerical Detection Methods

    NASA Astrophysics Data System (ADS)

    McAteer, R. T. James; Aschwanden, Markus J.; Dimitropoulou, Michaila; Georgoulis, Manolis K.; Pruessner, Gunnar; Morales, Laura; Ireland, Jack; Abramenko, Valentyna

    2016-01-01

    The detection and characterization of self-organized criticality (SOC), in both real and simulated data, has undergone many significant revisions over the past 25 years. The explosive advances in the many numerical methods available for detecting, discriminating, and ultimately testing, SOC have played a critical role in developing our understanding of how systems experience and exhibit SOC. In this article, methods of detecting SOC are reviewed; from correlations to complexity to critical quantities. A description of the basic autocorrelation method leads into a detailed analysis of application-oriented methods developed in the last 25 years. In the second half of this manuscript space-based, time-based and spatial-temporal methods are reviewed and the prevalence of power laws in nature is described, with an emphasis on event detection and characterization. The search for numerical methods to clearly and unambiguously detect SOC in data often leads us outside the comfort zone of our own disciplines—the answers to these questions are often obtained by studying the advances made in other fields of study. In addition, numerical detection methods often provide the optimum link between simulations and experiments in scientific research. We seek to explore this boundary where the rubber meets the road, to review this expanding field of research of numerical detection of SOC systems over the past 25 years, and to iterate forwards so as to provide some foresight and guidance into developing breakthroughs in this subject over the next quarter of a century.

  12. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.

    PubMed

    Saccà, Alessandro

    2016-01-01

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667

  13. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms

    PubMed Central

    2016-01-01

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes’ principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of ‘unellipticity’ introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667

  14. Direct Coupling Method for Time-Accurate Solution of Incompressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Soh, Woo Y.

    1992-01-01

    A noniterative finite difference numerical method is presented for the solution of the incompressible Navier-Stokes equations with second order accuracy in time and space. Explicit treatment of convection and diffusion terms and implicit treatment of the pressure gradient give a single pressure Poisson equation when the discretized momentum and continuity equations are combined. A pressure boundary condition is not needed on solid boundaries in the staggered mesh system. The solution of the pressure Poisson equation is obtained directly by Gaussian elimination. This method is tested on flow problems in a driven cavity and a curved duct.

  15. Numerical Methods in Quantum Mechanics: Analysis of Numerical Schemes on One-Dimensional Schrodinger Wave Problems

    NASA Astrophysics Data System (ADS)

    Jones, Marvin Quenten, Jr.

    The motion and behavior of quantum processes can be described by the Schrodinger equation using the wave function, Psi(x,t). The use of the Schrodinger equation to study quantum phenomena is known as Quantum Mechanics, akin to classical mechanics being the tool to study classical physics. This research focuses on numerical techniques: the finite-difference and Fast Fourier Transform (spectral) methods, finite difference schemes such as the Leapfrog method and the Crank-Nicolson scheme, and second quantization, used to solve and analyze the Schrodinger equation for the infinite square well problem, the free particle with periodic boundary conditions, the barrier problem, tight-binding Hamiltonians, and a potential wall problem. We discuss these techniques and the test problems created to examine how the different techniques lead to physical and numerical conclusions, presented in a tabular summary. We observed both numerical stability and quantum stability (conservation of energy, probability, momentum, etc.). We found in our results that the Crank-Nicolson scheme is an unconditionally stable scheme that conserves probability (unitary) and momentum, though it is dissipative with energy. The time-independent problems conserved energy and momentum and were unitary, which is of interest, but we found that when time-dependence was introduced, quantum stability (i.e., conservation of mass, momentum, etc.) was not implied by numerical stability. Hence, we observed schemes that were numerically stable but not quantum stable, as well as schemes that were quantum stable but not numerically stable for all of time, t. We also observed that second quantization removed the issues with stability as the problem was transformed into a discrete problem. Moreover, all quantum information is conserved in second quantization. This method, however, does not work universally for all problems.
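
    As an illustration of the Crank-Nicolson scheme discussed above, the following sketch propagates a superposition state in an infinite square well with hbar = m = 1 and checks that probability is conserved; the grid, time step, and initial state are assumptions, not the thesis' test cases.

        import numpy as np

        # Crank-Nicolson for i dpsi/dt = -(1/2) d2psi/dx2 on [0, L] with psi = 0 at the walls.
        L, N, dt, steps = 1.0, 200, 1e-4, 200
        x = np.linspace(0.0, L, N + 2)[1:-1]               # interior grid points
        dx = x[1] - x[0]

        lap = (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1)
               + np.diag(np.ones(N - 1), -1)) / dx**2
        H = -0.5 * lap                                     # free-particle Hamiltonian in the well

        A = np.eye(N) + 0.5j * dt * H                      # (I + i dt H / 2) psi^{n+1}
        B = np.eye(N) - 0.5j * dt * H                      #    = (I - i dt H / 2) psi^n

        psi = np.sin(np.pi * x / L) + 0.5 * np.sin(2.0 * np.pi * x / L)   # superposition state
        psi = psi.astype(complex)
        psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

        for _ in range(steps):
            psi = np.linalg.solve(A, B @ psi)

        print(np.sum(np.abs(psi)**2) * dx)                 # stays ~1: the scheme is unitary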

  16. A simple and accurate resist parameter extraction method for sub-80-nm DRAM patterns

    NASA Astrophysics Data System (ADS)

    Lee, Sook; Hwang, Chan; Park, Dong-Woon; Kim, In-Sung; Kim, Ho-Chul; Woo, Sang-Gyun; Cho, Han-Ku; Moon, Joo-Tae

    2004-05-01

    Due to the polarization effect of high-NA lithography, the consideration of resist effects in lithography simulation becomes increasingly important. In spite of the importance of resist simulation, many process engineers are reluctant to consider resist effects in lithography simulation due to the time-consuming procedure required to extract the resist parameters and the uncertainty in measuring some of them. Weiss suggested a simplified development model that does not require the complex kinetic parameters. For device fabrication engineers, there is a simple and accurate parameter extraction and optimization method using the Weiss model. This method needs the refractive index, Dill's parameters, and development rate monitoring (DRM) data for parameter extraction. The parameters extracted using the referred sequence are not accurate, so we have to optimize them to fit the critical dimension scanning electron microscopy (CD SEM) data of line and space patterns. Hence, the FiRM of Sigma-C is utilized as a resist parameter-optimizing program. According to our study, the illumination shape, the aberration, and the pupil mesh points have a large effect on the accuracy of the resist parameters in optimization. To obtain the optimum parameters, we need to find the saturated mesh points in terms of normalized intensity log slope (NILS) prior to optimization. The simulation results using the parameters optimized by this method show good agreement with experiments for iso-dense bias, focus-exposure matrix data, and sub-80-nm device pattern simulation.

  17. Induced Dual-Nanospray: A Novel Internal Calibration Method for Convenient and Accurate Mass Measurement

    NASA Astrophysics Data System (ADS)

    Li, Yafeng; Zhang, Ning; Zhou, Yueming; Wang, Jianing; Zhang, Yiming; Wang, Jiyun; Xiong, Caiqiao; Chen, Suming; Nie, Zongxiu

    2013-09-01

    Accurate mass information is of great importance in the determination of unknown compounds. An effective and easy-to-control internal mass calibration method will dramatically benefit accurate mass measurement. Here we reported a simple induced dual-nanospray internal calibration device which has the following three advantages: (1) the two sprayers are in the same alternating current field; thus both reference ions and sample ions can be simultaneously generated and recorded. (2) It is very simple and can be easily assembled. Just two metal tubes, two nanosprayers, and an alternating current power supply are included. (3) With the low-flow-rate character and the versatility of nanoESI, this calibration method is capable of calibrating various samples, even untreated complex samples such as urine and other biological samples with small sample volumes. The calibration errors are around 1 ppm in positive ion mode and 3 ppm in negative ion mode with good repeatability. This new internal calibration method opens up new possibilities in the determination of unknown compounds, and it has great potential for the broad applications in biological and chemical analysis.

  18. A fast GNU method to draw accurate scientific illustrations for taxonomy.

    PubMed

    Montesanto, Giuseppe

    2015-01-01

    Nowadays only digital figures are accepted by the most important journals of taxonomy. These may be produced by scanning conventional drawings made with high-precision technical ink-pens, which normally use capillary cartridges and various line widths. Digital drawing techniques that use vector graphics have already been described in the literature to support scientists in drawing figures and plates for scientific illustrations; these techniques use many different software and hardware devices. The present work gives step-by-step instructions on how to make accurate line drawings with a new procedure that uses bitmap graphics with the GNU Image Manipulation Program (GIMP). This method is noteworthy: it is very accurate, producing detailed lines at the highest resolution; the raster lines appear as realistic ink-made drawings; it is faster than the traditional way of making illustrations; everyone can use this simple technique; and the method is completely free, as it does not rely on expensive licensed software and can be used with different operating systems. The method has been developed by drawing figures of terrestrial isopods, and some examples are given here. PMID:26261449

  19. A fast GNU method to draw accurate scientific illustrations for taxonomy

    PubMed Central

    Montesanto, Giuseppe

    2015-01-01

    Abstract Nowadays only digital figures are accepted by the most important journals of taxonomy. These may be produced by scanning conventional drawings made with high-precision technical ink-pens, which normally use capillary cartridges and various line widths. Digital drawing techniques that use vector graphics have already been described in the literature to support scientists in drawing figures and plates for scientific illustrations; these techniques use many different software and hardware devices. The present work gives step-by-step instructions on how to make accurate line drawings with a new procedure that uses bitmap graphics with the GNU Image Manipulation Program (GIMP). This method is noteworthy: it is very accurate, producing detailed lines at the highest resolution; the raster lines appear as realistic ink-made drawings; it is faster than the traditional way of making illustrations; everyone can use this simple technique; and the method is completely free, as it does not rely on expensive licensed software and can be used with different operating systems. The method has been developed by drawing figures of terrestrial isopods, and some examples are given here. PMID:26261449

  20. A new cation-exchange method for accurate field speciation of hexavalent chromium

    USGS Publications Warehouse

    Ball, J.W.; McCleskey, R.B.

    2003-01-01

    A new method for field speciation of Cr(VI) has been developed to meet present stringent regulatory standards and to overcome the limitations of existing methods. The method consists of passing a water sample through strong acid cation-exchange resin at the field site, where Cr(III) is retained while Cr(VI) passes into the effluent and is preserved for later determination. The method is simple, rapid, portable, and accurate, and makes use of readily available, inexpensive materials. Cr(VI) concentrations are determined later in the laboratory using any elemental analysis instrument sufficiently sensitive to measure the Cr(VI) concentrations of interest. The new method allows measurement of Cr(VI) concentrations as low as 0.05 µg l-1, storage of samples for at least several weeks prior to analysis, and use of readily available analytical instrumentation. Cr(VI) can be separated from Cr(III) between pH 2 and 11 at Cr(III)/Cr(VI) concentration ratios as high as 1000. The new method has demonstrated excellent comparability with two commonly used methods, the Hach Company direct colorimetric method and USEPA method 218.6. The new method is superior to the Hach direct colorimetric method owing to its relative sensitivity and simplicity. The new method is superior to USEPA method 218.6 in the presence of Fe(II) concentrations up to 1 mg l-1 and Fe(III) concentrations up to 10 mg l-1. Time stability of preserved samples is a significant advantage over the 24-h time constraint specified for USEPA method 218.6.

  1. Nebulizer calibration using lithium chloride: an accurate, reproducible and user-friendly method.

    PubMed

    Ward, R J; Reid, D W; Leonard, R F; Johns, D P; Walters, E H

    1998-04-01

    Conventional gravimetric (weight loss) calibration of jet nebulizers overestimates their aerosol output by up to 80% due to unaccounted evaporative loss. We examined two methods of measuring true aerosol output from jet nebulizers. A new adaptation of a widely available clinical assay for lithium (determined by flame photometry, LiCl method) was compared to an existing electrochemical method based on fluoride detection (NaF method). The agreement between the two methods and the repeatability of each method were examined. Ten Mefar jet nebulizers were studied using a Mefar MK3 inhalation dosimeter. There was no significant difference between the two methods (p=0.76) with mean aerosol output of the 10 nebulizers being 7.40 mg x s(-1) (SD 1.06; range 5.86-9.36 mg x s(-1)) for the NaF method and 7.27 mg x s(-1) (SD 0.82; range 5.52-8.26 mg x s(-1)) for the LiCl method. The LiCl method had a coefficient of repeatability of 13 mg x s(-1) compared with 3.7 mg x s(-1) for the NaF method. The LiCl method accurately measured true aerosol output and was considerably easier to use. It was also more repeatable, and hence more precise, than the NaF method. Because the LiCl method uses an assay that is routinely available from hospital biochemistry laboratories, it is easy to use and, thus, can readily be adopted by busy respiratory function departments. PMID:9623700

  2. An improved method to accurately calibrate the gantry angle indicators of the radiotherapy linear accelerators

    NASA Astrophysics Data System (ADS)

    Chang, Liyun; Ho, Sheng-Yow; Du, Yi-Chun; Lin, Chih-Ming; Chen, Tainsong

    2007-06-01

    The calibration of the gantry angle indicator is an important and basic quality assurance (QA) item for the radiotherapy linear accelerator. In this study, we propose a new and practical method, which uses only the digital level, V-film, and general solid phantoms. By taking the star shot only, we can accurately calculate the true gantry angle according to the geometry of the film setup. The results on our machine showed that the gantry angle was shifted by -0.11° compared with the digital indicator, and the standard deviation was within 0.05°. This method can also be used for the simulator. In conclusion, this proposed method could be adopted as an annual QA item for mechanical QA of the accelerator.

  3. Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method.

    PubMed

    Zhao, Yan; Cao, Liangcai; Zhang, Hao; Kong, Dezhao; Jin, Guofan

    2015-10-01

    Fast calculation and correct depth cue are crucial issues in the calculation of computer-generated hologram (CGH) for high quality three-dimensional (3-D) display. An angular-spectrum based algorithm for layer-oriented CGH is proposed. Angular spectra from each layer are synthesized as a layer-corresponded sub-hologram based on the fast Fourier transform without paraxial approximation. The proposed method can avoid the huge computational cost of the point-oriented method and yield accurate predictions of the whole diffracted field compared with other layer-oriented methods. CGHs of versatile formats of 3-D digital scenes, including computed tomography and 3-D digital models, are demonstrated with precise depth performance and advanced image quality. PMID:26480062

  4. Quick and accurate estimation of the elastic constants using the minimum image method

    NASA Astrophysics Data System (ADS)

    Tretiakov, Konstantin V.; Wojciechowski, Krzysztof W.

    2015-04-01

    A method for determining the elastic properties using the minimum image method (MIM) is proposed and tested on a model system of particles interacting through the Lennard-Jones (LJ) potential. The elastic constants of the LJ system are determined in the thermodynamic limit, N → ∞, using the Monte Carlo (MC) method in the NVT and NPT ensembles. The simulation results show that when determining the elastic constants, the contribution of long-range interactions cannot be ignored, because doing so would lead to erroneous results. In addition, the simulations have revealed that including the interactions of each particle with all of its minimum image neighbors, even for small systems, leads to results that are very close to the values of the elastic constants in the thermodynamic limit. This enables a quick and accurate estimation of the elastic constants using very small samples.
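
    The minimum image convention at the heart of the MIM can be sketched as follows for a cubic periodic box; the particle count, box size, and LJ parameters are illustrative assumptions, and no elastic-constant fluctuation formulas are attempted here.

        import numpy as np

        def minimum_image(rij, box):
            # Wrap separation vectors onto the nearest periodic image (cubic box of side 'box').
            return rij - box * np.round(rij / box)

        def lj_energy(positions, box, epsilon=1.0, sigma=1.0):
            """Total Lennard-Jones energy with every particle interacting with the single
            nearest (minimum) image of each other particle."""
            n, energy = len(positions), 0.0
            for i in range(n - 1):
                rij = minimum_image(positions[i + 1:] - positions[i], box)
                r2 = np.sum(rij * rij, axis=1)
                sr6 = (sigma**2 / r2) ** 3
                energy += np.sum(4.0 * epsilon * (sr6**2 - sr6))
            return energy

        # Illustrative configuration: 32 particles in a cubic box of side 6 sigma.
        rng = np.random.default_rng(0)
        box = 6.0
        pos = rng.uniform(0.0, box, size=(32, 3))
        print(lj_energy(pos, box))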

  5. Accurate prediction of lattice energies and structures of molecular crystals with molecular quantum chemistry methods.

    PubMed

    Fang, Tao; Li, Wei; Gu, Fangwei; Li, Shuhua

    2015-01-13

    We extend the generalized energy-based fragmentation (GEBF) approach to molecular crystals under periodic boundary conditions (PBC), and we demonstrate the performance of the method for a variety of molecular crystals. With this approach, the lattice energy of a molecular crystal can be obtained from the energies of a series of embedded subsystems, which can be computed with existing advanced molecular quantum chemistry methods. The use of the field compensation method allows the method to take long-range electrostatic interaction of the infinite crystal environment into account and make the method almost translationally invariant. The computational cost of the present method scales linearly with the number of molecules in the unit cell. Illustrative applications demonstrate that the PBC-GEBF method with explicitly correlated quantum chemistry methods is capable of providing accurate descriptions on the lattice energies and structures for various types of molecular crystals. In addition, this approach can be employed to quantify the contributions of various intermolecular interactions to the theoretical lattice energy. Such qualitative understanding is very useful for rational design of molecular crystals. PMID:26574207

  6. A numerical investigation of the finite element method in compressible primitive variable Navier-Stokes flow

    NASA Technical Reports Server (NTRS)

    Cook, C. H.

    1977-01-01

    The results of a comprehensive numerical investigation of the basic capabilities of the finite element method (FEM) for the numerical solution of compressible flow problems governed by the two-dimensional and axisymmetric Navier-Stokes equations in primitive variables are presented. The strong and weak points of the method as a tool for computational fluid dynamics are considered. The relation of the linear-element finite element method to finite difference methods (FDM) is explored. The calculation of free shear layers and separated flows over aircraft boattail afterbodies with plume simulators indicates that the strongest assets of the method are its capabilities for reliable and accurate calculation employing variable grids which readily approximate complex geometry and capably adapt to the presence of diverse regions of large solution gradients without the necessity of domain transformation.

  7. Accurate D-bar Reconstructions of Conductivity Images Based on a Method of Moment with Sinc Basis.

    PubMed

    Abbasi, Mahdi

    2014-01-01

    Planar D-bar integral equation is one of the inverse scattering solution methods for complex problems including the inverse conductivity problem considered in applications such as electrical impedance tomography (EIT). Recently two different methodologies have been considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves a high computational burden and the other suffers from a low convergence rate (CR). In this paper, a novel high-speed moment method based on the sinc basis is introduced to solve the two-dimensional D-bar integral equation. In this method, all functions within the D-bar integral equation are first expanded using the sinc basis functions. Then, the orthogonal properties of their products dissolve the integral operator of the D-bar equation and result in a discrete convolution equation. That is, the new moment method leads to the equation solution without direct computation of the D-bar integral. The resulting discrete convolution equation may be adapted to a suitable structure to be solved using the fast Fourier transform. This allows us to reduce the order of computational complexity to as low as O(N^2 log N). Simulation results on solving D-bar equations arising in the EIT problem show that the proposed method is accurate with an ultra-linear CR. PMID:24696808
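
    The computational saving quoted above comes from evaluating the discrete convolution with the FFT; a minimal sketch of that core operation (not of the full D-bar solver) is given below, verified against direct circular-convolution summation on a tiny grid.

        import numpy as np

        def fft_convolve2d(field, kernel):
            # 2D circular convolution via the FFT: O(N^2 log N) on an N x N grid,
            # versus O(N^4) for direct summation. Both arrays must have the same shape.
            return np.fft.ifft2(np.fft.fft2(field) * np.fft.fft2(kernel))

        # Check against direct circular convolution on an 8 x 8 grid (illustrative data).
        rng = np.random.default_rng(1)
        f = rng.standard_normal((8, 8))
        k = rng.standard_normal((8, 8))
        direct = np.zeros((8, 8), dtype=complex)
        for i in range(8):
            for j in range(8):
                for p in range(8):
                    for q in range(8):
                        direct[i, j] += f[p, q] * k[(i - p) % 8, (j - q) % 8]
        print(np.allclose(fft_convolve2d(f, k), direct))   # True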

  8. Accurate D-bar Reconstructions of Conductivity Images Based on a Method of Moment with Sinc Basis

    PubMed Central

    Abbasi, Mahdi

    2014-01-01

    Planar D-bar integral equation is one of the inverse scattering solution methods for complex problems including the inverse conductivity problem considered in applications such as electrical impedance tomography (EIT). Recently two different methodologies have been considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves a high computational burden and the other suffers from a low convergence rate (CR). In this paper, a novel high-speed moment method based on the sinc basis is introduced to solve the two-dimensional D-bar integral equation. In this method, all functions within the D-bar integral equation are first expanded using the sinc basis functions. Then, the orthogonal properties of their products dissolve the integral operator of the D-bar equation and result in a discrete convolution equation. That is, the new moment method leads to the equation solution without direct computation of the D-bar integral. The resulting discrete convolution equation may be adapted to a suitable structure to be solved using the fast Fourier transform. This allows us to reduce the order of computational complexity to as low as O(N^2 log N). Simulation results on solving D-bar equations arising in the EIT problem show that the proposed method is accurate with an ultra-linear CR. PMID:24696808

  9. River-Network Numerical Model Based on Flux Difference Split Method

    NASA Astrophysics Data System (ADS)

    Xiang, X. H.; Wu, X. L.; Wang, C. H.

    2012-04-01

    The paper proposes an implementation of a river-network numerical model for computational hydraulics studies. The numerical basis of the model is the high-resolution methodology commonly used in gas dynamics. A highly accurate numerical scheme for the Saint-Venant equations is introduced based on the flux difference splitting method, coupled with wave transport, limiters, and an entropy fix. Two issues are discussed for the model: the first is the method for constructing the boundary conditions and the second is the method for connecting the network. A partial flux difference splitting is employed for the discretization at the boundary; the characteristic direction is the critical factor in deciding which part of the split flux to use. In the network coupling process, conservation laws are applied, including mass and energy conservation at all river connection points. The scheme maintains high accuracy and good stability at the same time. The present numerical method was applied to two different benchmark problems, an idealized dam break and an irregular channel, and both confirmed that the introduced method is effective. A real river network was then tested, and the comparison between observations and the numerical results shows the high reliability of the introduced model. This research was supported by the National Natural Science Foundation of China (No. 51009045; 40930635; 41001011; 41101018; 51079038), the National Key Program for Developing Basic Science (No. 2009CB421105), the Fundamental Research Funds for the Central Universities (No. 2009B06614; 2010B00414), the National Non Profit Research Program of China (No. 200905013-8; 201101024; 20101224).

  10. Fast Geometric Method for Calculating Accurate Minimum Orbit Intersection Distances (MOIDs)

    NASA Astrophysics Data System (ADS)

    Wiźniowski, T.; Rickman, H.

    2013-06-01

    We present a new method to compute Minimum Orbit Intersection Distances (MOIDs) for arbitrary pairs of heliocentric orbits and compare it with Giovanni Gronchi's algebraic method. Our procedure is numerical and iterative, and the MOID configuration is found by geometric scanning and tuning. A basic element is the meridional plane, used for initial scanning, which contains one of the objects and is perpendicular to the orbital plane of the other. Our method also relies on an efficient tuning technique in order to zoom in on the MOID configuration, starting from the first approximation found by scanning. We work with high accuracy and take special care to avoid the risk of missing the MOID, which is inherent to our type of approach. We demonstrate that our method is fast, reliable, and flexible. It is freely available, and its Fortran source code can be downloaded from our web page.

  11. Accurate reporting of adherence to inhaled therapies in adults with cystic fibrosis: methods to calculate “normative adherence”

    PubMed Central

    Hoo, Zhe Hui; Curley, Rachael; Campbell, Michael J; Walters, Stephen J; Hind, Daniel; Wildman, Martin J

    2016-01-01

    Background Preventative inhaled treatments in cystic fibrosis will only be effective in maintaining lung health if used appropriately. An accurate adherence index should therefore reflect treatment effectiveness, but the standard method of reporting adherence, that is, as a percentage of the agreed regimen between clinicians and people with cystic fibrosis, does not account for the appropriateness of the treatment regimen. We describe two different indices of inhaled therapy adherence for adults with cystic fibrosis which take into account effectiveness, that is, “simple” and “sophisticated” normative adherence. Methods to calculate normative adherence Denominator adjustment involves fixing a minimum appropriate value based on the recommended therapy given a person’s characteristics. For simple normative adherence, the denominator is determined by the person’s Pseudomonas status. For sophisticated normative adherence, the denominator is determined by the person’s Pseudomonas status and history of pulmonary exacerbations over the previous year. Numerator adjustment involves capping the daily maximum inhaled therapy use at 100% so that medication overuse does not artificially inflate the adherence level. Three illustrative cases Case A is an example of inhaled therapy under prescription based on Pseudomonas status resulting in lower simple normative adherence compared to unadjusted adherence. Case B is an example of inhaled therapy under-prescription based on previous exacerbation history resulting in lower sophisticated normative adherence compared to unadjusted adherence and simple normative adherence. Case C is an example of nebulizer overuse exaggerating the magnitude of unadjusted adherence. Conclusion Different methods of reporting adherence can result in different magnitudes of adherence. We have proposed two methods of standardizing the calculation of adherence which should better reflect treatment effectiveness. The value of these indices can
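
    The two adjustments described above reduce to simple arithmetic; the sketch below combines a numerator capped at 100% per day with a denominator set to the recommended rather than the agreed regimen. The dose counts and regimen numbers are invented for illustration, and the clinical rules for choosing the recommended regimen are only paraphrased from the abstract.

        def normative_adherence(doses_taken_per_day, agreed_daily_doses, recommended_daily_doses):
            """Adherence (%) with (a) daily use capped at 100% so overuse cannot inflate the
            score, and (b) the denominator raised to the recommended regimen when the agreed
            regimen under-prescribes (normative denominator). Illustrative sketch only."""
            denominator = max(agreed_daily_doses, recommended_daily_doses)
            daily = [min(taken / denominator, 1.0) for taken in doses_taken_per_day]
            return 100.0 * sum(daily) / len(daily)

        # Example: 2 nebulised doses/day agreed but 3/day recommended for a chronically
        # Pseudomonas-infected adult; one week of electronically recorded use (numbers assumed).
        week = [2, 3, 4, 1, 3, 0, 3]
        print(normative_adherence(week, agreed_daily_doses=2, recommended_daily_doses=3))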

  12. Parente2: a fast and accurate method for detecting identity by descent

    PubMed Central

    Rodriguez, Jesse M.; Bercovici, Sivan; Huang, Lin; Frostig, Roy; Batzoglou, Serafim

    2015-01-01

    Identity-by-descent (IBD) inference is the problem of establishing a genetic connection between two individuals through a genomic segment that is inherited by both individuals from a recent common ancestor. IBD inference is an important preceding step in a variety of population genomic studies, ranging from demographic studies to linking genomic variation with phenotype and disease. The problem of accurate IBD detection has become increasingly challenging with the availability of large collections of human genotypes and genomes: Given a cohort’s size, a quadratic number of pairwise genome comparisons must be performed. Therefore, computation time and the false discovery rate can also scale quadratically. To enable accurate and efficient large-scale IBD detection, we present Parente2, a novel method for detecting IBD segments. Parente2 is based on an embedded log-likelihood ratio and uses a model that accounts for linkage disequilibrium by explicitly modeling haplotype frequencies. Parente2 operates directly on genotype data without the need to phase data prior to IBD inference. We evaluate Parente2’s performance through extensive simulations using real data, and we show that it provides substantially higher accuracy compared to previous state-of-the-art methods while maintaining high computational efficiency. PMID:25273070

  13. Numerical modeling of magnetic induction tomography using the impedance method.

    PubMed

    Ramos, Airton; Wolff, Julia G B

    2011-02-01

    This article discusses the impedance method in the forward calculation in magnetic induction tomography (MIT). Magnetic field and eddy current distributions were obtained numerically for a sphere in the field of a coil and were compared with an analytical model. Additionally, numerical and experimental results for phase sensitivity in MIT were obtained and compared for a cylindrical object in a planar array of sensors. The results showed that the impedance method provides results that agree very well with reality in the frequency range from 100 kHz to 20 MHz and for low conductivity objects (10 S/m or less). This opens the possibility of using this numerical approach in image reconstruction in MIT. PMID:21229327

  14. An unconditionally stable method for numerically solving solar sail spacecraft equations of motion

    NASA Astrophysics Data System (ADS)

    Karwas, Alex

    Solar sails use the endless supply of the Sun's radiation to propel spacecraft through space. The sails use the momentum transfer from the impinging solar radiation to provide thrust to the spacecraft while expending zero fuel. Recently, the first solar sail spacecraft, or sailcraft, named IKAROS completed a successful mission to Venus and proved the concept of solar sail propulsion. Sailcraft experimental data are difficult to gather due to the large expense of space travel; therefore, a reliable and accurate computational method is needed to make the process more efficient. Presented in this document is a new approach to simulating solar sail spacecraft trajectories. The new method provides unconditionally stable numerical solutions for trajectory propagation and includes an improved physical description over other methods. The unconditional stability of the new method means that a unique numerical solution is always determined. The improved physical description of the trajectory provides a numerical solution and time derivatives that are continuous throughout the entire trajectory. The error of the continuous numerical solution is also known for the entire trajectory. Optimal control for maximizing thrust is also provided within the framework of the new method. Verification of the new approach is presented through a mathematical description and through numerical simulations. The mathematical description provides details of the sailcraft equations of motion, the numerical method used to solve the equations, and the formulation for implementing the equations of motion into the numerical solver. Previous work in the field is summarized to show that the new approach can act as a replacement for previous trajectory propagation methods. A code was developed to perform the simulations, and it is also described in this document. Results of the simulations are compared to the flight data from the IKAROS mission. Comparison of the two sets of data show that the new approach

  15. Melt-rock reaction in the asthenospheric mantle: Perspectives from high-order accurate numerical simulations in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Tirupathi, S.; Schiemenz, A. R.; Liang, Y.; Parmentier, E.; Hesthaven, J.

    2013-12-01

    The style and mode of melt migration in the mantle are important to the interpretation of basalts erupted on the surface. Both grain-scale diffuse porous flow and channelized melt migration have been proposed. To better understand the mechanisms and consequences of melt migration in a heterogeneous mantle, we have undertaken a numerical study of reactive dissolution in an upwelling and viscously deformable mantle where the solubility of pyroxene increases upwards. Our setup is similar to that described in [1], except that we use a larger domain size in 2D and 3D and a new numerical method. To enable efficient simulations in 3D through parallel computing, we developed a high-order accurate numerical method for the magma dynamics problem using discontinuous Galerkin methods and constructed the problem using the numerical library deal.II [2]. Linear stability analyses of the reactive dissolution problem reveal three dynamically distinct regimes [3], and the simulations reported in this study were run in the stable regime and in the unstable wave regime, where small perturbations in porosity grow periodically. The wave regime is more relevant to melt migration beneath the mid-ocean ridges but computationally more challenging. Extending the 2D simulations in the stable regime in [1] to 3D using various combinations of sustained perturbations in porosity at the base of the upwelling column (which may result from a veined mantle), we show that the geometry and distribution of dunite channels and high-porosity melt channels are highly correlated with the inflow perturbation through superposition. Strong nonlinear interactions among compaction, dissolution, and upwelling give rise to porosity waves and high-porosity melt channels in the wave regime. These compaction-dissolution waves have well organized but time-dependent structures in the lower part of the simulation domain. High-porosity melt channels nucleate along nodal lines of the porosity waves, growing downwards. The wavelength scales

  16. Generalized weighted ratio method for accurate turbidity measurement over a wide range.

    PubMed

    Liu, Hongbo; Yang, Ping; Song, Hong; Guo, Yilu; Zhan, Shuyue; Huang, Hui; Wang, Hangzhou; Tao, Bangyi; Mu, Quanquan; Xu, Jing; Li, Dejun; Chen, Ying

    2015-12-14

    Turbidity measurement is important for water quality assessment, food safety, medicine, ocean monitoring, etc. In this paper, a method that accurately estimates the turbidity over a wide range is proposed, where the turbidity of the sample is represented as a weighted ratio of the scattered light intensities at a series of angles. An improvement in the accuracy is achieved by expanding the structure of the ratio function, thus adding more flexibility to the turbidity-intensity fitting. Experiments have been carried out with an 850 nm laser and a power meter fixed on a turntable to measure the light intensity at different angles. The results show that the relative estimation error of the proposed method is 0.58% on average for a four-angle intensity combination for all test samples with a turbidity ranging from 160 NTU to 4000 NTU. PMID:26699060
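
    A generic way to realize such a weighted ratio is to fit the weights to calibration standards by least squares; in the sketch below the ratio's functional form, the four detection angles, and the calibration data are all assumptions chosen only to illustrate the fitting step, not the paper's exact model.

        import numpy as np
        from scipy.optimize import least_squares

        def ratio_model(weights, intensities):
            # Turbidity modelled as a weighted ratio of scattered intensities at several angles:
            #   T = (a . I) / (1 + b . I)   (functional form assumed for illustration)
            n = intensities.shape[1]
            a, b = weights[:n], weights[n:]
            return intensities @ a / (1.0 + intensities @ b)

        def fit_weights(intensities, turbidity_ref):
            # Least-squares calibration of the weights against reference-turbidity standards.
            n = intensities.shape[1]
            res = least_squares(lambda w: ratio_model(w, intensities) - turbidity_ref,
                                x0=np.ones(2 * n))
            return res.x

        # Four detection angles, six calibration standards (all values made up).
        I = np.array([[0.12, 0.30, 0.55, 0.80],
                      [0.25, 0.61, 1.10, 1.58],
                      [0.51, 1.20, 2.15, 3.10],
                      [1.00, 2.45, 4.30, 6.20],
                      [2.10, 4.90, 8.70, 12.4],
                      [4.05, 9.60, 17.1, 24.6]])
        T_ref = np.array([160.0, 320.0, 640.0, 1280.0, 2560.0, 4000.0])
        w = fit_weights(I, T_ref)
        print(ratio_model(w, I))   # should track T_ref after calibration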

  17. Accurate and rapid optical characterization of an anisotropic guided structure based on a neural method.

    PubMed

    Robert, Stéphane; Battie, Yann; Jamon, Damien; Royer, Francois

    2007-04-10

    Optimal performance of integrated optical devices is obtained by the use of an accurate and reliable characterization method. The parameters of interest, i.e., the optical indices and the thickness of the waveguide structure, are calculated from effective indices by means of an inversion procedure. We demonstrate how an artificial neural network can achieve such a process. The artificial neural network used is a multilayer perceptron. The first result concerns a simulated anisotropic waveguide. The accuracy in the determination of the optical indices and the waveguide thickness is 5 × 10^-5 and 4 nm, respectively. Then an experimental application on a silica-titania thin film is performed. In addition, effective indices are measured by m-lines spectroscopy. Finally, a comparison with a classical optimization algorithm demonstrates the robustness of the neural method. PMID:17384718

  18. RAId_DbS: Method for Peptide ID using Database Search with Accurate Statistics

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Ogurtsov, Aleksey; Yu, Yi-Kuo

    2007-03-01

    The key to proteomics studies, essential in systems biology, is peptide identification. Under tandem mass spectrometry, each spectrum generated consists of a list of mass/charge peaks along with their intensities. Software analysis is then required to identify the peptide candidates that best interpret the spectrum. The library search, which compares the spectral peaks against theoretical peaks generated by each peptide in a library, is among the most popular methods. This method, although robust, lacks a good quantitative statistical underpinning. As we show, many library search algorithms suffer from statistical instability. The need for a better statistical basis prompted us to develop RAId_DbS. Taking into account the skewness in the peak intensity distribution while scoring peptides, RAId_DbS provides an accurate statistical significance assignment to each peptide candidate. RAId_DbS will be a valuable tool especially when one intends to identify proteins through peptide identifications.

  19. Spectroscopic Method for Fast and Accurate Group A Streptococcus Bacteria Detection.

    PubMed

    Schiff, Dillon; Aviv, Hagit; Rosenbaum, Efraim; Tischler, Yaakov R

    2016-02-16

    Rapid and accurate detection of pathogens is paramount to human health. Spectroscopic techniques have been shown to be viable methods for detecting various pathogens. Enhanced methods of Raman spectroscopy can discriminate unique bacterial signatures; however, many of these require precise conditions and do not have in vivo replicability. Common biological detection methods such as rapid antigen detection tests have high specificity but do not have high sensitivity. Here we developed a new method of bacteria detection that is both highly specific and highly sensitive by combining the specificity of antibody staining and the sensitivity of spectroscopic characterization. Bacteria samples, treated with a fluorescent antibody complex specific to Streptococcus pyogenes, were volumetrically normalized according to their Raman bacterial signal intensity and characterized for fluorescence, eliciting a positive result for samples containing Streptococcus pyogenes and a negative result for those without. The normalized fluorescence intensity of the Streptococcus pyogenes gave a signal that is up to 16.4 times higher than that of other bacteria samples for bacteria stained in solution and up to 12.7 times higher in solid state. This method can be very easily replicated for other bacteria species using suitable antibody-dye complexes. In addition, this method shows viability for in vivo detection as it requires minute amounts of bacteria, low laser excitation power, and short integration times in order to achieve high signal. PMID:26752013

  20. Highly accurate retrieval method of Japanese document images through a combination of morphological analysis and OCR

    NASA Astrophysics Data System (ADS)

    Katsuyama, Yutaka; Takebe, Hiroaki; Kurokawa, Koji; Saitoh, Takahiro; Naoi, Satoshi

    2001-12-01

    We have developed a method that allows Japanese document images to be retrieved more accurately by using OCR character candidate information and a conventional plain text search engine. In this method, the document image is first recognized by normal OCR to produce text. Keyword areas are then estimated from the normal OCR produced text through morphological analysis. A lattice of candidate-character codes is extracted from these areas, and then character strings are extracted from the lattice using a word-matching method in noun areas and a K-th DP-matching method in undefined word areas. Finally, these extracted character strings are added to the normal OCR produced text to improve document retrieval accuracy when using a conventional plain text search engine. Experimental results from searches of 49 OHP sheet images revealed that our method has a high recall rate of 98.2%, compared to 90.3% with a conventional method using only normal OCR produced text, while requiring about the same processing time as normal OCR.

  1. Evaluating Mesoscale Numerical Weather Predictions and Spatially Distributed Meteorologic Forcing Data for Developing Accurate SWE Forecasts over Large Mountain Basins

    NASA Astrophysics Data System (ADS)

    Hedrick, A. R.; Marks, D. G.; Winstral, A. H.; Marshall, H. P.

    2014-12-01

    The ability to forecast snow water equivalent, or SWE, in mountain catchments would benefit many different communities ranging from avalanche hazard mitigation to water resource management. Historical model runs of Isnobal, the physically based energy balance snow model, have been produced over the 2150 km² Boise River Basin for water years 2012-2014 at 100-meter resolution. Spatially distributed forcing parameters such as precipitation, wind, and relative humidity are generated from automated weather stations located throughout the watershed, and are supplied to Isnobal at hourly timesteps. Similarly, the Weather Research & Forecasting (WRF) Model provides hourly predictions of the same forcing parameters from an atmospheric physics perspective. This work aims to quantitatively compare WRF model output to the spatial meteorologic fields developed to force Isnobal, with the hopes of eventually using WRF predictions to create accurate hourly forecasts of SWE over a large mountainous basin.

  2. [A New Method of Accurately Extracting Spectral Values for Discrete Sampling Points].

    PubMed

    Lü, Zhen-zhen; Liu, Guang-ming; Yang, Jin-song

    2015-08-01

    When establishing a remote sensing inversion model, the measured data at discrete sampling points are related to the spectral values of the corresponding pixels in the remote sensing image, so that the target information can be retrieved. Accurate extraction of the spectral values is therefore essential to building the inversion model. Converting the target point layer to an ROI (region of interest) and saving the ROI as ASCII is a method researchers often use to extract spectral values. By analyzing the coordinates and spectral values extracted in ENVI using the original coordinates, we found that the extracted and original coordinates were inconsistent and that some spectral values did not belong to the pixel containing the sampling point. An inversion model built on such data cannot truly reflect the relationship between the target properties and the spectral values, so the model is meaningless. We divided each pixel into four equal parts and examined the pattern: only when a sampling point falls in the upper left quarter of a pixel is the extracted value correct. On this basis, this paper systematically studied the principle of extracting target coordinates and spectral values, summarized the rule, and proposed a new method for extracting the spectral values of the pixel in which a sampling point is located within the ENVI software environment. First, the pixel corner coordinates associated with each sampling point were extracted using the original coordinates in ENVI. Second, the quarter of the pixel in which each sampling point lies was determined by comparing the absolute differences in longitude and latitude between the original and extracted coordinates. Lastly, all points were shifted to the upper left quarter of their pixels by symmetry, and the spectral values were extracted in the same way as in the first step. The results indicated that the extracted spectrum

  3. Exploring the Use of Discontinuous Galerkin Methods for Numerical Relativity

    NASA Astrophysics Data System (ADS)

    Hebert, Francois; Kidder, Lawrence; Teukolsky, Saul; SXS Collaboration

    2015-04-01

    The limited accuracy of relativistic hydrodynamic simulations constrains our insight into several important research problems, including among others our ability to generate accurate template waveforms for black hole-neutron star mergers, or our understanding of supernova explosion mechanisms. In many codes the algorithms used to evolve the matter, based on the finite volume method, struggle to reach the desired accuracy. We aim to show improved accuracy by using a discontinuous Galerkin method. This method's attractiveness comes from its combination of spectral convergence properties for smooth solutions and robust stability properties for shocks. We present the status of our work implementing a testbed GR-hydro code using discontinuous Galerkin.

  4. COMPARING NUMERICAL METHODS FOR ISOTHERMAL MAGNETIZED SUPERSONIC TURBULENCE

    SciTech Connect

    Kritsuk, Alexei G.; Collins, David; Norman, Michael L.; Xu Hao E-mail: dccollins@lanl.gov

    2011-08-10

    Many astrophysical applications involve magnetized turbulent flows with shock waves. Ab initio star formation simulations require a robust representation of supersonic turbulence in molecular clouds on a wide range of scales imposing stringent demands on the quality of numerical algorithms. We employ simulations of supersonic super-Alfvenic turbulence decay as a benchmark test problem to assess and compare the performance of nine popular astrophysical MHD methods actively used to model star formation. The set of nine codes includes: ENZO, FLASH, KT-MHD, LL-MHD, PLUTO, PPML, RAMSES, STAGGER, and ZEUS. These applications employ a variety of numerical approaches, including both split and unsplit, finite difference and finite volume, divergence preserving and divergence cleaning, a variety of Riemann solvers, and a range of spatial reconstruction and time integration techniques. We present a comprehensive set of statistical measures designed to quantify the effects of numerical dissipation in these MHD solvers. We compare power spectra for basic fields to determine the effective spectral bandwidth of the methods and rank them based on their relative effective Reynolds numbers. We also compare numerical dissipation for solenoidal and dilatational velocity components to check for possible impacts of the numerics on small-scale density statistics. Finally, we discuss the convergence of various characteristics for the turbulence decay test and the impact of various components of numerical schemes on the accuracy of solutions. The nine codes gave qualitatively the same results, implying that they are all performing reasonably well and are useful for scientific applications. We show that the best performing codes employ a consistently high order of accuracy for spatial reconstruction of the evolved fields, transverse gradient interpolation, conservation law update step, and Lorentz force computation. The best results are achieved with divergence-free evolution of the

  5. An accurate clone-based haplotyping method by overlapping pool sequencing.

    PubMed

    Li, Cheng; Cao, Changchang; Tu, Jing; Sun, Xiao

    2016-07-01

    Chromosome-long haplotyping of human genomes is important to identify genetic variants with differing gene expression, in human evolution studies, clinical diagnosis, and other biological and medical fields. Although several methods have realized haplotyping based on sequencing technologies or population statistics, accuracy and cost are factors that prohibit their wide use. Borrowing ideas from group testing theories, we proposed a clone-based haplotyping method by overlapping pool sequencing. The clones from a single individual were pooled combinatorially and then sequenced. According to the distinct pooling pattern for each clone in the overlapping pool sequencing, alleles for the recovered variants could be assigned to their original clones precisely. Subsequently, the clone sequences could be reconstructed by linking these alleles accordingly and assembling them into haplotypes with high accuracy. To verify the utility of our method, we constructed 130 110 clones in silico for the individual NA12878 and simulated the pooling and sequencing process. Ultimately, 99.9% of variants on chromosome 1 that were covered by clones from both parental chromosomes were recovered correctly, and 112 haplotype contigs were assembled with an N50 length of 3.4 Mb and no switch errors. A comparison with current clone-based haplotyping methods indicated our method was more accurate. PMID:27095193

  6. An accurate clone-based haplotyping method by overlapping pool sequencing

    PubMed Central

    Li, Cheng; Cao, Changchang; Tu, Jing; Sun, Xiao

    2016-01-01

    Chromosome-long haplotyping of human genomes is important to identify genetic variants with differing gene expression, in human evolution studies, clinical diagnosis, and other biological and medical fields. Although several methods have realized haplotyping based on sequencing technologies or population statistics, accuracy and cost are factors that prohibit their wide use. Borrowing ideas from group testing theories, we proposed a clone-based haplotyping method by overlapping pool sequencing. The clones from a single individual were pooled combinatorially and then sequenced. According to the distinct pooling pattern for each clone in the overlapping pool sequencing, alleles for the recovered variants could be assigned to their original clones precisely. Subsequently, the clone sequences could be reconstructed by linking these alleles accordingly and assembling them into haplotypes with high accuracy. To verify the utility of our method, we constructed 130 110 clones in silico for the individual NA12878 and simulated the pooling and sequencing process. Ultimately, 99.9% of variants on chromosome 1 that were covered by clones from both parental chromosomes were recovered correctly, and 112 haplotype contigs were assembled with an N50 length of 3.4 Mb and no switch errors. A comparison with current clone-based haplotyping methods indicated our method was more accurate. PMID:27095193

  7. A highly accurate method for the determination of mass and center of mass of a spacecraft

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.; Trubert, M. R.; Egwuatu, A.

    1978-01-01

    An extremely accurate method for the measurement of mass and the lateral center of mass of a spacecraft has been developed. The method was needed for the Voyager spacecraft mission requirement which limited the uncertainty in the knowledge of lateral center of mass of the spacecraft system weighing 750 kg to be less than 1.0 mm (0.04 in.). The method consists of using three load cells symmetrically located at 120 deg apart on a turntable with respect to the vertical axis of the spacecraft and making six measurements for each load cell. These six measurements are taken by cyclic rotations of the load cell turntable and of the spacecraft, about the vertical axis of the measurement fixture. This method eliminates all alignment, leveling, and load cell calibration errors for the lateral center of mass determination, and permits a statistical best fit of the measurement data. An associated data reduction computer program called MASCM has been written to implement this method and has been used for the Voyager spacecraft.
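
    The core of the measurement is a force and moment balance over the three load cells. The sketch below shows that balance for a single set of readings under idealized assumptions (perfect alignment and calibration); the cyclic rotations of the turntable and spacecraft that cancel alignment and calibration errors in the actual method are not reproduced here, and the radius and force values are made up for illustration.

```python
import numpy as np

# Load cells 120 deg apart on a circle of radius R (m) about the vertical axis.
R = 0.75                                   # illustrative fixture radius, m
phi = np.deg2rad([0.0, 120.0, 240.0])      # load-cell azimuths
cells = R * np.column_stack([np.cos(phi), np.sin(phi)])   # (3, 2) positions

F = np.array([2452.0, 2480.0, 2427.0])     # illustrative load-cell readings, N
g = 9.80665

mass = F.sum() / g                         # total mass from the force balance
# Moment balance about the vertical axis: sum_i F_i * r_i = (sum_i F_i) * r_cm
r_cm = (F @ cells) / F.sum()               # lateral center of mass, m

print(f"mass = {mass:.1f} kg")
print(f"x_cm = {1e3 * r_cm[0]:+.2f} mm, y_cm = {1e3 * r_cm[1]:+.2f} mm")
```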

  8. A novel gas-droplet numerical method for spray combustion

    NASA Technical Reports Server (NTRS)

    Chen, C. P.; Shang, H. M.; Jiang, Y.

    1991-01-01

    This paper presents a non-iterative numerical technique for computing time-dependent gas-droplet flows. The method is a fully-interacting combination of Eulerian fluid and Lagrangian particle calculation. The interaction calculations between the two phases are formulated on a pressure-velocity coupling procedure based on the operator-splitting technique. This procedure eliminates the global iterations required in the conventional particle-source-in-cell (PSIC) procedure. Turbulent dispersion calculations are treated by a stochastic procedure. Numerical calculations and comparisons with available experimental data, as well as efficiency assessments are given for some sprays typical of spray combustion applications.

  9. Accurate and automatic extrinsic calibration method for blade measurement system integrated by different optical sensors

    NASA Astrophysics Data System (ADS)

    He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu

    2014-11-01

    Fast and precise 3D inspection systems are in great demand in modern manufacturing processes. At present, the available sensors have their own pros and cons, and there is hardly an all-purpose sensor able to handle complex inspection tasks accurately and effectively. The prevailing solution is to integrate multiple sensors and take advantage of their respective strengths. To obtain a holistic 3D profile, the data from the different sensors must be registered into a coherent coordinate system. However, some complex-shaped objects, such as blades, have thin-wall features for which the ICP registration method becomes unstable. It is therefore very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic parameter calibration method for a blade measurement system integrating different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated on a multi-axis motion platform, and the sensors can be moved optimally to any desired position on the object's surface. To simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation roughly aligns the artifact datasets obtained by the FPS and CHS without any manual operation or data pre-processing, and the Generalized Gauss-Markoff model is then used to estimate the optimal transformation parameters. Experiments on a blade, in which several sampled patches are merged into one point cloud, verify the performance of the proposed method.

  10. A numerical homogenization method for heterogeneous, anisotropic elastic media based on multiscale theory

    DOE PAGESBeta

    Gao, Kai; Chung, Eric T.; Gibson, Richard L.; Fu, Shubin; Efendiev, Yalchin

    2015-06-05

    The development of reliable methods for upscaling fine scale models of elastic media has long been an important topic for rock physics and applied seismology. Several effective medium theories have been developed to provide elastic parameters for materials such as finely layered media or randomly oriented or aligned fractures. In such cases, the analytic solutions for upscaled properties can be used for accurate prediction of wave propagation. However, such theories cannot be applied directly to homogenize elastic media with more complex, arbitrary spatial heterogeneity. We therefore propose a numerical homogenization algorithm based on multiscale finite element methods for simulating elastic wave propagation in heterogeneous, anisotropic elastic media. Specifically, our method used multiscale basis functions obtained from a local linear elasticity problem with appropriately defined boundary conditions. Homogenized, effective medium parameters were then computed using these basis functions, and the approach applied a numerical discretization that is similar to the rotated staggered-grid finite difference scheme. Comparisons of the results from our method and from conventional, analytical approaches for finely layered media showed that the homogenization reliably estimated elastic parameters for this simple geometry. Additional tests examined anisotropic models with arbitrary spatial heterogeneity where the average size of the heterogeneities ranged from several centimeters to several meters, and the ratio between the dominant wavelength and the average size of the arbitrary heterogeneities ranged from 10 to 100. Comparisons to finite-difference simulations proved that the numerical homogenization was equally accurate for these complex cases.

  11. A numerical homogenization method for heterogeneous, anisotropic elastic media based on multiscale theory

    SciTech Connect

    Gao, Kai; Chung, Eric T.; Gibson, Richard L.; Fu, Shubin; Efendiev, Yalchin

    2015-06-05

    The development of reliable methods for upscaling fine scale models of elastic media has long been an important topic for rock physics and applied seismology. Several effective medium theories have been developed to provide elastic parameters for materials such as finely layered media or randomly oriented or aligned fractures. In such cases, the analytic solutions for upscaled properties can be used for accurate prediction of wave propagation. However, such theories cannot be applied directly to homogenize elastic media with more complex, arbitrary spatial heterogeneity. We therefore propose a numerical homogenization algorithm based on multiscale finite element methods for simulating elastic wave propagation in heterogeneous, anisotropic elastic media. Specifically, our method used multiscale basis functions obtained from a local linear elasticity problem with appropriately defined boundary conditions. Homogenized, effective medium parameters were then computed using these basis functions, and the approach applied a numerical discretization that is similar to the rotated staggered-grid finite difference scheme. Comparisons of the results from our method and from conventional, analytical approaches for finely layered media showed that the homogenization reliably estimated elastic parameters for this simple geometry. Additional tests examined anisotropic models with arbitrary spatial heterogeneity where the average size of the heterogeneities ranged from several centimeters to several meters, and the ratio between the dominant wavelength and the average size of the arbitrary heterogeneities ranged from 10 to 100. Comparisons to finite-difference simulations proved that the numerical homogenization was equally accurate for these complex cases.

  12. The TAB method for numerical calculation of spray droplet breakup

    NASA Astrophysics Data System (ADS)

    Orourke, P. J.; Amsden, A. A.

    A short history is given of the major milestones in the development of the stochastic particle method for calculating liquid fuel sprays. The most recent advance has been the discovery of the importance of drop breakup in engine sprays. A new method, called TAB, for calculating drop breakup is presented. Some theoretical properties of the method are derived; its numerical implementation in the computer program KIVA is described; and comparisons are presented between TAB-method calculations and experiments and calculations using another breakup model.

  13. Simple numerical method for predicting steady compressible flows

    NASA Technical Reports Server (NTRS)

    Vonlavante, Ernst; Nelson, N. Duane

    1986-01-01

    A numerical method for solving the isenthalpic form of the governing equations for compressible viscous and inviscid flows was developed. The method was based on the concept of flux vector splitting in its implicit form. The method was tested on several demanding inviscid and viscous configurations. Two different forms of the implicit operator were investigated. The time marching to steady state was accelerated by the implementation of the multigrid procedure. Its various forms very effectively increased the rate of convergence of the present scheme. High quality steady state results were obtained in most of the test cases; these required only short computational times due to the relative efficiency of the basic method.

  14. Singularity Preserving Numerical Methods for Boundary Integral Equations

    NASA Technical Reports Server (NTRS)

    Kaneko, Hideaki (Principal Investigator)

    1996-01-01

    In the past twelve months (May 8, 1995 - May 8, 1996), under the cooperative agreement with Division of Multidisciplinary Optimization at NASA Langley, we have accomplished the following five projects: a note on the finite element method with singular basis functions; numerical quadrature for weakly singular integrals; superconvergence of degenerate kernel method; superconvergence of the iterated collocation method for Hammerstein equations; and singularity preserving Galerkin method for Hammerstein equations with logarithmic kernel. This final report consists of five papers describing these projects. Each project is preceded by a brief abstract.

  15. A fast and accurate method for computing the Sunyaev-Zel'dovich signal of hot galaxy clusters

    NASA Astrophysics Data System (ADS)

    Chluba, Jens; Nagai, Daisuke; Sazonov, Sergey; Nelson, Kaylea

    2012-10-01

    New-generation ground- and space-based cosmic microwave background experiments have ushered in discoveries of massive galaxy clusters via the Sunyaev-Zel'dovich (SZ) effect, providing a new window for studying cluster astrophysics and cosmology. Many of the newly discovered, SZ-selected clusters contain hot intracluster plasma (kTe ≳ 10 keV) and exhibit disturbed morphology, indicative of frequent mergers with large peculiar velocity (v ≳ 1000 km s-1). It is well known that for the interpretation of the SZ signal from hot, moving galaxy clusters, relativistic corrections must be taken into account, and in this work, we present a fast and accurate method for computing these effects. Our approach is based on an alternative derivation of the Boltzmann collision term which provides new physical insight into the sources of different kinematic corrections in the scattering problem. In contrast to previous works, this allows us to obtain a clean separation of kinematic and scattering terms. We also briefly mention additional complications connected with kinematic effects that should be considered when interpreting future SZ data for individual clusters. One of the main outcomes of this work is SZPACK, a numerical library which allows very fast and precise (≲0.001 per cent at frequencies hν ≲ 20kTγ) computation of the SZ signals up to high electron temperature (kTe ≃ 25 keV) and large peculiar velocity (v/c ≃ 0.01). The accuracy is well beyond the current and future precision of SZ observations and practically eliminates uncertainties which are usually overcome with more expensive numerical evaluation of the Boltzmann collision term. Our new approach should therefore be useful for analysing future high-resolution, multifrequency SZ observations as well as computing the predicted SZ effect signals from numerical simulations.

  16. A numerical method for interface problems in elastodynamics

    NASA Technical Reports Server (NTRS)

    Mcghee, D. S.

    1984-01-01

    The numerical implementation of a formulation for a class of interface problems in elastodynamics is discussed. This formulation combines the use of the finite element and boundary integral methods to represent the interior and the exterior regions, respectively. In particular, the response of a semicylindrical alluvial valley in a homogeneous halfspace to incident antiplane SH waves is considered to determine the accuracy and convergence of the numerical procedure. Numerical results are obtained from several combinations of the incidence angle, frequency of excitation, and relative stiffness between the inclusion and the surrounding halfspace. The results tend to confirm the theoretical estimates that the convergence is of order h² for the piecewise linear elements used. It was also observed that the accuracy decreases as the frequency of excitation increases or as the relative stiffness of the inclusion decreases.

  17. An accurate and linear-scaling method for calculating charge-transfer excitation energies and diabatic couplings

    SciTech Connect

    Pavanello, Michele; Van Voorhis, Troy; Visscher, Lucas; Neugebauer, Johannes

    2013-02-07

    Quantum-mechanical methods that are both computationally fast and accurate are not yet available for electronic excitations having charge transfer character. In this work, we present a significant step forward towards this goal for those charge transfer excitations that take place between non-covalently bound molecules. In particular, we present a method that scales linearly with the number of non-covalently bound molecules in the system and is based on a two-pronged approach: The molecular electronic structure of broken-symmetry charge-localized states is obtained with the frozen density embedding formulation of subsystem density-functional theory; subsequently, in a post-SCF calculation, the full-electron Hamiltonian and overlap matrix elements among the charge-localized states are evaluated with an algorithm which takes full advantage of the subsystem DFT density partitioning technique. The method is benchmarked against coupled-cluster calculations and achieves chemical accuracy for the systems considered for intermolecular separations ranging from hydrogen-bond distances to tens of Angstroms. Numerical examples are provided for molecular clusters comprised of up to 56 non-covalently bound molecules.

  18. A more accurate method for measurement of tuberculocidal activity of disinfectants.

    PubMed Central

    Ascenzi, J M; Ezzell, R J; Wendt, T M

    1987-01-01

    The current Association of Official Analytical Chemists method for testing tuberculocidal activity of disinfectants has been shown to be inaccurate and to have a high degree of variability. An alternate test method is proposed which is more accurate, more precise, and quantitative. A suspension of Mycobacterium bovis BCG was exposed to a variety of disinfectant chemicals and a kill curve was constructed from quantitative data. Data are presented that show the discrepancy between current claims, determined by the Association of Official Analytical Chemists method, of selected commercially available products and claims generated by the proposed method. The effects of different recovery media were examined. The data indicated that Mycobacteria 7H11 and Middlebrook 7H10 agars were equal in recovery of the different chemically treated cells, with Lowenstein-Jensen agar having approximately the same recovery rate but requiring incubation for up to 3 weeks longer for countability. The kill curves generated for several different chemicals were reproducible, as indicated by the standard deviations of the slopes and intercepts of the linear regression curves. PMID:3314707

  19. Distance scaling method for accurate prediction of slowly varying magnetic fields in satellite missions

    NASA Astrophysics Data System (ADS)

    Zacharias, Panagiotis P.; Chatzineofytou, Elpida G.; Spantideas, Sotirios T.; Capsalis, Christos N.

    2016-07-01

    In the present work, the determination of the magnetic behavior of localized magnetic sources from near-field measurements is examined. The distance power law of the magnetic field fall-off is used in various cases to accurately predict the magnetic signature of an equipment under test (EUT) consisting of multiple alternating current (AC) magnetic sources. Therefore, parameters concerning the location of the observation points (magnetometers) are studied towards this scope. The results clearly show that these parameters are independent of the EUT's size and layout. Additionally, the techniques developed in the present study enable the placing of the magnetometers close to the EUT, thus achieving high signal-to-noise ratio (SNR). Finally, the proposed method is verified by real measurements, using a mobile phone as an EUT.
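
    The prediction step rests on the power-law fall-off of the field with distance, |B| ∝ r⁻ⁿ. Below is a minimal sketch, assuming synthetic near-field readings and a simple log-log least-squares fit of the exponent and amplitude before extrapolating to a far observation point; the distances and field values are illustrative.

```python
import numpy as np

# Synthetic near-field measurements of an AC source: B(r) = B1 * r**(-n)
rng = np.random.default_rng(1)
r = np.array([0.3, 0.4, 0.5, 0.7, 1.0])           # magnetometer distances, m
n_true, B1_true = 3.0, 2.0e-6                     # dipole-like fall-off, tesla at 1 m
B = B1_true * r**(-n_true) * (1.0 + 0.02 * rng.standard_normal(r.size))

# Fit log B = log B1 - n log r by linear least squares.
A = np.column_stack([np.ones_like(r), -np.log(r)])
(logB1, n_fit), *_ = np.linalg.lstsq(A, np.log(B), rcond=None)

# Extrapolate the magnetic signature to a far observation point.
r_far = 3.0
B_far = np.exp(logB1) * r_far**(-n_fit)
print(f"fitted exponent n = {n_fit:.2f}, predicted |B| at {r_far} m = {B_far:.3e} T")
```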

  20. An Inexpensive, Accurate, and Precise Wet-Mount Method for Enumerating Aquatic Viruses

    PubMed Central

    Cunningham, Brady R.; Brum, Jennifer R.; Schwenck, Sarah M.; Sullivan, Matthew B.

    2015-01-01

    Viruses affect biogeochemical cycling, microbial mortality, gene flow, and metabolic functions in diverse environments through infection and lysis of microorganisms. Fundamental to quantitatively investigating these roles is the determination of viral abundance in both field and laboratory samples. One current, widely used method to accomplish this with aquatic samples is the “filter mount” method, in which samples are filtered onto costly 0.02-μm-pore-size ceramic filters for enumeration of viruses by epifluorescence microscopy. Here we describe a cost-effective (ca. 500-fold-lower materials cost) alternative virus enumeration method in which fluorescently stained samples are wet mounted directly onto slides, after optional chemical flocculation of viruses in samples with viral concentrations of <5 × 10⁷ viruses ml⁻¹. The concentration of viruses in the sample is then determined from the ratio of viruses to a known concentration of added microsphere beads via epifluorescence microscopy. Virus concentrations obtained by using this wet-mount method, with and without chemical flocculation, were significantly correlated with, and had precision equivalent to, those obtained by the filter mount method across concentrations ranging from 2.17 × 10⁶ to 1.37 × 10⁸ viruses ml⁻¹ when tested by using cultivated viral isolates and natural samples from marine and freshwater environments. In summary, the wet-mount method is significantly less expensive than the filter mount method and is appropriate for rapid, precise, and accurate enumeration of aquatic viruses over a wide range of viral concentrations (≥1 × 10⁶ viruses ml⁻¹) encountered in field and laboratory samples. PMID:25710369
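
    The counting arithmetic behind the wet-mount method is straightforward: the virus concentration follows from the virus-to-bead count ratio and the known concentration of added microspheres. A minimal sketch with made-up counts and an assumed dilution factor:

```python
# Wet-mount enumeration: viruses per ml follows from the virus-to-bead count
# ratio and the known concentration of added beads.  Numbers are illustrative.
beads_added_per_ml = 1.0e7          # known microsphere concentration in the mount
dilution_factor = 10.0              # sample dilution before staining (if any)

virus_counts = [212, 198, 225, 240, 205]   # per microscope field
bead_counts = [41, 38, 45, 47, 40]

ratio = sum(virus_counts) / sum(bead_counts)
viruses_per_ml = ratio * beads_added_per_ml * dilution_factor
print(f"estimated concentration: {viruses_per_ml:.2e} viruses per ml")
```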

  1. Numerical simulation methods for the Rouse model in flow

    NASA Astrophysics Data System (ADS)

    Howard, Michael P.; Milner, Scott T.

    2011-11-01

    Simulation of the Rouse model in flow underlies a great variety of numerical investigations of polymer dynamics, in both entangled melts and solutions and in dilute solution. Typically a simple explicit stochastic Euler method is used to evolve the Rouse model. Here we compare this approach to an operator splitting method, which splits the evolution operator into stochastic linear and deterministic nonlinear parts and takes advantage of an analytical solution for the linear Rouse model in terms of the noise history. We show that this splitting method has second-order weak convergence, whereas the Euler method has only first-order weak convergence. Furthermore, the splitting method is unconditionally stable, in contrast to the limited stability range of the Euler method. Similar splitting methods are applicable to a broad class of problems in stochastic dynamics in which noise competes with ordering and flow to determine steady-state order parameter structures.
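
    For reference, the sketch below shows the baseline explicit stochastic Euler step for a discrete Rouse chain in simple shear flow, i.e. the scheme the splitting method is compared against; it does not implement the operator-splitting update itself. Bead number, time step, and parameter values are illustrative.

```python
import numpy as np

# Explicit stochastic Euler step for a free-draining Rouse chain in simple shear
# flow (the baseline scheme; the splitting variant instead integrates the linear
# spring/noise part analytically in normal modes).
rng = np.random.default_rng(2)

N, dt, n_steps = 32, 1e-3, 20000       # beads, time step, steps (illustrative units)
k, zeta, kT = 3.0, 1.0, 1.0            # spring constant, friction, thermal energy
gamma_dot = 1.0                        # shear rate; flow field v = (gamma_dot * y, 0, 0)

R = rng.standard_normal((N, 3))        # bead positions
for _ in range(n_steps):
    spring = np.zeros_like(R)
    spring[1:] += R[:-1] - R[1:]       # force from the previous bead
    spring[:-1] += R[1:] - R[:-1]      # force from the next bead
    flow = np.zeros_like(R)
    flow[:, 0] = gamma_dot * R[:, 1]   # affine convection by the shear flow
    noise = np.sqrt(2.0 * kT * dt / zeta) * rng.standard_normal(R.shape)
    R += dt * (k / zeta * spring + flow) + noise

# end-to-end vector as a quick sanity check on the steady stretched state
Ree = R[-1] - R[0]
print("end-to-end vector:", np.round(Ree, 2))
```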

  2. Numerical Polynomial Homotopy Continuation Method and String Vacua

    DOE PAGESBeta

    Mehta, Dhagash

    2011-01-01

    Finding vacua for the four-dimensional effective theories for supergravity which descend from flux compactifications and analyzing them according to their stability is one of the central problems in string phenomenology. Except for some simple toy models, it is, however, difficult to find all the vacua analytically. Recently developed algorithmic methods based on symbolic computer algebra can be of great help in the more realistic models. However, they suffer from serious algorithmic complexities and are limited to small system sizes. In this paper, we review a numerical method called the numerical polynomial homotopy continuation (NPHC) method, first used in the areas of lattice field theories, which by construction finds all of the vacua of a given potential that is known to have only isolated solutions. The NPHC method is known to suffer from no major algorithmic complexities and is embarrassingly parallelizable, and hence its applicability goes way beyond the existing symbolic methods. We first solve a simple toy model as a warm-up example to demonstrate the NPHC method at work. We then show that all the vacua of a more complicated model of a compactified M theory model, which has an SU(3) structure, can be obtained by using a desktop machine in just about an hour, a feat which was reported to be prohibitively difficult by the existing symbolic methods. Finally, we compare the various technicalities between the two methods.

  3. Projected discrete ordinates methods for numerical transport problems

    SciTech Connect

    Larsen, E.W.

    1985-01-01

    A class of Projected Discrete-Ordinates (PDO) methods is described for obtaining iterative solutions of discrete-ordinates problems with convergence rates comparable to those observed using Diffusion Synthetic Acceleration (DSA). The spatially discretized PDO solutions are generally not equal to the DSA solutions, but unlike DSA, which requires great care in the use of spatial discretizations to preserve stability, the PDO solutions remain stable and rapidly convergent with essentially arbitrary spatial discretizations. Numerical results are presented which illustrate the rapid convergence and the accuracy of solutions obtained using PDO methods with commonplace differencing methods.

  4. Computational methods for aerodynamic design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Peeters, M. F.

    1983-01-01

    Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.

  5. An accurate and nondestructive GC method for determination of cocaine on US paper currency.

    PubMed

    Zuo, Yuegang; Zhang, Kai; Wu, Jingping; Rego, Christopher; Fritz, John

    2008-07-01

    The presence of cocaine on US paper currency has been known for a long time. Banknotes become contaminated during the exchange, storage, and abuse of cocaine. The analysis of cocaine on various denominations of US banknotes in general circulation can provide law enforcement circles and forensic epidemiologists with objective and timely information on the epidemiology of illicit drug use and on how to differentiate money contaminated in general circulation from banknotes used in drug transactions. A simple, nondestructive, and accurate capillary gas chromatographic method has been developed in this study for the determination of cocaine on various denominations of US banknotes. The method comprises a fast ultrasonic extraction using water as the solvent, followed by an SPE cleanup with a C(18) cartridge and capillary GC separation, identification, and quantification. This nondestructive analytical method has been successfully applied to determine cocaine contamination in US paper currency of all denominations. The standard calibration curve was linear over the concentration range from the LOQ (2.00 ng/mL) to 100 microg/mL, with an RSD of less than 2.0%. Cocaine was detected in 67% of the circulated banknotes collected in Southeastern Massachusetts, in amounts ranging from approximately 2 ng to 49.4 microg per note. On average, $5, $10, $20, and $50 denominations contained higher amounts of cocaine than $1 and $100 denominations of US banknotes. PMID:18646272

  6. A Method for Accurate Reconstructions of the Upper Airway Using Magnetic Resonance Images

    PubMed Central

    Xiong, Huahui; Huang, Xiaoqing; Li, Yong; Li, Jianhong; Xian, Junfang; Huang, Yaqi

    2015-01-01

    Objective The purpose of this study is to provide an optimized method to reconstruct the structure of the upper airway (UA) based on magnetic resonance imaging (MRI) that can faithfully show the anatomical structure with a smooth surface without artificial modifications. Methods MRI was performed on the head and neck of a healthy young male participant in the axial, coronal and sagittal planes to acquire images of the UA. The level set method was used to segment the boundary of the UA. The boundaries in the three scanning planes were registered according to the positions of crossing points and anatomical characteristics using a Matlab program. Finally, the three-dimensional (3D) NURBS (Non-Uniform Rational B-Splines) surface of the UA was constructed using the registered boundaries in all three different planes. Results A smooth 3D structure of the UA was constructed, which captured the anatomical features from the three anatomical planes, particularly the location of the anterior wall of the nasopharynx. The volume and area of every cross section of the UA can be calculated from the constructed 3D model of UA. Conclusions A complete scheme of reconstruction of the UA was proposed, which can be used to measure and evaluate the 3D upper airway accurately. PMID:26066461

  7. Fast and Accurate Microplate Method (Biolog MT2) for Detection of Fusarium Fungicides Resistance/Sensitivity

    PubMed Central

    Frąc, Magdalena; Gryta, Agata; Oszust, Karolina; Kotowicz, Natalia

    2016-01-01

    The need to find fungicides against Fusarium is a key step in chemical plant protection and in using appropriate chemical agents. Existing, conventional methods of evaluating the resistance of Fusarium isolates to fungicides are costly, time-consuming and potentially environmentally harmful due to the use of large amounts of potentially toxic chemicals. Therefore, the development of fast, accurate and effective detection methods for Fusarium resistance to fungicides is urgently required. The MT2 microplate (Biolog™) method is traditionally used for bacterial identification and for evaluating the ability of bacteria to utilize different carbon substrates. However, to the best of our knowledge, there are no reports concerning the use of this technical tool to determine the fungicide resistance of Fusarium isolates. For this reason, the objectives of this study were to develop a fast method for detecting Fusarium resistance to fungicides and to validate the effectiveness of the approach against the traditional hole-plate assay. In the presented study, the MT2 microplate-based assay was evaluated for potential use as an alternative resistance detection method. This was carried out using three commercially available fungicides, containing the following active substances: triazoles (tebuconazole), benzimidazoles (carbendazim) and strobilurins (azoxystrobin), at six concentrations (0, 0.0005, 0.005, 0.05, 0.1, 0.2%), for nine selected Fusarium isolates. In this study, the particular concentrations of each fungicide were loaded into the MT2 microplate wells. The wells were inoculated with Fusarium mycelium suspended in PM4-IF inoculating fluid. Before inoculation the suspension was standardized to 75% transmittance for each isolate. The traditional hole-plate method was used as a control assay. The fungicide concentrations in the control method were the following: 0, 0.0005, 0.005, 0.05, 0.5, 1, 2, 5, 10, 25, and 50%. Strong relationships between MT2 microplate and traditional hole

  8. Fast and Accurate Microplate Method (Biolog MT2) for Detection of Fusarium Fungicides Resistance/Sensitivity.

    PubMed

    Frąc, Magdalena; Gryta, Agata; Oszust, Karolina; Kotowicz, Natalia

    2016-01-01

    The need to find fungicides against Fusarium is a key step in chemical plant protection and in using appropriate chemical agents. Existing, conventional methods of evaluating the resistance of Fusarium isolates to fungicides are costly, time-consuming and potentially environmentally harmful due to the use of large amounts of potentially toxic chemicals. Therefore, the development of fast, accurate and effective detection methods for Fusarium resistance to fungicides is urgently required. The MT2 microplate (Biolog™) method is traditionally used for bacterial identification and for evaluating the ability of bacteria to utilize different carbon substrates. However, to the best of our knowledge, there are no reports concerning the use of this technical tool to determine the fungicide resistance of Fusarium isolates. For this reason, the objectives of this study were to develop a fast method for detecting Fusarium resistance to fungicides and to validate the effectiveness of the approach against the traditional hole-plate assay. In the presented study, the MT2 microplate-based assay was evaluated for potential use as an alternative resistance detection method. This was carried out using three commercially available fungicides, containing the following active substances: triazoles (tebuconazole), benzimidazoles (carbendazim) and strobilurins (azoxystrobin), at six concentrations (0, 0.0005, 0.005, 0.05, 0.1, 0.2%), for nine selected Fusarium isolates. In this study, the particular concentrations of each fungicide were loaded into the MT2 microplate wells. The wells were inoculated with Fusarium mycelium suspended in PM4-IF inoculating fluid. Before inoculation the suspension was standardized to 75% transmittance for each isolate. The traditional hole-plate method was used as a control assay. The fungicide concentrations in the control method were the following: 0, 0.0005, 0.005, 0.05, 0.5, 1, 2, 5, 10, 25, and 50%. Strong relationships between MT2 microplate and traditional hole

  9. Numerical quadrature methods for integrals of singular periodic functions and their application to singular and weakly singular integral equations

    NASA Technical Reports Server (NTRS)

    Sidi, A.; Israeli, M.

    1986-01-01

    High accuracy numerical quadrature methods for integrals of singular periodic functions are proposed. These methods are based on the appropriate Euler-Maclaurin expansions of trapezoidal rule approximations and their extrapolations. They are used to obtain accurate quadrature methods for the solution of singular and weakly singular Fredholm integral equations. Such periodic equations are used in the solution of planar elliptic boundary value problems, elasticity, potential theory, conformal mapping, boundary element methods, free surface flows, etc. The use of the quadrature methods is demonstrated with numerical examples.
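
    The building block of these corrected quadratures is the trapezoidal rule on equispaced nodes, whose Euler-Maclaurin correction terms vanish for smooth periodic integrands, so its error decays faster than any power of the mesh size. The sketch below illustrates that baseline behavior on a smooth periodic test integrand; it does not implement the singularity corrections of the proposed methods.

```python
import numpy as np
from scipy.special import i0   # modified Bessel function for the exact value

# Periodic trapezoidal rule on n equispaced nodes over [0, 2*pi).
def periodic_trapezoid(f, n):
    x = 2.0 * np.pi * np.arange(n) / n
    return 2.0 * np.pi * np.mean(f(x))

f = lambda x: np.exp(np.cos(x))        # smooth 2*pi-periodic test integrand
exact = 2.0 * np.pi * i0(1.0)          # integral of exp(cos x) over one period

for n in (4, 8, 16, 32):
    approx = periodic_trapezoid(f, n)
    print(f"n = {n:2d}   error = {abs(approx - exact):.2e}")
```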

  10. Automatic numerical integration methods for Feynman integrals through 3-loop

    NASA Astrophysics Data System (ADS)

    de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Olagbemi, O.

    2015-05-01

    We give numerical integration results for Feynman loop diagrams through 3-loop such as those covered by Laporta [1]. The methods are based on automatic adaptive integration, using iterated integration and extrapolation with programs from the QUADPACK package, or multivariate techniques from the ParInt package. The DQAGS algorithm from QUADPACK accommodates boundary singularities of fairly general types. ParInt is a package for multivariate integration layered over MPI (Message Passing Interface), which runs on clusters and incorporates advanced parallel/distributed techniques such as load balancing among processes that may be distributed over a network of nodes. Results are included for 3-loop self-energy diagrams without IR (infra-red) or UV (ultra-violet) singularities. A procedure based on iterated integration and extrapolation yields a novel method of numerical regularization for integrals with UV terms, and is applied to a set of 2-loop self-energy diagrams with UV singularities.

  11. Numerical method for shear bands in ductile metal with inclusions

    SciTech Connect

    Plohr, Jee Yeon N; Plohr, Bradley J

    2010-01-01

    A numerical method for mesoscale simulation of high strain-rate loading of ductile metal containing inclusions is described. Because of small-scale inhomogeneities, such a composite material is prone to localized shear deformation (adiabatic shear bands). The modeling framework is the Generalized Method of Cells of Paley and Aboudi [Mech. Materials, vol. 14, pp. 127-139, 1992], which ensures that the micromechanical response of the material is reflected in the behavior of the composite at the mesoscale. To calculate the effective plastic strain rate when shear bands are present, the analytic and numerical analysis of shear bands by Glimm, Plohr, and Sharp [Mech. Materials, vol. 24, pp. 31-41, 1996] is adapted and extended.

  12. Numerical Method for the Astronomical Almanac and Orbit Calculations

    NASA Astrophysics Data System (ADS)

    Kim, Kap-Sung

    1993-12-01

    We have calculated the astronomical almanac for 1994 and simulated satellite orbit trajectories, considering all perturbative forces, for various initial conditions. In this work, the Gauss-Jackson multistep integration method has been used to solve our basic equation of motion with high numerical accuracy. Our results agree well with the Astronomical Almanac data distributed by JPL of NASA, and the orbit simulations run with high speed, stability, and excellent round-off error accumulation compared with other numerical methods. So that the almanac and orbit calculations can be carried out easily by anyone with a personal computer, we have written a program with a graphical user interface that provides various menus for the detailed tasks, selected with a mouse.

  13. Accurate energy bands calculated by the hybrid quasiparticle self-consistent GW method implemented in the ecalj package

    NASA Astrophysics Data System (ADS)

    Deguchi, Daiki; Sato, Kazunori; Kino, Hiori; Kotani, Takao

    2016-05-01

    We have recently implemented a new version of the quasiparticle self-consistent GW (QSGW) method in the ecalj package released at http://github.com/tkotani/ecalj. Since the new version of the ecalj package is numerically stable and more accurate than the previous versions, we can perform calculations easily without being bothered with tuning input parameters. Here we examine its ability to describe energy band properties, e.g., band-gap energy, eigenvalues at special points, and effective mass, for a variety of semiconductors and insulators. We treat C, Si, Ge, Sn, SiC (in 2H, 3C, and 4H structures), (Al, Ga, In) × (N, P, As, Sb), (Zn, Cd, Mg) × (O, S, Se, Te), SiO2, HfO2, ZrO2, SrTiO3, PbS, PbTe, MnO, NiO, and HgO. We propose that a hybrid QSGW method, where we mix 80% of QSGW and 20% of LDA, gives universally good agreement with experiments for these materials.

  14. Asymptotic and Numerical Methods for Rapidly Rotating Buoyant Flow

    NASA Astrophysics Data System (ADS)

    Grooms, Ian G.

    This thesis documents three investigations carried out in pursuance of a doctoral degree in applied mathematics at the University of Colorado (Boulder). The first investigation concerns the properties of rotating Rayleigh-Benard convection -- thermal convection in a rotating infinite plane layer between two constant-temperature boundaries. It is noted that in certain parameter regimes convective Taylor columns appear which dominate the dynamics, and a semi-analytical model of these is presented. Investigation of the columns and of various other properties of the flow is ongoing. The second investigation concerns the interactions between planetary-scale and mesoscale dynamics in the oceans. Using multiple-scale asymptotics the possible connections between planetary geostrophic and quasigeostrophic dynamics are investigated, and three different systems of coupled equations are derived. Possible use of these equations in conjunction with the method of superparameterization, and extension of the asymptotic methods to the interactions between mesoscale and submesoscale dynamics is ongoing. The third investigation concerns the linear stability properties of semi-implicit methods for the numerical integration of ordinary differential equations, focusing in particular on the linear stability of IMEX (Implicit-Explicit) methods and exponential integrators applied to systems of ordinary differential equations arising in the numerical solution of spatially discretized nonlinear partial differential equations containing both dispersive and dissipative linear terms. While these investigations may seem unrelated at first glance, some reflection shows that they are in fact closely linked. The investigation of rotating convection makes use of single-space, multiple-time-scale asymptotics to deal with dynamics strongly constrained by rotation. Although the context of thermal convection in an infinite layer seems somewhat removed from large-scale ocean dynamics, the asymptotic

  15. A Weight-Averaged Interpolation Method for Coupling Time-Accurate Rarefied and Continuum Flows

    NASA Astrophysics Data System (ADS)

    Diaz, Steven William

    A novel approach to coupling rarefied and continuum flow regimes as a single, hybrid model is introduced. The method borrows from techniques used in the simulation of spray flows to interpolate Lagrangian point-particles onto an Eulerian grid in a weight-averaged sense. A brief overview of traditional methods for modeling both rarefied and continuum domains is given, and a review of the literature regarding rarefied/continuum flow coupling is presented. Details of the theoretical development of the method of weighted interpolation are then described. The method evaluates macroscopic properties at the nodes of a CFD grid via the weighted interpolation of all simulated molecules in a set surrounding the node. The weight factor applied to each simulated molecule is the inverse of the linear distance between it and the given node. During development, the method was applied to several preliminary cases, including supersonic flow over an airfoil, subsonic flow over tandem airfoils, and supersonic flow over a backward facing step; all at low Knudsen numbers. The main thrust of the research centered on the time-accurate expansion of a rocket plume into a near-vacuum. The method proves flexible enough to be used with various flow solvers, demonstrated by the use of Fluent as the continuum solver for the preliminary cases and a NASA-developed Large Eddy Simulation research code, WRLES, for the full lunar model. The method is applicable to a wide range of Mach numbers and is completely grid independent, allowing the rarefied and continuum solvers to be optimized for their respective domains without consideration of the other. The work presented demonstrates the validity, and flexibility of the method of weighted interpolation as a novel concept in the field of hybrid flow coupling. The method marks a significant divergence from current practices in the coupling of rarefied and continuum flow domains and offers a kernel on which to base an ongoing field of research. It has the
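
    A minimal sketch of the interpolation step described above: each grid node averages the velocities of the simulated molecules in a surrounding set, weighted by the inverse of the particle-node distance. The cutoff radius that defines the set, the grid size, and the synthetic shear velocity field are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Weight-averaged interpolation of particle (simulated-molecule) data onto grid
# nodes: each node averages the particles in a surrounding set, weighted by the
# inverse of the particle-node distance.
rng = np.random.default_rng(3)

particles = rng.uniform(0.0, 1.0, size=(5000, 2))                 # positions in a unit box
velocities = np.column_stack([particles[:, 1], np.zeros(5000)])   # simple shear field u_x = y

nx = ny = 11
nodes = np.stack(np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny)),
                 axis=-1).reshape(-1, 2)
cutoff = 0.15                                # radius of each node's particle set

u_node = np.zeros((nodes.shape[0], 2))
for i, xn in enumerate(nodes):
    d = np.linalg.norm(particles - xn, axis=1)
    sel = d < cutoff
    w = 1.0 / np.maximum(d[sel], 1e-12)      # inverse-distance weights
    u_node[i] = (w[:, None] * velocities[sel]).sum(axis=0) / w.sum()

# the recovered x-velocity should roughly follow u_x ~ y at the node heights
print(np.round(u_node[:, 0].reshape(ny, nx)[:, 5], 2))
```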

  16. [Numerical methods for multi-fluid flows]. Final progress report

    SciTech Connect

    Pozrikidis, C.

    1998-07-21

    The central objective of this research has been to develop efficient numerical methods for computing multi-fluid flows with large interfacial deformations, and apply these methods to study the rheology of suspensions of deformable particles with viscous and non-Newtonian interfacial behavior. The mathematical formulation employs boundary-integral, immersed-boundary, and related numerical methods. Particles of interest include liquid drops with constant surface tension and capsules whose interfaces exhibit viscoelastic and incompressible characteristics. In one family of problems, the author has considered the shear-driven and pressure-driven flow of a suspension of two-dimensional liquid drops with ordered and random structure. In a second series of investigations, the author carried out dynamic simulations of two-dimensional, unbounded, doubly-periodic shear flows with random structure. Another family of problems addresses the deformation of three-dimensional capsules whose interfaces exhibit isotropic surface tension, viscous, elastic, or incompressible behavior, in simple shear flow. The numerical results extend previous asymptotic theories for small deformations and illuminate the mechanism of membrane rupture.

  17. Numerical method for wave forces acting on partially perforated caisson

    NASA Astrophysics Data System (ADS)

    Jiang, Feng; Tang, Xiao-cheng; Jin, Zhao; Zhang, Li; Chen, Hong-zhou

    2015-04-01

    The perforated caisson is widely applied in practical engineering because of its great advantages in dissipating wave energy effectively and reducing cost. Many researchers have studied the fluid-structure interaction between waves and perforated caissons, but until now most of the work has concentrated on theoretical analysis and experimental model set-up. In this paper, the interaction between waves and a partially perforated caisson in a 2D numerical wave flume is investigated by means of a renewed SPH algorithm, with the governing equations written as SPH numerical approximations of the Navier-Stokes equations. The validity of the SPH method is examined and the simulated results are compared with those of theoretical models; the complex hydrodynamic characteristics of water particles flowing into and out of the wave-absorbing chamber are analyzed, and the wave pressure distribution on the perforated caisson is also addressed. The relationship between the ratio of the total horizontal force acting on the caisson under regular waves and its influencing factors is examined, and the data show that the numerically calculated force ratio matches the empirical regression equation very well. The SPH simulations of wave nonlinearity and breaking are briefly described, suggesting that the SPH method has significant advantages and great potential compared with traditional methods.

  18. An alternative numerical method for the stationary pulsar magnetosphere

    NASA Astrophysics Data System (ADS)

    Takamori, Yohsuke; Okawa, Hirotada; Takamoto, Makoto; Suwa, Yudai

    2014-02-01

    Stationary pulsar magnetospheres in the force-free system are governed by the pulsar equation. In 1999, Contopoulos, Kazanas, and Fendt (hereafter CKF) numerically solved the pulsar equation and obtained a pulsar magnetosphere model called the CKF solution that has both closed and open magnetic field lines. The CKF solution is a successful solution, but it contains a poloidal current sheet that flows along the last open field line. This current sheet is artificially added to make the current system closed. In this paper, we suggest an alternative method to solve the pulsar equation and construct pulsar magnetosphere models without a current sheet. In our method, the pulsar equation is decomposed into Ampère's law and the force-free condition. We numerically solve these equations simultaneously with a fixed poloidal current. As a result, we obtain a pulsar magnetosphere model without a current sheet, which is similar to the CKF solution near the neutron star and has a jet-like structure at a distance along the pole. In addition, we discuss physical properties of the model and find that the force-free condition breaks down in the vicinity of the light cylinder due to dissipation that is included implicitly in the numerical method.

  19. Accurate Evaluation of Quantum Integrals

    NASA Technical Reports Server (NTRS)

    Galant, David C.; Goorvitch, D.

    1994-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
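    To make the extrapolation idea concrete, the sketch below applies repeated Richardson extrapolation to a second-order, mesh-dependent quantity Q(h). The quantity used here (a composite trapezoid estimate of a model integral) is only a stand-in for the expectation values of the report, and the mesh sizes are assumptions.

        import numpy as np

        def Q(h):
            # Second-order accurate, mesh-dependent estimate of a model quantity:
            # composite trapezoid rule for the integral of x^2 exp(-x^2) on [0, 4].
            x = np.arange(0.0, 4.0 + h / 2, h)
            f = x**2 * np.exp(-x**2)
            return h * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

        def richardson(values, order=2):
            # Repeated Richardson extrapolation of Q(h), Q(h/2), Q(h/4), ...
            # assuming an error expansion in even powers of h.
            T, p = [list(values)], order
            while len(T[-1]) > 1:
                prev = T[-1]
                T.append([(2**p * prev[i + 1] - prev[i]) / (2**p - 1)
                          for i in range(len(prev) - 1)])
                p += order
            return T[-1][0]

        hs = [0.4 / 2**k for k in range(4)]
        print([Q(h) for h in hs])                        # raw second-order estimates
        print("extrapolated:", richardson([Q(h) for h in hs]))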

  20. The instanton method and its numerical implementation in fluid mechanics

    NASA Astrophysics Data System (ADS)

    Grafke, Tobias; Grauer, Rainer; Schäfer, Tobias

    2015-08-01

    A precise characterization of structures occurring in turbulent fluid flows at high Reynolds numbers is one of the last open problems of classical physics. In this review we discuss recent developments related to the application of instanton methods to turbulence. Instantons are saddle point configurations of the underlying path integrals. They are equivalent to minimizers of the related Freidlin-Wentzell action and are known to be able to characterize rare events in such systems. While there is an impressive body of work concerning their analytical description, this review focuses on the question of how to compute these minimizers numerically. In a short introduction we present the relevant mathematical and physical background before we discuss the stochastic Burgers equation in detail. We present algorithms to compute instantons numerically by an efficient solution of the corresponding Euler-Lagrange equations. A second focus is the discussion of a recently developed numerical filtering technique that allows instantons to be extracted from direct numerical simulations. In the following we present modifications of the algorithms to make them efficient when applied to two- or three-dimensional (2D or 3D) fluid dynamical problems. We illustrate these ideas using the 2D Burgers equation and the 3D Navier-Stokes equations.

  1. Accurate Ionization Potentials and Electron Affinities of Acceptor Molecules III: A Benchmark of GW Methods.

    PubMed

    Knight, Joseph W; Wang, Xiaopeng; Gallandi, Lukas; Dolgounitcheva, Olga; Ren, Xinguo; Ortiz, J Vincent; Rinke, Patrick; Körzdörfer, Thomas; Marom, Noa

    2016-02-01

    The performance of different GW methods is assessed for a set of 24 organic acceptors. Errors are evaluated with respect to coupled cluster singles, doubles, and perturbative triples [CCSD(T)] reference data for the vertical ionization potentials (IPs) and electron affinities (EAs), extrapolated to the complete basis set limit. Additional comparisons are made to experimental data, where available. We consider fully self-consistent GW (scGW), partial self-consistency in the Green's function (scGW0), non-self-consistent G0W0 based on several mean-field starting points, and a "beyond GW" second-order screened exchange (SOSEX) correction to G0W0. We also describe the implementation of the self-consistent Coulomb hole with screened exchange method (COHSEX), which serves as one of the mean-field starting points. The best performers overall are G0W0+SOSEX and G0W0 based on an IP-tuned long-range corrected hybrid functional with the former being more accurate for EAs and the latter for IPs. Both provide a balanced treatment of localized vs delocalized states and valence spectra in good agreement with photoemission spectroscopy (PES) experiments. PMID:26731609

  2. A Statistical Method for Assessing Peptide Identification Confidence in Accurate Mass and Time Tag Proteomics

    SciTech Connect

    Stanley, Jeffrey R.; Adkins, Joshua N.; Slysz, Gordon W.; Monroe, Matthew E.; Purvine, Samuel O.; Karpievitch, Yuliya V.; Anderson, Gordon A.; Smith, Richard D.; Dabney, Alan R.

    2011-07-15

    High-throughput proteomics is rapidly evolving to require high mass measurement accuracy for a variety of different applications. Increased mass measurement accuracy in bottom-up proteomics specifically allows for an improved ability to distinguish and characterize detected MS features, which may in turn be identified by, e.g., matching to entries in a database for both precursor and fragmentation mass identification methods. Many tools exist with which to score the identification of peptides from LC-MS/MS measurements or to assess matches to an accurate mass and time (AMT) tag database, but these two calculations remain distinctly unrelated. Here we present a statistical method, Statistical Tools for AMT tag Confidence (STAC), which extends our previous work incorporating prior probabilities of correct sequence identification from LC-MS/MS, as well as the quality with which LC-MS features match AMT tags, to evaluate peptide identification confidence. Compared to existing tools, we are able to obtain significantly more high-confidence peptide identifications at a given false discovery rate and additionally assign confidence estimates to individual peptide identifications. Freely available software implementations of STAC are available in both command line and as a Windows graphical application.

  3. Obtaining accurate amounts of mercury from mercury compounds via electrolytic methods

    DOEpatents

    Grossman, Mark W.; George, William A.

    1987-01-01

    A process for obtaining pre-determined, accurate amounts of mercury. In one embodiment, predetermined, precise amounts of Hg are separated from HgO and plated onto a cathode wire. The method for doing this involves dissolving a precise amount of HgO, corresponding to the pre-determined amount of Hg desired, in an electrolyte solution comprised of glacial acetic acid and H2O. The mercuric ions are then electrolytically reduced and plated onto a cathode, producing the required pre-determined quantity of Hg. In another embodiment, pre-determined, precise amounts of Hg are obtained from Hg2Cl2. The method for doing this involves dissolving a precise amount of Hg2Cl2 in an electrolyte solution comprised of concentrated HCl and H2O. The mercurous ions in solution are then electrolytically reduced and plated onto a cathode wire, producing the required, pre-determined quantity of Hg.
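    The patent abstract gives no numbers, but the underlying electrochemistry follows Faraday's law. The hedged sketch below (an illustration only; the target mass, current, and 100% current efficiency are assumptions, not claims of the patent) converts a desired Hg mass into the amount of HgO to dissolve and the electrolysis time at constant current, using two electrons per mercuric ion reduced.

        F = 96485.0      # Faraday constant, C per mol of electrons
        M_HG = 200.59    # molar mass of mercury, g/mol

        def plating_plan(mass_hg_g, current_a):
            # Moles of HgO to dissolve and electrolysis time needed to plate
            # mass_hg_g of Hg, assuming Hg(2+) + 2 e- -> Hg(0) and ideal efficiency.
            n_hg = mass_hg_g / M_HG          # mol of Hg desired (1:1 with HgO)
            charge = 2.0 * F * n_hg          # coulombs of charge required
            return {"mol_HgO": n_hg, "charge_C": charge, "time_s": charge / current_a}

        print(plating_plan(mass_hg_g=0.010, current_a=0.005))   # 10 mg Hg at 5 mA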

  4. Obtaining accurate amounts of mercury from mercury compounds via electrolytic methods

    DOEpatents

    Grossman, M.W.; George, W.A.

    1987-07-07

    A process is described for obtaining pre-determined, accurate amounts of mercury. In one embodiment, predetermined, precise amounts of Hg are separated from HgO and plated onto a cathode wire. The method for doing this involves dissolving a precise amount of HgO, corresponding to the pre-determined amount of Hg desired, in an electrolyte solution comprised of glacial acetic acid and H2O. The mercuric ions are then electrolytically reduced and plated onto a cathode, producing the required pre-determined quantity of Hg. In another embodiment, pre-determined, precise amounts of Hg are obtained from Hg2Cl2. The method for doing this involves dissolving a precise amount of Hg2Cl2 in an electrolyte solution comprised of concentrated HCl and H2O. The mercurous ions in solution are then electrolytically reduced and plated onto a cathode wire, producing the required, pre-determined quantity of Hg. 1 fig.

  5. Methods for accurate cold-chain temperature monitoring using digital data-logger thermometers

    NASA Astrophysics Data System (ADS)

    Chojnacky, M. J.; Miller, W. M.; Strouse, G. F.

    2013-09-01

    Complete and accurate records of vaccine temperature history are vital to preserving drug potency and patient safety. However, previously published vaccine storage and handling guidelines have failed to indicate a need for continuous temperature monitoring in vaccine storage refrigerators. We evaluated the performance of seven digital data logger models as candidates for continuous temperature monitoring of refrigerated vaccines, based on the following criteria: out-of-box performance and compliance with manufacturer accuracy specifications over the range of use; measurement stability over extended, continuous use; proper setup in a vaccine storage refrigerator so that measurements reflect liquid vaccine temperatures; and practical methods for end-user validation and establishing metrological traceability. Data loggers were tested using ice melting point checks and by comparison to calibrated thermocouples to characterize performance over 0 °C to 10 °C. We also monitored logger performance in a study designed to replicate the range of vaccine storage and environmental conditions encountered at provider offices. Based on the results of this study, the Centers for Disease Control released new guidelines on proper methods for storage, handling, and temperature monitoring of vaccines for participants in its federally-funded Vaccines for Children Program. Improved temperature monitoring practices will ultimately decrease waste from damaged vaccines, improve consumer confidence, and increase effective inoculation rates.

  6. Accurate method to study static volume-pressure relationships in small fetal and neonatal animals.

    PubMed

    Suen, H C; Losty, P D; Donahoe, P K; Schnitzer, J J

    1994-08-01

    We designed an accurate method to study respiratory static volume-pressure relationships in small fetal and neonatal animals on the basis of Archimedes' principle. Our method eliminates the error caused by the compressibility of air (Boyle's law) and is sensitive to a volume change of as little as 1 microliter. Fetal and neonatal rats during the period of rapid lung development from day 19.5 of gestation (term = day 22) to day 3.5 postnatum were studied. The absolute lung volume at a transrespiratory pressure of 30-40 cmH2O increased 28-fold from 0.036 +/- 0.006 (SE) to 0.994 +/- 0.042 ml, the volume per gram of lung increased 14-fold from 0.39 +/- 0.07 to 5.59 +/- 0.66 ml/g, compliance increased 12-fold from 2.3 +/- 0.4 to 27.3 +/- 2.7 microliters/cmH2O, and specific compliance increased 6-fold from 24.9 +/- 4.5 to 152.3 +/- 22.8 microliters · cmH2O^-1 · g lung^-1. This technique, which allowed us to compare changes during late gestation and the early neonatal period in small rodents, can be used to monitor and evaluate pulmonary functional changes after in utero pharmacological therapies in experimentally induced abnormalities such as pulmonary hypoplasia, surfactant deficiency, and congenital diaphragmatic hernia. PMID:8002489
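    The abstract does not spell out the apparatus, but the conversion at the heart of an Archimedes-based measurement is simple: a change dV in displaced volume changes the buoyant force by rho*g*dV. The sketch below is a hypothetical illustration with an assumed bath density, not the authors' protocol.

        RHO_BATH = 1.005e3    # kg/m^3, assumed density of the immersion fluid
        G = 9.81              # m/s^2

        def volume_change_ul(delta_force_n):
            # Volume change (microliters) inferred from a measured change in
            # buoyant force (newtons): dV = dF / (rho * g).
            dv_m3 = delta_force_n / (RHO_BATH * G)
            return dv_m3 * 1e9   # m^3 -> microliters

        # A ~10 micronewton force change (about 1 mg on a balance) corresponds
        # to roughly the stated 1-microliter sensitivity:
        print(volume_change_ul(9.86e-6))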

  7. Accurate computation of surface stresses and forces with immersed boundary methods

    NASA Astrophysics Data System (ADS)

    Goza, Andres; Liska, Sebastian; Morley, Benjamin; Colonius, Tim

    2016-09-01

    Many immersed boundary methods solve for surface stresses that impose the velocity boundary conditions on an immersed body. These surface stresses may contain spurious oscillations that make them ill-suited for representing the physical surface stresses on the body. Moreover, these inaccurate stresses often lead to unphysical oscillations in the history of integrated surface forces such as the coefficient of lift. While the errors in the surface stresses and forces do not necessarily affect the convergence of the velocity field, it is desirable, especially in fluid-structure interaction problems, to obtain smooth and convergent stress distributions on the surface. To this end, we show that the equation for the surface stresses is an integral equation of the first kind whose ill-posedness is the source of spurious oscillations in the stresses. We also demonstrate that for sufficiently smooth delta functions, the oscillations may be filtered out to obtain physically accurate surface stresses. The filtering is applied as a post-processing procedure, so that the convergence of the velocity field is unaffected. We demonstrate the efficacy of the method by computing stresses and forces that converge to the physical stresses and forces for several test problems.

  8. Methods for accurate estimation of net discharge in a tidal channel

    USGS Publications Warehouse

    Simpson, M.R.; Bland, R.

    2000-01-01

    Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water slope measurements often yielded errors that were as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of: 1) the use of ultrasonic instruments for the measurement of a representative 'index' velocity used for in situ estimation of mean water velocity and 2) the use of the acoustic Doppler current discharge measurement system to calibrate the index velocity measurement data. Methods used to calibrate (rate) the index velocity to the channel velocity measured using the Acoustic Doppler Current Profiler are the most critical factors affecting the accuracy of net discharge estimation. The index velocity first must be related to mean channel velocity and then used to calculate instantaneous channel discharge. Finally, discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. Two sets of data were collected during a spring tide (monthly maximum tidal current) and one set was collected during a neap tide (monthly minimum tidal current). The relative magnitudes of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three
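    The index-velocity rating and tidal filtering described above can be sketched in a few lines. The snippet below is illustrative only: the calibration pairs, channel area, velocity record, and averaging window are synthetic assumptions, not the Sacramento-San Joaquin data.

        import numpy as np

        # 1. Rate the index velocity against ADCP mean channel velocity (synthetic pairs).
        v_index = np.array([-0.9, -0.5, -0.1, 0.2, 0.6, 1.0])         # m/s, index velocity
        v_mean = np.array([-0.80, -0.44, -0.08, 0.19, 0.55, 0.91])    # m/s, ADCP mean velocity
        slope, intercept = np.polyfit(v_index, v_mean, 1)             # linear rating

        # 2. Apply the rating to a continuous index-velocity record (15-min samples, 30 days).
        t = np.arange(0, 30 * 24 * 4) * 900.0
        v_record = 1.0 * np.sin(2 * np.pi * t / (12.42 * 3600)) + 0.05   # M2 tide + small net flow
        area = 2500.0                                                    # channel area, m^2 (assumed)
        q_inst = (slope * v_record + intercept) * area                   # instantaneous discharge

        # 3. Low-pass filter (about a 25-hour moving average) to remove the tides.
        win = int(25 * 3600 / 900)
        q_net = np.convolve(q_inst, np.ones(win) / win, mode="valid")
        print("net discharge estimate:", q_net.mean(), "m^3/s")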

  9. SIESTA-PEXSI: Massively parallel method for efficient and accurate ab initio materials simulation

    NASA Astrophysics Data System (ADS)

    Lin, Lin; Huhs, Georg; Garcia, Alberto; Yang, Chao

    2014-03-01

    We describe how to combine the pole expansion and selected inversion (PEXSI) technique with the SIESTA method, which uses numerical atomic orbitals for Kohn-Sham density functional theory (KSDFT) calculations. The PEXSI technique can efficiently utilize the sparsity pattern of the Hamiltonian matrix and the overlap matrix generated from codes such as SIESTA, and solves KSDFT without using a cubic scaling matrix diagonalization procedure. The complexity of PEXSI scales at most quadratically with respect to the system size, and the accuracy is comparable to that obtained from full diagonalization. One distinct feature of PEXSI is that it achieves low order scaling without using the near-sightedness property and can therefore be applied to metals as well as insulators and semiconductors, at room temperature or even lower temperature. The PEXSI method is highly scalable, and the recently developed massively parallel PEXSI technique can make efficient use of 10,000-100,000 processors on high performance machines. We demonstrate the performance of the SIESTA-PEXSI method using several examples of large scale electronic structure calculation, including a long DNA chain and graphene-like structures with more than 20,000 atoms. Funded by Luis Alvarez fellowship in LBNL, and DOE SciDAC project in partnership with BES.

  10. Dielectric Boundary Forces in Numerical Poisson-Boltzmann Methods: Theory and Numerical Strategies.

    PubMed

    Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray

    2011-10-01

    Continuum modeling of electrostatic interactions based upon the numerical solutions of the Poisson-Boltzmann equation has been widely adopted in biomolecular applications. To extend their applications to molecular dynamics and energy minimization, robust and efficient methodologies to compute solvation forces must be developed. In this study, we have first reviewed the theory for the computation of dielectric boundary forces based on the definition of the Maxwell stress tensor. This is followed by a new formulation of the dielectric boundary force suitable for the finite-difference Poisson-Boltzmann methods. We have validated the new formulation with idealized analytical systems and realistic molecular systems. PMID:22125339

  11. Dielectric boundary force in numerical Poisson-Boltzmann methods: Theory and numerical strategies

    NASA Astrophysics Data System (ADS)

    Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray

    2011-10-01

    Continuum modeling of electrostatic interactions based upon the numerical solutions of the Poisson-Boltzmann equation has been widely adopted in biomolecular applications. To extend their applications to molecular dynamics and energy minimization, robust and efficient methodologies to compute solvation forces must be developed. In this study, we have first reviewed the theory for the computation of dielectric boundary force based on the definition of the Maxwell stress tensor. This is followed by a new formulation of the dielectric boundary force suitable for the finite-difference Poisson-Boltzmann methods. We have validated the new formulation with idealized analytical systems and realistic molecular systems.

  12. Numerical Method for Darcy Flow Derived Using Discrete Exterior Calculus

    NASA Astrophysics Data System (ADS)

    Hirani, A. N.; Nakshatrala, K. B.; Chaudhry, J. H.

    2015-05-01

    We derive a numerical method for Darcy flow, and also for Poisson's equation in mixed (first order) form, based on discrete exterior calculus (DEC). Exterior calculus is a generalization of vector calculus to smooth manifolds and DEC is one of its discretizations on simplicial complexes such as triangle and tetrahedral meshes. DEC is a coordinate invariant discretization, in that it does not depend on the embedding of the simplices or the whole mesh. We start by rewriting the governing equations of Darcy flow using the language of exterior calculus. This yields a formulation in terms of flux differential form and pressure. The numerical method is then derived by using the framework provided by DEC for discretizing differential forms and operators that act on forms. We also develop a discretization for a spatially dependent Hodge star that varies with the permeability of the medium. This also allows us to address discontinuous permeability. The matrix representation for our discrete non-homogeneous Hodge star is diagonal, with positive diagonal entries. The resulting linear system of equations for flux and pressure is of saddle type, with a diagonal matrix as the top left block. The performance of the proposed numerical method is illustrated on many standard test problems. These include patch tests in two and three dimensions, comparison with analytically known solutions in two dimensions, layered medium with alternating permeability values, and a test with a change in permeability along the flow direction. We also show numerical evidence of convergence of the flux and the pressure. A convergence experiment is included for Darcy flow on a surface. A short introduction to the relevant parts of smooth and discrete exterior calculus is included in this article. We also include a discussion of the boundary condition in terms of exterior calculus.

  13. High-performance Integrated numerical methods for Two-phase Flow in Heterogeneous Porous Media

    NASA Astrophysics Data System (ADS)

    Chueh, Chih-Che; Djilali, Ned; Bangerth, Wolfgang

    2010-11-01

    Modelling of two-phase flow in heterogeneous porous media has played a decisive role in a variety of areas. However, how to efficiently and accurately solve the governing equations of flow in porous media remains a challenge. In order to ensure an accurate representation of the flow field and simultaneously increase the computational efficiency, we incorporate a number of state-of-the-art techniques into a numerical framework on which more complicated models in the field of multi-phase flow in porous media will be based. This numerical framework consists of an h-adaptive refinement method, an entropy-based artificial diffusive term, a new adaptive operator splitting method and efficient preconditioners. In particular, we propose a new efficient adaptive operator splitting that avoids solving the time-consuming pressure-velocity part at every saturation time step and, most importantly, we also provide a theoretical numerical analysis as well as a proof. A few benchmarks will be demonstrated in the presentation.

  14. Teaching Thermal Hydraulics & Numerical Methods: An Introductory Control Volume Primer

    SciTech Connect

    Lucas, D.S.

    2004-10-03

    This paper covers the basics of the implementation of the control volume method in the context of the Homogeneous Equilibrium Model (HEM) T/H code using the conservation equations of mass, momentum, and energy. This primer uses the advection equation as a template. The discussion covers the basic equations of the control volume portion of the course, including the advection equation and the associated numerical methods, along with the implementation of the various equations in FORTRAN and the final result: a three-equation HEM code and its validation.
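    As a minimal counterpart to the primer's FORTRAN exercises, the sketch below discretizes the advection-equation template with a first-order upwind control-volume scheme (Python here for brevity; grid size, CFL number, and initial pulse are assumptions).

        import numpy as np

        # Control-volume (first-order upwind) scheme for u_t + a u_x = 0, a > 0,
        # on a periodic domain: the face flux is taken from the upwind (left) cell.
        a, L, n = 1.0, 1.0, 200
        dx = L / n
        dt = 0.5 * dx / a                        # CFL number 0.5
        x = (np.arange(n) + 0.5) * dx
        u = np.exp(-200.0 * (x - 0.3) ** 2)      # initial Gaussian pulse

        for _ in range(int(0.4 / dt)):
            flux_in = a * np.roll(u, 1)              # flux through the left face
            u = u - dt / dx * (a * u - flux_in)      # out-flux minus in-flux

        print("pulse peak after advection:", u.max())   # smeared by upwind diffusion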

  15. Numerical methods for scattering from electrically large objects

    NASA Astrophysics Data System (ADS)

    Enguist, Bjorn; Murphy, W. D.; Rokhlin, Vladimir; Vassiliou, Marius S.

    1991-05-01

    A new and computationally very efficient integral equation numerical method for computing electromagnetic scattering and radar cross section (RCS) was developed. A theory of higher order impedance boundary conditions was derived to handle single and multiple dielectric coatings around conductors. The method was tested in two dimensions using a 14,000-line FORTRAN program and was found to be very promising for electrically large objects. Initial ideas for extensions to three dimensions were explored. Treatments of trailing edge and corner singularities were developed.

  16. Integrated numerical methods for hypersonic aircraft cooling systems analysis

    NASA Technical Reports Server (NTRS)

    Petley, Dennis H.; Jones, Stuart C.; Dziedzic, William M.

    1992-01-01

    Numerical methods have been developed for the analysis of hypersonic aircraft cooling systems. A general purpose finite difference thermal analysis code is used to determine areas which must be cooled. Complex cooling networks of series and parallel flow can be analyzed using a finite difference computer program. Both internal fluid flow and heat transfer are analyzed, because increased heat flow causes a decrease in the flow of the coolant. The steady-state solution is obtained with a successive point iterative method. The transient analysis uses implicit forward-backward differencing. Several examples of the use of the program in studies of hypersonic aircraft and rockets are provided.

  17. Left Ventricular Flow Analysis: Recent Advances in Numerical Methods and Applications in Cardiac Ultrasound

    PubMed Central

    Borazjani, Iman; Westerdale, John; McMahon, Eileen M.; Rajaraman, Prathish K.; Heys, Jeffrey J.

    2013-01-01

    The left ventricle (LV) pumps oxygenated blood from the lungs to the rest of the body through systemic circulation. The efficiency of such a pumping function is dependent on blood flow within the LV chamber. It is therefore crucial to accurately characterize LV hemodynamics. Improved understanding of LV hemodynamics is expected to provide important clinical diagnostic and prognostic information. We review the recent advances in numerical and experimental methods for characterizing LV flows and focus on analysis of intraventricular flow fields by echocardiographic particle image velocimetry (echo-PIV), due to its potential for broad and practical utility. Future research directions to advance patient-specific LV simulations include development of methods capable of resolving heart valves, higher temporal resolution, automated generation of three-dimensional (3D) geometry, and incorporating actual flow measurements into the numerical solution of the 3D cardiovascular fluid dynamics. PMID:23690874

  18. Numerical methods for the Poisson-Fermi equation in electrolytes

    NASA Astrophysics Data System (ADS)

    Liu, Jinn-Liang

    2013-08-01

    The Poisson-Fermi equation proposed by Bazant, Storey, and Kornyshev [Phys. Rev. Lett. 106 (2011) 046102] for ionic liquids is applied to and numerically studied for electrolytes and biological ion channels in three-dimensional space. This is a fourth-order nonlinear PDE that deals with both steric and correlation effects of all ions and solvent molecules involved in a model system. The Fermi distribution follows from classical lattice models of configurational entropy of finite size ions and solvent molecules and hence avoids the long-standing problem of unphysical divergence predicted by the Gouy-Chapman model at large potentials due to the Boltzmann distribution of point charges. The equation reduces to Poisson-Boltzmann if the correlation length vanishes. A simplified matched interface and boundary method exhibiting optimal convergence is first developed for this equation by using a gramicidin A channel model that illustrates challenging issues associated with the geometric singularities of molecular surfaces of channel proteins in realistic 3D simulations. Various numerical methods then follow to tackle a range of numerical problems concerning the fourth-order term, nonlinearity, stability, efficiency, and effectiveness. The most significant feature of the Poisson-Fermi equation, namely, its inclusion of steric and correlation effects, is demonstrated by showing good agreement with Monte Carlo simulation data for a charged wall model and an L-type calcium channel model.

  19. Optimization methods and silicon solar cell numerical models

    NASA Technical Reports Server (NTRS)

    Girardini, K.; Jacobsen, S. E.

    1986-01-01

    An optimization algorithm for use with numerical silicon solar cell models was developed. By coupling an optimization algorithm with a solar cell model, it is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm was developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the performance of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem was alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution.
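    A toy version of this coupling is sketched below: an inexpensive mock "cell efficiency" stands in for a SCAP1D run (the function, design variables, and optimizer settings are assumptions), and a derivative-free optimizer is used so that no extra model calls are spent on finite-difference gradients.

        import numpy as np
        from scipy.optimize import minimize

        calls = 0

        def cell_efficiency(design):
            # Mock stand-in for an expensive SCAP1D evaluation (illustrative only).
            # design = [log10 doping, front junction depth (um), cell thickness (um)]
            global calls
            calls += 1
            doping, xj, w = design
            return (20.0 - 0.8 * (doping - 19.0) ** 2
                         - 2.0 * (xj - 0.3) ** 2
                         - 0.002 * (w - 250.0) ** 2)

        # Maximize efficiency by minimizing its negative; Nelder-Mead is derivative-free.
        res = minimize(lambda d: -cell_efficiency(d),
                       x0=np.array([18.0, 0.5, 300.0]),
                       method="Nelder-Mead",
                       options={"xatol": 1e-3, "fatol": 1e-4})
        print("optimum design:", res.x, " efficiency:", -res.fun, " model calls:", calls)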

  20. Numerical integration of population models satisfying conservation laws: NSFD methods.

    PubMed

    Mickens, Ronald E

    2007-10-01

    Population models arising in ecology, epidemiology and mathematical biology may involve a conservation law, i.e. the total population is constant. In addition to these cases, other situations may occur for which the total population, asymptotically in time, approaches a constant value. Since it is rarely the case that the equations of motion can be solved analytically to obtain exact solutions, numerical techniques are needed to provide solutions. However, numerical procedures are only valid if they can reproduce fundamental properties of the differential equations modeling the phenomena of interest. We show that for population models involving a dynamical conservation law, the use of nonstandard finite difference (NSFD) methods allows the construction of discretization schemes that are dynamically consistent (DC) with the original differential equations. The paper briefly discusses the NSFD methodology and the concept of DC, and illustrates their application to specific population models. PMID:22876826
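    As a hedged illustration of dynamic consistency (not taken from the paper), the sketch below builds an NSFD scheme for the SIR epidemic model in which the nonlocal, semi-implicit approximations are chosen so that the discrete total population S+I+R is conserved exactly and positivity is preserved for any step size; parameter values are assumptions.

        def nsfd_sir(S0, I0, R0, beta, gamma, h, steps):
            # Nonstandard finite-difference scheme in the spirit of Mickens:
            #   S_{k+1} = S_k / (1 + h beta I_k)
            #   I_{k+1} = (I_k + h beta S_{k+1} I_k) / (1 + h gamma)
            #   R_{k+1} = R_k + h gamma I_{k+1}
            # The three updates sum to zero net change, so S+I+R is conserved exactly.
            S, I, R = float(S0), float(I0), float(R0)
            for _ in range(steps):
                S = S / (1.0 + h * beta * I)
                I = (I + h * beta * S * I) / (1.0 + h * gamma)
                R = R + h * gamma * I
            return S, I, R

        S, I, R = nsfd_sir(999.0, 1.0, 0.0, beta=0.001, gamma=0.1, h=1.0, steps=200)
        print(S, I, R, "total =", S + I + R)   # total stays at 1000 (to round-off)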

  1. A second-order accurate immersed boundary-lattice Boltzmann method for particle-laden flows

    SciTech Connect

    Zhou, Qiang; Fan, Liang-Shih

    2014-07-01

    . This finding may lead to more comprehensive studies of the effect of the particle rotation on fluid–solid drag laws. It is also demonstrated that, when the third-order or the fourth-order Runge–Kutta scheme is used, the numerical stability of the present IB-LBM is better than that of all methods in the literature, including the previous IB-LBMs and also the methods with the combination of the IBM and the traditional incompressible Navier–Stokes solver. - Highlights: • The IBM is embedded in the LBM using Runge–Kutta time schemes. • The effectiveness of the present IB-LBM is validated by benchmark applications. • For the first time, the IB-LBM achieves the second-order accuracy. • The numerical stability of the present IB-LBM is better than previous methods.

  2. Time-dependent corona models - A numerical method

    NASA Astrophysics Data System (ADS)

    Korevaar, P.; van Leer, B.

    1988-07-01

    A time-dependent numerical method for calculating gas flows is described. The method is implicit and especially suitable for finding stationary flow solutions. Although the method is general in its application to ideal compressible fluids, this paper applies it to a stellar atmosphere, heated to coronal temperatures by dissipation of mechanical energy. The integration scheme is based on conservative upwind spatial differencing. The upwind switching is provided by Van Leer's method of differentiable flux-splitting. It is shown that the code can handle large differences in density: up to 14 orders of magnitude. Special attention is paid to the boundary conditions, which are made completely transparent to disturbances. Besides some test-results, converged solutions for various values of the initial mechanical flux are presented which are in good agreement with previous time-independent calculations.

  3. Simple and efficient methods for the accurate evaluation of patterning effects in ultrafast photonic switches.

    PubMed

    Xu, Jing; Ding, Yunhong; Peucheret, Christophe; Xue, Weiqi; Seoane, Jorge; Zsigri, Beáta; Jeppesen, Palle; Mørk, Jesper

    2011-01-01

    Although patterning effects (PEs) are known to be a limiting factor of ultrafast photonic switches based on semiconductor optical amplifiers (SOAs), a simple approach for their evaluation in numerical simulations and experiments is missing. In this work, we experimentally investigate and verify a theoretical prediction of the pseudo-random binary sequence (PRBS) length needed to capture the full impact of PEs. A wide range of SOAs and operation conditions are investigated. The very simple form of the PRBS length condition highlights the role of two parameters, i.e. the recovery time of the SOAs and the operation bit rate. Furthermore, a simple and effective method for probing the maximum PEs is demonstrated, which may relieve the computational effort or the experimental difficulties associated with the use of long PRBSs for the simulation or characterization of SOA-based switches. Good agreement with conventional PRBS characterization is obtained. The method is suitable for quick and systematic estimation and optimization of the switching performance. PMID:21263552

  4. Advanced numerical methods in mesh generation and mesh adaptation

    SciTech Connect

    Lipnikov, Konstantine; Danilov, A; Vassilevski, Y; Agonzal, A

    2010-01-01

    Numerical solution of partial differential equations requires appropriate meshes, efficient solvers and robust and reliable error estimates. Generation of high-quality meshes for complex engineering models is a non-trivial task. This task is made more difficult when the mesh has to be adapted to a problem solution. This article is focused on a synergistic approach to the mesh generation and mesh adaptation, where best properties of various mesh generation methods are combined to build efficiently simplicial meshes. First, the advancing front technique (AFT) is combined with the incremental Delaunay triangulation (DT) to build an initial mesh. Second, the metric-based mesh adaptation (MBA) method is employed to improve quality of the generated mesh and/or to adapt it to a problem solution. We demonstrate with numerical experiments that combination of all three methods is required for robust meshing of complex engineering models. The key to successful mesh generation is the high-quality of the triangles in the initial front. We use a black-box technique to improve surface meshes exported from an unattainable CAD system. The initial surface mesh is refined into a shape-regular triangulation which approximates the boundary with the same accuracy as the CAD mesh. The DT method adds robustness to the AFT. The resulting mesh is topologically correct but may contain a few slivers. The MBA uses seven local operations to modify the mesh topology. It improves significantly the mesh quality. The MBA method is also used to adapt the mesh to a problem solution to minimize computational resources required for solving the problem. The MBA has a solid theoretical background. In the first two experiments, we consider the convection-diffusion and elasticity problems. We demonstrate the optimal reduction rate of the discretization error on a sequence of adaptive strongly anisotropic meshes. The key element of the MBA method is construction of a tensor metric from hierarchical edge

  5. Accurate reliability analysis method for quantum-dot cellular automata circuits

    NASA Astrophysics Data System (ADS)

    Cui, Huanqing; Cai, Li; Wang, Sen; Liu, Xiaoqiang; Yang, Xiaokuo

    2015-10-01

    Probabilistic transfer matrix (PTM) is a widely used model in the reliability research of circuits. However, the PTM model cannot reflect the impact of input signals on reliability, so it does not completely conform to the mechanism of the novel field-coupled nanoelectronic device called quantum-dot cellular automata (QCA). It is difficult to get accurate results when the PTM model is used to analyze the reliability of QCA circuits. To solve this problem, we present fault tree models of fundamental QCA devices according to different input signals. After that, the binary decision diagram (BDD) is used to quantitatively investigate the reliability of two QCA XOR gates depending on the presented models. By employing the fault tree models, the impact of input signals on reliability can be identified clearly and the crucial components of a circuit can be found precisely based on the importance values (IVs) of components. This method thus contributes to the construction of reliable QCA circuits.

  6. Accurate methods for computing inviscid and viscous Kelvin-Helmholtz instability

    NASA Astrophysics Data System (ADS)

    Chen, Michael J.; Forbes, Lawrence K.

    2011-02-01

    The Kelvin-Helmholtz instability is modelled for inviscid and viscous fluids. Here, two bounded fluid layers flow parallel to each other with the interface between them growing in an unstable fashion when subjected to a small perturbation. In the various configurations of this problem, and the related problem of the vortex sheet, there are several phenomena associated with the evolution of the interface; notably the formation of a finite time curvature singularity and the 'roll-up' of the interface. Two contrasting computational schemes will be presented. A spectral method is used to follow the evolution of the interface in the inviscid version of the problem. This allows the interface shape to be computed up to the time that a curvature singularity forms, with several computational difficulties overcome to reach that point. A weakly compressible viscous version of the problem is studied using finite difference techniques and a vorticity-streamfunction formulation. The two versions have comparable, but not identical, initial conditions and so the results exhibit some differences in timing. By including a small amount of viscosity the interface may be followed to the point that it rolls up into a classic 'cat's-eye' shape. Particular attention was given to computing a consistent initial condition and solving the continuity equation both accurately and efficiently.

  7. Method for accurate sizing of pulmonary vessels from 3D medical images

    NASA Astrophysics Data System (ADS)

    O'Dell, Walter G.

    2015-03-01

    Detailed characterization of vascular anatomy, in particular the quantification of changes in the distribution of vessel sizes and of vascular pruning, is essential for the diagnosis and management of a variety of pulmonary vascular diseases and for the care of cancer survivors who have received radiation to the thorax. Clinical estimates of vessel radii are typically based on setting a pixel intensity threshold and counting how many "On" pixels are present across the vessel cross-section. A more objective approach introduced recently involves fitting the image with a library of spherical Gaussian filters and utilizing the size of the best matching filter as the estimate of vessel diameter. However, both these approaches have significant accuracy limitations including mis-match between a Gaussian intensity distribution and that of real vessels. Here we introduce and demonstrate a novel approach for accurate vessel sizing using 3D appearance models of a tubular structure along a curvilinear trajectory in 3D space. The vessel branch trajectories are represented with cubic Hermite splines and the tubular branch surfaces represented as a finite element surface mesh. An iterative parameter adjustment scheme is employed to optimally match the appearance models to a patient's chest X-ray computed tomography (CT) scan to generate estimates for branch radii and trajectories with subpixel resolution. The method is demonstrated on pulmonary vasculature in an adult human CT scan, and on 2D simulated test cases.
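    The centerline representation mentioned above can be illustrated with a single cubic Hermite segment. The sketch below (endpoints and tangents are made-up values; the paper's fitting of tangents and radii to CT data is not reproduced) evaluates one segment of a branch trajectory.

        import numpy as np

        def hermite_point(p0, p1, m0, m1, t):
            # Point on a cubic Hermite segment at parameter t in [0, 1], given
            # endpoint positions p0, p1 and endpoint tangent vectors m0, m1.
            t2, t3 = t * t, t * t * t
            h00 = 2 * t3 - 3 * t2 + 1
            h10 = t3 - 2 * t2 + t
            h01 = -2 * t3 + 3 * t2
            h11 = t3 - t2
            return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

        p0, p1 = np.array([0.0, 0.0, 0.0]), np.array([10.0, 4.0, 2.0])   # mm (assumed)
        m0, m1 = np.array([8.0, 0.0, 0.0]), np.array([8.0, 4.0, 0.0])
        centerline = np.array([hermite_point(p0, p1, m0, m1, t)
                               for t in np.linspace(0.0, 1.0, 11)])
        print(centerline[0], centerline[-1])   # reproduces the segment endpoints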

  8. Novel Parallel Numerical Methods for Radiation & Neutron Transport

    SciTech Connect

    Brown, P N

    2001-03-06

    In many of the multiphysics simulations performed at LLNL, transport calculations can take up 30 to 50% of the total run time. If Monte Carlo methods are used, the percentage can be as high as 80%. Thus, a significant core competence in the formulation, software implementation, and solution of the numerical problems arising in transport modeling is essential to Laboratory and DOE research. In this project, we worked on developing scalable solution methods for the equations that model the transport of photons and neutrons through materials. Our goal was to reduce the transport solve time in these simulations by means of more advanced numerical methods and their parallel implementations. These methods must be scalable, that is, the time to solution must remain constant as the problem size grows and additional computer resources are used. For iterative methods, scalability requires that (1) the number of iterations to reach convergence is independent of problem size, and (2) that the computational cost grows linearly with problem size. We focused on deterministic approaches to transport, building on our earlier work in which we performed a new, detailed analysis of some existing transport methods and developed new approaches. The Boltzmann equation (the underlying equation to be solved) and various solution methods have been developed over many years. Consequently, many laboratory codes are based on these methods, which are in some cases decades old. For the transport of x-rays through partially ionized plasmas in local thermodynamic equilibrium, the transport equation is coupled to nonlinear diffusion equations for the electron and ion temperatures via the highly nonlinear Planck function. We investigated the suitability of traditional-solution approaches to transport on terascale architectures and also designed new scalable algorithms; in some cases, we investigated hybrid approaches that combined both.

  9. A second-order accurate immersed boundary-lattice Boltzmann method for particle-laden flows

    NASA Astrophysics Data System (ADS)

    Zhou, Qiang; Fan, Liang-Shih

    2014-07-01

    may lead to more comprehensive studies of the effect of the particle rotation on fluid-solid drag laws. It is also demonstrated that, when the third-order or the fourth-order Runge-Kutta scheme is used, the numerical stability of the present IB-LBM is better than that of all methods in the literature, including the previous IB-LBMs and also the methods with the combination of the IBM and the traditional incompressible Navier-Stokes solver.

  10. Fast and accurate global multiphase arrival tracking: the irregular shortest-path method in a 3-D spherical earth model

    NASA Astrophysics Data System (ADS)

    Huang, Guo-Jiao; Bai, Chao-Ying; Greenhalgh, Stewart

    2013-09-01

    The traditional grid/cell-based wavefront expansion algorithms, such as the shortest path algorithm, can only find the first arrivals or multiply reflected (or mode converted) waves transmitted from subsurface interfaces, but cannot calculate the other later reflections/conversions having a minimax time path. In order to overcome the above limitations, we introduce the concept of a stationary minimax time path of Fermat's Principle into the multistage irregular shortest path method. Here we extend it from Cartesian coordinates for a flat earth model to global ray tracing of multiple phases in a 3-D complex spherical earth model. The ray tracing results for 49 different kinds of crustal, mantle and core phases show that the maximum absolute traveltime error is less than 0.12 s and the average absolute traveltime error is within 0.09 s when compared with the AK135 theoretical traveltime tables for a 1-D reference model. Numerical tests in terms of computational accuracy and CPU time consumption indicate that the new scheme is an accurate, efficient and practical way to perform 3-D multiphase arrival tracking in regional or global traveltime tomography.
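    For orientation only, the sketch below shows the plain first-arrival building block (Dijkstra's shortest-path search on a toy graph); the multistage scheme for later, minimax-time phases described above is considerably more involved and is not reproduced here. Node names and edge traveltimes are assumptions.

        import heapq

        def first_arrivals(graph, source):
            # Dijkstra shortest-path traveltimes from a source node.
            # graph: dict mapping node -> list of (neighbor, traveltime) edges.
            times, heap = {source: 0.0}, [(0.0, source)]
            while heap:
                t, u = heapq.heappop(heap)
                if t > times.get(u, float("inf")):
                    continue
                for v, w in graph[u]:
                    if t + w < times.get(v, float("inf")):
                        times[v] = t + w
                        heapq.heappush(heap, (t + w, v))
            return times

        toy = {"src": [("a", 2.0), ("b", 5.0)],
               "a": [("b", 1.5), ("rcv", 4.0)],
               "b": [("rcv", 1.0)],
               "rcv": []}
        print(first_arrivals(toy, "src"))   # receiver reached at 4.5 via src-a-b-rcv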

  11. Computation of Nonlinear Backscattering Using a High-Order Numerical Method

    NASA Technical Reports Server (NTRS)

    Fibich, G.; Ilan, B.; Tsynkov, S.

    2001-01-01

    The nonlinear Schrodinger equation (NLS) is the standard model for propagation of intense laser beams in Kerr media. The NLS is derived from the nonlinear Helmholtz equation (NLH) by employing the paraxial approximation and neglecting the backscattered waves. In this study we use a fourth-order finite-difference method supplemented by special two-way artificial boundary conditions (ABCs) to solve the NLH as a boundary value problem. Our numerical methodology allows for a direct comparison of the NLH and NLS models and for an accurate quantitative assessment of the backscattered signal.

  12. An automated, fast and accurate registration method to link stranded seeds in permanent prostate implants.

    PubMed

    Westendorp, Hendrik; Nuver, Tonnis T; Moerland, Marinus A; Minken, André W

    2015-10-21

    The geometry of a permanent prostate implant varies over time. Seeds can migrate and edema of the prostate affects the position of seeds. Seed movements directly influence dosimetry, which relates to treatment quality. We present a method that tracks all individual seeds over time, allowing quantification of seed movements. This linking procedure was tested on transrectal ultrasound (TRUS) and cone-beam CT (CBCT) datasets of 699 patients. These datasets were acquired intraoperatively during a dynamic implantation procedure that combines both imaging modalities. The procedure was subdivided into four automatic linking steps. (I) The Hungarian Algorithm was applied to initially link seeds in CBCT and the corresponding TRUS datasets. (II) Strands were identified and optimized based on curvature and linefits: non-optimal links were removed. (III) The positions of unlinked seeds were reviewed and were linked to incomplete strands if within curvature and distance thresholds. (IV) Finally, seeds close to strands were linked, even if the curvature threshold was violated. After linking the seeds an affine transformation was applied. The procedure was repeated until the results were stable or the 6th iteration ended. All results were visually reviewed for mismatches and uncertainties. Eleven implants showed a mismatch and in 12 cases an uncertainty was identified. On average the linking procedure took 42 ms per case. This accurate and fast method has the potential to be used for other time spans, like Day 30, and other imaging modalities. It can potentially be used during a dynamic implantation procedure to evaluate the quality of the permanent prostate implant faster and better. PMID:26439900
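    Step (I) of the procedure can be sketched with an off-the-shelf assignment solver. The snippet below is a simplified illustration (synthetic seed coordinates; the strand and curvature refinements of steps II-IV are omitted): it builds a pairwise distance matrix between the CBCT and TRUS seed sets and solves the assignment problem with the Hungarian algorithm.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def link_seeds(seeds_cbct, seeds_trus):
            # One-to-one initial linking of two seed point sets by minimizing the
            # total Euclidean distance of the matched pairs (Hungarian algorithm).
            cost = np.linalg.norm(seeds_cbct[:, None, :] - seeds_trus[None, :, :], axis=2)
            rows, cols = linear_sum_assignment(cost)
            return list(zip(rows, cols)), cost[rows, cols].sum()

        rng = np.random.default_rng(1)
        cbct = rng.uniform(0.0, 40.0, size=(6, 3))             # mm, synthetic seed positions
        trus = cbct[::-1] + rng.normal(0.0, 0.5, size=(6, 3))   # same seeds, reordered + noise
        pairs, total = link_seeds(cbct, trus)
        print(pairs, "total matched distance: %.1f mm" % total)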

  13. An automated, fast and accurate registration method to link stranded seeds in permanent prostate implants

    NASA Astrophysics Data System (ADS)

    Westendorp, Hendrik; Nuver, Tonnis T.; Moerland, Marinus A.; Minken, André W.

    2015-10-01

    The geometry of a permanent prostate implant varies over time. Seeds can migrate and edema of the prostate affects the position of seeds. Seed movements directly influence dosimetry, which relates to treatment quality. We present a method that tracks all individual seeds over time, allowing quantification of seed movements. This linking procedure was tested on transrectal ultrasound (TRUS) and cone-beam CT (CBCT) datasets of 699 patients. These datasets were acquired intraoperatively during a dynamic implantation procedure that combines both imaging modalities. The procedure was subdivided into four automatic linking steps. (I) The Hungarian Algorithm was applied to initially link seeds in CBCT and the corresponding TRUS datasets. (II) Strands were identified and optimized based on curvature and linefits: non-optimal links were removed. (III) The positions of unlinked seeds were reviewed and were linked to incomplete strands if within curvature and distance thresholds. (IV) Finally, seeds close to strands were linked, even if the curvature threshold was violated. After linking the seeds an affine transformation was applied. The procedure was repeated until the results were stable or the 6th iteration ended. All results were visually reviewed for mismatches and uncertainties. Eleven implants showed a mismatch and in 12 cases an uncertainty was identified. On average the linking procedure took 42 ms per case. This accurate and fast method has the potential to be used for other time spans, like Day 30, and other imaging modalities. It can potentially be used during a dynamic implantation procedure to evaluate the quality of the permanent prostate implant faster and better.

  14. An efficient numerical method for computing dynamics of spin F = 2 Bose-Einstein condensates

    SciTech Connect

    Wang Hanquan

    2011-07-01

    In this paper, we extend the efficient time-splitting Fourier pseudospectral method to solve the generalized Gross-Pitaevskii (GP) equations, which model the dynamics of spin F = 2 Bose-Einstein condensates at extremely low temperature. Using the time-splitting technique, we split the generalized GP equations into one linear part and two nonlinear parts: the linear part is solved with the Fourier pseudospectral method; one of the nonlinear parts is solved analytically, while the other is reformulated into a matrix formulation and solved by diagonalization. We show that the method preserves well the conservation laws associated with the generalized GP equations in 1D and 2D. We also show that the method is second-order accurate in time and spectrally accurate in space through a one-dimensional numerical test. We apply the method to investigate the dynamics of spin F = 2 Bose-Einstein condensates confined in a uniform/nonuniform magnetic field.
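    The structure of the splitting can be conveyed with a single-component stand-in. The sketch below applies Strang-splitting Fourier pseudospectral steps to the 1D cubic nonlinear Schrödinger equation i psi_t = -(1/2) psi_xx + |psi|^2 psi (not the five-component spin-2 GP system; grid, step size, and initial state are assumptions). The nonlinear phase step is exact because |psi| is constant during it, and every substep is norm-preserving, so the discrete mass is conserved to round-off.

        import numpy as np

        n, L, dt = 256, 20.0, 1e-3
        x = np.linspace(-L / 2, L / 2, n, endpoint=False)
        k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)          # angular wavenumbers
        psi = 1.0 / np.cosh(x) * np.exp(0.5j * x)           # localized initial wave packet

        def strang_step(psi, dt):
            psi = psi * np.exp(-0.5j * dt * np.abs(psi) ** 2)                  # half nonlinear step
            psi = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(psi))   # full kinetic step (Fourier)
            return psi * np.exp(-0.5j * dt * np.abs(psi) ** 2)                 # half nonlinear step

        mass0 = np.sum(np.abs(psi) ** 2) * (L / n)
        for _ in range(2000):
            psi = strang_step(psi, dt)
        print("mass conservation error:", abs(np.sum(np.abs(psi) ** 2) * (L / n) - mass0))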

  15. PLIF: A rapid, accurate method to detect and quantitatively assess protein-lipid interactions.

    PubMed

    Ceccato, Laurie; Chicanne, Gaëtan; Nahoum, Virginie; Pons, Véronique; Payrastre, Bernard; Gaits-Iacovoni, Frédérique; Viaud, Julien

    2016-01-01

    Phosphoinositides are a type of cellular phospholipid that regulate signaling in a wide range of cellular and physiological processes through the interaction between their phosphorylated inositol head group and specific domains in various cytosolic proteins. These lipids also influence the activity of transmembrane proteins. Aberrant phosphoinositide signaling is associated with numerous diseases, including cancer, obesity, and diabetes. Thus, identifying phosphoinositide-binding partners and the aspects that define their specificity can direct drug development. However, current methods are costly, time-consuming, or technically challenging and inaccessible to many laboratories. We developed a method called PLIF (for "protein-lipid interaction by fluorescence") that uses fluorescently labeled liposomes and tethered, tagged proteins or peptides to enable fast and reliable determination of protein domain specificity for given phosphoinositides in a membrane environment. We validated PLIF against previously known phosphoinositide-binding partners for various proteins and obtained relative affinity profiles. Moreover, PLIF analysis of the sorting nexin (SNX) family revealed not only that SNXs bound most strongly to phosphatidylinositol 3-phosphate (PtdIns3P or PI3P), which is known from analysis with other methods, but also that they interacted with other phosphoinositides, which had not previously been detected using other techniques. Different phosphoinositide partners, even those with relatively weak binding affinity, could account for the diverse functions of SNXs in vesicular trafficking and protein sorting. Because PLIF is sensitive, semiquantitative, and performed in a high-throughput manner, it may be used to screen for highly specific protein-lipid interaction inhibitors. PMID:27025878

  16. Comparison of four stable numerical methods for Abel's integral equation

    NASA Technical Reports Server (NTRS)

    Murio, Diego A.; Mejia, Carlos E.

    1991-01-01

    The 3-D image reconstruction from cone-beam projections in computerized tomography leads naturally, in the case of radial symmetry, to the study of Abel-type integral equations. If the experimental information is obtained from measured data, on a discrete set of points, special methods are needed in order to restore continuity with respect to the data. A new combined Regularized-Adjoint-Conjugate Gradient algorithm, together with two different implementations of the Mollification Method (one based on a data filtering technique and the other on the mollification of the kernel function) and a regularization by truncation method (initially proposed for 2-D ray sample schemes and more recently extended to 3-D cone-beam image reconstruction) are extensively tested and compared for accuracy and numerical stability as functions of the level of noise in the data.

  17. Numerical Analysis of a Finite Element/Volume Penalty Method

    NASA Astrophysics Data System (ADS)

    Maury, Bertrand

    The penalty method makes it possible to incorporate a large class of constraints in general purpose Finite Element solvers like freeFEM++. We present here some contributions to the numerical analysis of this method. We propose an abstract framework for this approach, together with some general error estimates based on the penalty parameter ɛ and the space discretization parameter h. As this work is motivated by the possibility to handle constraints like rigid motion for fluid-particle flows, we shall pay special attention to a model problem of this kind, where the constraint is prescribed over a subdomain. We show how the abstract estimate can be applied to this situation, in the case where a non-body-fitted mesh is used. In addition, we describe how this method provides an approximation of the Lagrange multiplier associated to the constraint.
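    A one-dimensional finite-difference analogue makes the idea tangible. The toy sketch below (finite differences rather than the finite elements of the paper; problem data and penalty size are assumptions) enforces u = 0 on a subdomain of (0,1) by adding a 1/ɛ penalty on the corresponding nodes of a Poisson problem.

        import numpy as np

        # Solve -u'' = 1 on (0,1), u(0) = u(1) = 0, with the extra constraint
        # u = 0 on (0.4, 0.6) imposed by a volume penalty of strength 1/eps.
        n = 199
        h = 1.0 / (n + 1)
        x = np.linspace(h, 1.0 - h, n)
        A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1)) / h**2
        f = np.ones(n)

        eps = 1e-8
        inside = (x > 0.4) & (x < 0.6)
        u = np.linalg.solve(A + np.diag(np.where(inside, 1.0 / eps, 0.0)), f)
        print("max |u| on the penalized subdomain:", np.abs(u[inside]).max())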

  18. Numerical methods for high-dimensional probability density function equations

    NASA Astrophysics Data System (ADS)

    Cho, H.; Venturi, D.; Karniadakis, G. E.

    2016-01-01

    In this paper we address the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations arise naturally in many different areas of mathematical physics, e.g., in particle systems (Liouville and Boltzmann equations), stochastic dynamical systems (Fokker-Planck and Dostupov-Pugachev equations), random wave theory (Malakhov-Saichev equations) and coarse-grained stochastic systems (Mori-Zwanzig equations). We propose three different classes of new algorithms addressing high-dimensionality: The first one is based on separated series expansions resulting in a sequence of low-dimensional problems that can be solved recursively and in parallel by using alternating direction methods. The second class of algorithms relies on truncation of interaction in low-orders that resembles the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) framework of kinetic gas theory and it yields a hierarchy of coupled probability density function equations. The third class of algorithms is based on high-dimensional model representations, e.g., the ANOVA method and probabilistic collocation methods. A common feature of all these approaches is that they are reducible to the problem of computing the solution to high-dimensional equations via a sequence of low-dimensional problems. The effectiveness of the new algorithms is demonstrated in numerical examples involving nonlinear stochastic dynamical systems and partial differential equations, with up to 120 variables.

  19. Improved numerical method for subchannel cross-flow calculations

    SciTech Connect

    Kaya, S.; Anghaie, S.

    1986-01-01

    COBRA-OSU is a fast running computer code for coupled kinetic and thermal-hydraulic analysis of nuclear reactor core subchannels, currently under development at Oregon State University. This code is a modified version of COBRA-IV with two major improved features. First, COBRA-OSU uses the Gaussian elimination method instead of Gauss-Seidel iteration for subchannel cross-flow calculation. Second, COBRA-OSU has an additional model for regionwise point reactor kinetics which includes all major feedback reactivity effects on calculation of the axial power profile during the course of a transient. This paper summarizes the improved numerical features of the COBRA-OSU code.
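    The contrast between the two linear solvers can be shown on a small stand-in system (illustrative only; the actual subchannel cross-flow equations are not reproduced here). NumPy's direct solver performs Gaussian elimination with pivoting, which is compared against a plain Gauss-Seidel sweep.

        import numpy as np

        def gauss_seidel(A, b, iters=10):
            # Plain Gauss-Seidel iteration, for comparison with a direct solve.
            x = np.zeros_like(b)
            for _ in range(iters):
                for i in range(len(b)):
                    x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
            return x

        # Small diagonally dominant system standing in for the cross-flow equations.
        A = np.array([[4.0, -1.0, 0.0],
                      [-1.0, 4.0, -1.0],
                      [0.0, -1.0, 4.0]])
        b = np.array([1.0, 2.0, 3.0])

        x_direct = np.linalg.solve(A, b)     # Gaussian elimination (as in COBRA-OSU)
        x_iter = gauss_seidel(A, b)          # iterative alternative (as in COBRA-IV)
        print("difference after 10 sweeps:", np.max(np.abs(x_direct - x_iter)))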

  20. Numerical divergence effects of equivalence theory in the nodal expansion method

    SciTech Connect

    Zika, M.R.; Downar, T.J.

    1993-11-01

    Accurate solutions of the advanced nodal equations require the use of discontinuity factors (DFs) to account for the homogenization errors that are inherent in all coarse-mesh nodal methods. During the last several years, nodal equivalence theory (NET) has successfully been implemented for Cartesian geometry and has received widespread acceptance in the light water reactor industry. The extension of NET to other reactor types has had limited success. Recent efforts to implement NET within the framework of the nodal expansion method have been successful for the fast breeder reactor. However, attempts to apply the same methods to thermal reactors such as the Modular High-Temperature Gas Reactor (MHTGR) have led to numerical divergence problems that can be attributed directly to the magnitude of the DFs. In the work performed here, it was found that the numerical problems occur in the inner and upscatter iterations of the solution algorithm. These iterations use a Gauss-Seidel iterative technique that is always convergent for problems with unity DFs. However, for an MHTGR model that requires large DFs, both the inner and upscatter iterations were divergent. Initial investigations into methods for bounding the DFs have proven unsatisfactory as a means of remedying the convergence problems. Although the DFs could be bounded to yield a convergent solution, several cases were encountered where the resulting flux solution was less accurate than the solution without DFs. For the specific case of problems without upscattering, an alternate numerical method for the inner iteration, an LU decomposition, was identified and shown to be feasible.
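
    The convergence issue described above, Gauss-Seidel diverging once strong off-diagonal coupling (as introduced by large discontinuity factors) destroys diagonal dominance while a direct LU-type solve is unaffected, can be reproduced on a small generic system; the matrix below is purely illustrative and is not an MHTGR model.

        import numpy as np

        def gauss_seidel(A, b, iters=50):
            """Plain Gauss-Seidel sweeps; converges when A is (for example)
            strictly diagonally dominant, but may diverge otherwise."""
            x = np.zeros_like(b)
            for _ in range(iters):
                for i in range(len(b)):
                    s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                    x[i] = (b[i] - s) / A[i, i]
            return x

        # A small system whose off-diagonal coupling overwhelms the diagonal
        # (illustrative only).
        A = np.array([[1.0, 2.0, 0.0],
                      [2.0, 1.0, 2.0],
                      [0.0, 2.0, 1.0]])
        b = np.array([1.0, 2.0, 3.0])

        x_gs = gauss_seidel(A, b)        # iterates blow up (divergence)
        x_lu = np.linalg.solve(A, b)     # direct (LU-based) solve succeeds
        print(np.max(np.abs(x_gs)), np.max(np.abs(A @ x_lu - b)))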

  1. Numerical solution of flame sheet problems with and without multigrid methods

    NASA Technical Reports Server (NTRS)

    Douglas, Craig C.; Ern, Alexandre

    1993-01-01

    Flame sheet problems are on the natural route to the numerical solution of multidimensional flames, which, in turn, are important in many engineering applications. In order to model the structure of flames more accurately, we use the vorticity-velocity formulation of the fluid flow equations, as opposed to the streamfunction-vorticity approach. The numerical solution of the resulting nonlinear coupled elliptic partial differential equations involves a pseudo transient process and a steady state Newton iteration. Rather than working with dimensionless variables, we introduce scale factors that can yield significant savings in the execution time. In this context, we also investigate the applicability and performance of several multigrid methods, focusing on nonlinear damped Newton multigrid, using either one way or correction schemes.
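
    A generic damped Newton iteration with residual-based step halving, in the spirit of the damped Newton solves mentioned above, is sketched below on a small made-up nonlinear system; it is not the paper's Newton multigrid solver and bears no relation to the flame-sheet equations themselves.

        import numpy as np

        def damped_newton(F, J, x0, tol=1e-10, max_iter=50):
            """Damped Newton iteration: halve the step until the residual decreases."""
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                r = F(x)
                if np.linalg.norm(r) < tol:
                    break
                dx = np.linalg.solve(J(x), -r)
                lam = 1.0
                while np.linalg.norm(F(x + lam * dx)) >= np.linalg.norm(r) and lam > 1e-4:
                    lam *= 0.5                      # damping: backtrack on the step
                x = x + lam * dx
            return x

        # Small nonlinear test system (illustrative only)
        F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, np.exp(v[0]) + v[1] - 1.0])
        J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [np.exp(v[0]), 1.0]])
        root = damped_newton(F, J, [1.0, 1.0])
        print(root, np.linalg.norm(F(root)))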

  2. Optimization methods and silicon solar cell numerical models

    NASA Technical Reports Server (NTRS)

    Girardini, K.

    1986-01-01

    The goal of this project is the development of an optimization algorithm for use with a solar cell model. It is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm has been developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAPID). SCAPID uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the operation of a solar cell. A major obstacle is that the numerical methods used in SCAPID require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem has been alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution. Adapting SCAPID so that it could be called iteratively by the optimization code provided another means of reducing the CPU time required to complete an optimization. Instead of calculating the entire I-V curve, as is usually done in SCAPID, only the efficiency is calculated (maximum power voltage and current) and the solution from previous calculations is used to initiate the next solution.

  3. Validation of a numerical method for unsteady flow calculations

    SciTech Connect

    Giles, M.; Haimes, R. (Dept. of Aeronautics and Astronautics)

    1993-01-01

    This paper describes and validates a numerical method for the calculation of unsteady inviscid and viscous flows. A companion paper compares experimental measurements of unsteady heat transfer on a transonic rotor with the corresponding computational results. The mathematical model is the Reynolds-averaged unsteady Navier-Stokes equations for a compressible ideal gas. Quasi-three-dimensionality is included through the use of a variable streamtube thickness. The numerical algorithm is unusual in two respects: (a) for reasons of efficiency and flexibility, it uses a hybrid Navier-Stokes/Euler method, and (b) to allow for the computation of stator/rotor combinations with arbitrary pitch ratio, a novel space-time coordinate transformation is used. Several test cases are presented to validate the performance of the computer program, UNSFLO. These include: (a) unsteady, inviscid flat-plate cascade flows; (b) steady and unsteady, viscous flat-plate cascade flows; and (c) steady turbine heat transfer and loss prediction. In the first two sets of cases comparisons are made with theory, and in the third the comparison is with experimental data.

  4. A Generalized Subspace Least Mean Square Method for High-resolution Accurate Estimation of Power System Oscillation Modes

    SciTech Connect

    Zhang, Peng; Zhou, Ning; Abdollahi, Ali

    2013-09-10

    A Generalized Subspace-Least Mean Square (GSLMS) method is presented for accurate and robust estimation of oscillation modes from exponentially damped power system signals. The method is based on the orthogonality of the signal and noise eigenvectors of the signal autocorrelation matrix. Performance of the proposed method is evaluated using Monte Carlo simulation and compared with the Prony method. Test results show that the GSLMS is highly resilient to noise and significantly outperforms the Prony method in tracking power system modes under noisy conditions.
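
    For orientation, the Prony-type baseline against which GSLMS is compared can be sketched as a linear-prediction fit: solve a least-squares autoregressive model for a sampled ringdown signal and read damping and frequency off the roots of the characteristic polynomial. The signal parameters below are illustrative, not power system data.

        import numpy as np

        # Synthetic ringdown: one damped oscillation mode plus noise (illustrative)
        dt, n = 0.02, 400
        t = np.arange(n) * dt
        sigma, f_hz = -0.25, 0.8                       # true damping and frequency
        y = np.exp(sigma * t) * np.cos(2 * np.pi * f_hz * t) + 0.01 * np.random.randn(n)

        # Linear prediction (Prony-type) model of order p: y[k] = sum_j a_j * y[k-j]
        p = 4
        rows = np.column_stack([y[p - j - 1:n - j - 1] for j in range(p)])
        a, *_ = np.linalg.lstsq(rows, y[p:], rcond=None)

        # Roots of z^p - a_1 z^(p-1) - ... - a_p are the discrete-time modes
        z = np.roots(np.concatenate(([1.0], -a)))
        s = np.log(z) / dt                              # continuous-time eigenvalues
        modes = s[np.imag(s) > 0]                       # keep one of each conjugate pair
        print("estimated damping and frequency (Hz):",
              modes.real, modes.imag / (2 * np.pi))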

  5. A Method for Deriving Accurate Gas-Phase Abundances for the Multiphase Interstellar Galactic Halo

    NASA Astrophysics Data System (ADS)

    Howk, J. Christopher; Sembach, Kenneth R.; Savage, Blair D.

    2006-01-01

    We describe a new method for accurately determining total gas-phase abundances for the Galactic halo interstellar medium with minimal ionization uncertainties. For sight lines toward globular clusters containing both ultraviolet-bright stars and radio pulsars, it is possible to measure column densities of H I and several ionization states of selected metals using ultraviolet absorption line measurements and of H II using radio dispersion measurements. By measuring the ionized hydrogen column, we minimize ionization uncertainties that plague abundance measurements of Galactic halo gas. We apply this method for the first time to the sight line toward the globular cluster Messier 3 [(l,b)=(42.2d,+78.7d), d=10.2 kpc, z=10.0 kpc] using Far Ultraviolet Spectroscopic Explorer and Hubble Space Telescope ultraviolet spectroscopy of the post-asymptotic giant branch star von Zeipel 1128 and radio observations by Ransom et al. of recently discovered millisecond pulsars. The fraction of hydrogen associated with ionized gas along this sight line is 45%+/-5%, with the warm (T~104 K) and hot (T>~105 K) ionized phases present in roughly a 5:1 ratio. This is the highest measured fraction of ionized hydrogen along a high-latitude pulsar sight line. We derive total gas-phase abundances logN(S)/N(H)=-4.87+/-0.03 and logN(Fe)/N(H)=-5.27+/-0.05. Our derived sulfur abundance is in excellent agreement with recent solar system determinations of Asplund, Grevesse, & Sauval. However, it is -0.14 dex below the solar system abundance typically adopted in studies of the interstellar medium. The iron abundance is ~-0.7 dex below the solar system abundance, consistent with the significant incorporation of iron into interstellar grains. Abundance estimates derived by simply comparing S II and Fe II to H I are +0.17 and +0.11 dex higher, respectively, than the abundance estimates derived from our refined approach. Ionization corrections to the gas-phase abundances measured in the standard way are

  6. Numerical modeling of spray combustion with an advanced VOF method

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Shang, Huan-Min; Shih, Ming-Hsin; Liaw, Paul

    1995-01-01

    This paper summarizes the technical development and validation of a multiphase computational fluid dynamics (CFD) numerical method using the volume-of-fluid (VOF) model and a Lagrangian tracking model which can be employed to analyze general multiphase flow problems with free surface mechanism. The gas-liquid interface mass, momentum and energy conservation relationships are modeled by continuum surface mechanisms. A new solution method is developed such that the present VOF model can be applied for all-speed flow regimes. The objectives of the present study are to develop and verify the fractional volume-of-fluid cell partitioning approach into a predictor-corrector algorithm and to demonstrate the effectiveness of the present approach by simulating benchmark problems including laminar impinging jets, shear coaxial jet atomization and shear coaxial spray combustion flows.

  7. A block interface flux reconstruction method for numerical simulation with high order finite difference scheme

    NASA Astrophysics Data System (ADS)

    Gao, Junhui

    2013-05-01

    Overlap grids are usually used in the numerical simulation of flow over complex geometry with high order finite difference schemes. It is difficult to generate overlap grids and the connectivity information between adjacent blocks, especially when interpolation is required for non-coincident overlap grids. In this study, an interface flux reconstruction (IFR) method is proposed for numerical simulation using high order finite difference schemes with multi-block structured grids. In this method the neighboring blocks share a common face, and the fluxes on each block are matched to set the boundary conditions for each interior block. Therefore this method has the promise of allowing discontinuous grids on either side of an interior block interface. The proposed method is proven to be stable for the 7-point central DRP scheme coupled with 4-point and 5-point boundary closure schemes, as well as the 4th order compact scheme coupled with a 3rd order boundary closure scheme. Four problems are numerically solved with the developed code to validate the interface flux reconstruction method. The IFR method coupled with the 4th order DRP scheme or compact scheme is verified to be 4th order accurate on one- and two-dimensional wave propagation problems. Two-dimensional pulse propagation in a mean flow is computed on a wavy mesh to demonstrate the ability of the proposed method for non-uniform grids. To demonstrate the ability of the proposed method for complex geometry, sound scattering by two cylinders is simulated and the numerical results are compared with the analytical data. It is shown that the numerical results agree well with the analytical data. Finally the IFR method is applied to simulate viscous flow past a cylinder at Reynolds number 150 to show its capability for viscous problems. The computed pressure coefficient on the cylinder surface, the frequency of vortex shedding, and the lift and drag coefficients are presented. The numerical results are compared with the data

  8. Asymmetric MRI magnet design using a hybrid numerical method.

    PubMed

    Zhao, H; Crozier, S; Doddrell, D M

    1999-12-01

    This paper describes a hybrid numerical method for the design of asymmetric magnetic resonance imaging magnet systems. The problem is formulated as a field synthesis and the desired current density on the surface of a cylinder is first calculated by solving a Fredholm equation of the first kind. Nonlinear optimization methods are then invoked to fit practical magnet coils to the desired current density. The field calculations are performed using a semi-analytical method. A new type of asymmetric magnet is proposed in this work. The asymmetric MRI magnet allows the diameter spherical imaging volume to be positioned close to one end of the magnet. The main advantages of making the magnet asymmetric include the potential to reduce the perception of claustrophobia for the patient, better access to the patient by attending physicians, and the potential for reduced peripheral nerve stimulation due to the gradient coil configuration. The results highlight that the method can be used to obtain an asymmetric MRI magnet structure and a very homogeneous magnetic field over the central imaging volume in clinical systems of approximately 1.2 m in length. Unshielded designs are the focus of this work. This method is flexible and may be applied to magnets of other geometries. PMID:10579958

  9. A method for improving time-stepping numerics

    NASA Astrophysics Data System (ADS)

    Williams, P. D.

    2012-04-01

    In contemporary numerical simulations of the atmosphere, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. This presentation will review the available evidence, and will then suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. The most common time-stepping method is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in the following atmospheric models (and many more): ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability in these models, it also introduces non-physical damping and reduces the accuracy. This presentation proposes a simple modification to the RA filter. The modification has become known as the RAW filter (Williams 2011). When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy. (The phase accuracy remains second-order.) The RAW filter can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. Results will be shown from recent implementations of the RAW filter in various atmospheric models, including SPEEDY and COSMO. For example, in SPEEDY, the skill of weather forecasts is found to be significantly improved. In particular, in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter (Amezcua, Kalnay & Williams 2011). These improvements are encouraging for the use of the RAW filter in other models.
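
    The RA and RAW filters are compact enough to state in a few lines. The sketch below integrates the oscillation equation dx/dt = i*omega*x with the leapfrog scheme and applies the filter after each step; alpha = 1 recovers the classical RA filter, while alpha near 0.53 is the value suggested for the RAW filter. Step size, filter strength, and omega are illustrative choices, not values from any of the models listed above.

        import numpy as np

        def leapfrog_filtered(omega, dt, nsteps, nu=0.2, alpha=0.53):
            """Leapfrog integration of dx/dt = i*omega*x with the Robert-Asselin-
            Williams (RAW) filter; alpha = 1.0 reduces to the classical RA filter."""
            x_old = 1.0 + 0.0j                      # x(0)
            x_now = np.exp(1j * omega * dt)         # exact value at t = dt to start
            for _ in range(nsteps):
                x_new = x_old + 2j * omega * dt * x_now        # leapfrog step
                d = 0.5 * nu * (x_old - 2.0 * x_now + x_new)   # filter displacement
                x_now += alpha * d                             # RA part
                x_new += (alpha - 1.0) * d                     # extra RAW correction
                x_old, x_now = x_now, x_new
            return x_now

        omega, dt, nsteps = 1.0, 0.2, 500
        for a in (1.0, 0.53):                       # RA filter vs RAW filter
            x = leapfrog_filtered(omega, dt, nsteps, alpha=a)
            print(f"alpha = {a}: amplitude error = {abs(abs(x) - 1.0):.2e}")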

  10. A time-accurate implicit method for chemical non-equilibrium flows at all speeds

    NASA Technical Reports Server (NTRS)

    Shuen, Jian-Shun

    1992-01-01

    A new time-accurate coupled solution procedure for solving the chemical non-equilibrium Navier-Stokes equations over a wide range of Mach numbers is described. The scheme is shown to be very efficient and robust for flows with velocities ranging from M ≤ 10^-10 to supersonic speeds.

  11. A spectrally accurate method for overlapping grid solution of incompressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Merrill, Brandon E.; Peet, Yulia T.; Fischer, Paul F.; Lottes, James W.

    2016-02-01

    An overlapping mesh methodology that is spectrally accurate in space and up to third-order accurate in time is developed for the solution of unsteady incompressible flow equations in three-dimensional domains. The ability to decompose a global domain into separate, but overlapping, subdomains eases mesh generation procedures and increases the flexibility of modeling flows with complex geometries. The methodology employs implicit spectral element discretization of equations in each subdomain and explicit treatment of subdomain interfaces with spectrally-accurate spatial interpolation and high-order accurate temporal extrapolation, and requires few, if any, iterations, yet maintains the global accuracy and stability of the underlying flow solver. The overlapping mesh methodology is thoroughly validated using two-dimensional and three-dimensional benchmark problems in laminar and turbulent flows. The spatial and temporal convergence is documented and is in agreement with the nominal order of accuracy of the solver. The influence of long integration times, as well as of inflow-outflow global boundary conditions, on the performance of the overlapping grid solver is assessed. In a benchmark of fully developed turbulent pipe flow, the turbulent statistics computed with the overlapping grids are validated against published experimental and computational data. Scaling tests are presented that show near-linear strong scaling, even for moderately large processor counts.

  12. Libration Orbit Mission Design: Applications of Numerical & Dynamical Methods

    NASA Technical Reports Server (NTRS)

    Bauer, Frank (Technical Monitor); Folta, David; Beckman, Mark

    2002-01-01

    Sun-Earth libration point orbits serve as excellent locations for scientific investigations. These orbits are often selected to minimize environmental disturbances and maximize observing efficiency. Trajectory design in support of libration orbits is ever more challenging as more complex missions are envisioned in the next decade. Trajectory design software must be further enabled to incorporate better understanding of the libration orbit solution space and thus improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple libration missions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes algorithm and software development. The recently launched Microwave Anisotropy Probe (MAP) and upcoming James Webb Space Telescope (JWST) and Constellation-X missions are examples of the use of improved numerical methods for attaining constrained orbital parameters and controlling their dynamical evolution at the collinear libration points. This paper presents a history of libration point missions, a brief description of the numerical and dynamical design techniques including software used, and a sample of future GSFC mission designs.

  13. Unsaturated Shear Strength and Numerical Analysis Methods for Unsaturated Soils

    NASA Astrophysics Data System (ADS)

    Kim, D.; Kim, G.; Kim, D.; Baek, H.; Kang, S.

    2011-12-01

    The angles of shearing resistance (φb) and internal friction (φ') appear to be identical in the low suction range, but the angle of shearing resistance shows non-linearity as suction increases. In most numerical analyses, however, a fixed value for the angle of shearing resistance is applied even in the low suction range for practical reasons, often leading to a false conclusion. In this study, a numerical analysis has been undertaken employing the shear strength curve of unsaturated soils estimated from the residual water content of the SWCC, as proposed by Vanapalli et al. (1996). The result was also compared with that from a fixed value of φb. It is suggested that, in case it is difficult to measure the unsaturated shear strength curve through triaxial soil tests, the estimated shear strength curve using the residual water content can be a useful alternative. This result was applied to analyzing the slope stability of unsaturated soils. The effects of a continuous rainfall on slope stability were analyzed using the commercial program "SLOPE/W", with the coupled infiltration analysis program "SEEP/W" from GEO-SLOPE International Ltd. The results show that, prior to infiltration by the intensive rainfall, the safety factors using the estimated shear strength curve were substantially higher than those from the fixed value of φb at all time points. After the intensive infiltration, both methods showed a similar behavior.
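
    One commonly quoted form of the Vanapalli et al. (1996) estimate, assumed here and worth checking against the original paper, expresses the unsaturated shear strength through the normalized volumetric water content from the SWCC. A small helper implementing that assumed form is sketched below with illustrative parameter values.

        import numpy as np

        def unsat_shear_strength(sigma_n, ua, uw, c_eff, phi_eff_deg,
                                 theta, theta_s, theta_r):
            """Assumed form of the Vanapalli et al. (1996) estimate:
            tau = c' + (sigma_n - ua) tan(phi') + (ua - uw) * Theta * tan(phi'),
            with Theta the normalized volumetric water content from the SWCC.
            All numerical values passed by the caller are illustrative."""
            tan_phi = np.tan(np.radians(phi_eff_deg))
            Theta = (theta - theta_r) / (theta_s - theta_r)
            suction = ua - uw
            return c_eff + (sigma_n - ua) * tan_phi + suction * Theta * tan_phi

        # Example call with made-up soil parameters (stresses and suction in kPa)
        tau = unsat_shear_strength(sigma_n=100.0, ua=0.0, uw=-50.0, c_eff=5.0,
                                   phi_eff_deg=30.0, theta=0.25, theta_s=0.45,
                                   theta_r=0.05)
        print("estimated shear strength (kPa):", tau)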

  14. A Hybrid Numerical Analysis Method for Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

    Forth, Scott C.; Staroselsky, Alexander

    2001-01-01

    A new hybrid surface-integral-finite-element numerical scheme has been developed to model a three-dimensional crack propagating through a thin, multi-layered coating. The finite element method was used to model the physical state of the coating (far field), and the surface integral method was used to model the fatigue crack growth. The two formulations are coupled through the need to satisfy boundary conditions on the crack surface and the external boundary. The coupling is sufficiently weak that the surface integral mesh of the crack surface and the finite element mesh of the uncracked volume can be set up independently. Thus when modeling crack growth, the finite element mesh can remain fixed for the duration of the simulation as the crack mesh is advanced. This method was implemented to evaluate the feasibility of fabricating a structural health monitoring system for real-time detection of surface cracks propagating in engine components. In this work, the authors formulate the hybrid surface-integral-finite-element method and discuss the mechanical issues of implementing a structural health monitoring system in an aircraft engine environment.

  15. Numerical method of characteristics for one-dimensional blood flow

    NASA Astrophysics Data System (ADS)

    Acosta, Sebastian; Puelz, Charles; Rivière, Béatrice; Penny, Daniel J.; Rusin, Craig G.

    2015-08-01

    Mathematical modeling at the level of the full cardiovascular system requires the numerical approximation of solutions to a one-dimensional nonlinear hyperbolic system describing flow in a single vessel. This model is often simulated by computationally intensive methods like finite elements and discontinuous Galerkin, while some recent applications require more efficient approaches (e.g. for real-time clinical decision support, phenomena occurring over multiple cardiac cycles, iterative solutions to optimization/inverse problems, and uncertainty quantification). Further, the high speed of pressure waves in blood vessels greatly restricts the time step needed for stability in explicit schemes. We address both cost and stability by presenting an efficient and unconditionally stable method for approximating solutions to diagonal nonlinear hyperbolic systems. Theoretical analysis of the algorithm is given along with a comparison of our method to a discontinuous Galerkin implementation. Lastly, we demonstrate the utility of the proposed method by implementing it on small and large arterial networks of vessels whose elastic and geometrical parameters are physiologically relevant.
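
    The appeal of a characteristics-based update for hyperbolic problems can be seen on a scalar advection toy problem: trace each grid point back along its characteristic and interpolate, which remains stable even when the CFL number is far above one. This is a generic semi-Lagrangian sketch with made-up parameters, not the authors' blood-flow scheme.

        import numpy as np

        # Semi-Lagrangian (characteristics) update for u_t + a u_x = 0 on a periodic
        # grid: trace the foot of each characteristic and interpolate linearly.
        n, a, dt, nsteps = 200, 1.0, 0.047, 100      # CFL = a*dt/dx ~ 9.4 >> 1
        x = np.linspace(0.0, 1.0, n, endpoint=False)
        dx = x[1] - x[0]
        u = np.exp(-200.0 * (x - 0.3) ** 2)

        for _ in range(nsteps):
            feet = (x - a * dt) % 1.0                 # departure points (periodic)
            j = np.floor(feet / dx).astype(int)
            w = feet / dx - j                         # linear interpolation weight
            u = (1.0 - w) * u[j] + w * u[(j + 1) % n]

        # The pulse has advected a*dt*nsteps domain lengths without instability
        print("max of advected solution:", u.max())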

  16. Space-time adaptive numerical methods for geophysical applications.

    PubMed

    Castro, C E; Käser, M; Toro, E F

    2009-11-28

    In this paper we present high-order formulations of the finite volume and discontinuous Galerkin finite-element methods for wave propagation problems, with a space-time adaptation technique using unstructured meshes in order to reduce computational cost without reducing accuracy. Both methods can be derived in a similar mathematical framework and are identical in their first-order version. In their extension to higher order accuracy in space and time, both methods use spatial polynomials of higher degree inside each element, a high-order solution of the generalized Riemann problem and a high-order time integration method based on the Taylor series expansion. The static adaptation strategy uses locally refined high-resolution meshes in areas with low wave speeds to improve the approximation quality. Furthermore, the time step length is chosen locally and adaptively, such that the solution is evolved explicitly in time with an optimal time step determined by a local stability criterion. After validating the numerical approach, both schemes are applied to geophysical wave propagation problems such as tsunami waves and seismic waves, comparing the new approach with the classical global time-stepping technique. The problem of mesh partitioning for large-scale applications on multi-processor architectures is discussed and a new mesh partition approach is proposed and tested to further reduce computational cost. PMID:19840984

  17. Development of numerical methods to problems of micromechanics

    NASA Astrophysics Data System (ADS)

    Garcia-Martinez, Jose Ramon

    In this dissertation we utilize the finite element method to investigate three micromechanical problems. In Chapter 2, we study the compliance contribution tensor H of multiple branched cracks. The cracks grow from a deltoid pore at their center into a triple crack. For plane strain conditions, two-dimensional models of the branched crack are built and solved in ABAQUS. The displacement field over the surface of the branched crack and the deltoid is curve-fitted to carry out the surface integral defining the compliance contribution tensor H. The predicted values are in good agreement with the analytical solution. In Chapter 3 a three-dimensional finite element program using an unaligned mesh with an eight-node isoparametric element is developed to study the compliance contribution tensor H of cavities with superellipsoid shapes. A mesh clustering algorithm is used to increase the number of elements inside and near the superellipsoid surface in order to obtain a mesh-independent solution. The numerical results are compared with the analytical solution for a sphere; the error of the numerical approximation varied from 8 to 11%. It is found that the number of elements inside the superellipsoid is insufficient. An algorithm to mesh the volumes inside and outside the cube independently is proposed to increase the accuracy in the calculation of H. As n1 and n2 increase, the numerical solutions show that H1111 → 0 and H2211 → 0. Although no analytical solution exists for these concave shapes, a bound of 0 for the terms H1111 and H2211 is suggested. Finally, in Chapter 4 a numerical verification of the cross-property connection between the effective fluid permeability and the effective electrical conductivity is studied. A molecular dynamics algorithm is used to generate a set of different microstructural patterns. The volumetric average over a cubic volume is used to obtain the effective electrical conductivity and the effective fluid permeability. The tortuosity of the porous phase

  18. A survey of numerical methods for shock physics applications

    SciTech Connect

    Hertel, E.S. Jr.

    1997-10-01

    Hydrocodes, or more accurately shock physics analysis packages, have been widely used in the US Department of Energy (DOE) laboratories and elsewhere around the world for over 30 years. Initial applications included weapons effects studies where the pressure levels were high enough to disregard material strength, hence the term hydrocode. Over the last 30 years, Sandia has worked extensively to develop and apply advanced hydrocodes to armor/anti-armor interactions, warhead design, high explosive initiation, and nuclear weapon safety issues. The needs of the DOE have changed over the last 30 years, especially over the last decade. A much stronger emphasis is currently placed on the details of material deformation and high explosive initiation phenomena. The hydrocodes of 30 years ago have now evolved into sophisticated analysis tools that can replace testing in some situations and complement it in all situations. A brief history of the development of hydrocodes in the US will be given. The author also discusses and compares the four principal methods in use today for the solution of the conservation equations of mass, momentum, and energy for shock physics applications. The techniques discussed are the Eulerian methods currently employed by the Sandia multi-dimensional shock physics analysis package known as CTH; the element-based Lagrangian method currently used by codes like DYNA; the element-free Lagrangian method (also known as smoothed particle hydrodynamics) used by codes like the Los Alamos code SPHINX; and the Arbitrary Lagrangian Eulerian methods used by codes like the Lawrence Livermore code CALE or the Sandia code ALEGRA.

  19. A fast numerical solution of scattering by a cylinder: Spectral method for the boundary integral equations

    NASA Technical Reports Server (NTRS)

    Hu, Fang Q.

    1994-01-01

    It is known that the exact analytic solutions of wave scattering by a circular cylinder, when they exist, are not in closed form but in infinite series which converge slowly for high frequency waves. In this paper, we present a fast numerical solution for the scattering problem in which the boundary integral equations, reformulated from the Helmholtz equation, are solved using a Fourier spectral method. It is shown that the special geometry considered here allows the implementation of the spectral method to be simple and very efficient. The present method differs from previous approaches in that the singularities of the integral kernels are removed and dealt with accurately. The proposed method preserves the spectral accuracy and is shown to have an exponential rate of convergence. Aspects of efficient implementation using the FFT are discussed. Moreover, the boundary integral equations of a combined single- and double-layer representation are used in the present paper. This ensures the uniqueness of the numerical solution for the scattering problem at all frequencies. Although a strongly singular kernel is encountered for Neumann boundary conditions, we show that the hypersingularity can be handled easily in the spectral method. Numerical examples that demonstrate the validity of the method are also presented.

  20. NUMERICAL MODELING OF CONTAMINANT TRANSPORT IN FRACTURED POROUS MEDIA USING MIXED FINITE ELEMENT AND FINITE VOLUME METHODS

    SciTech Connect

    Taylor, G.; Dong, C.; Sun, S.

    2010-03-18

    A mathematical model for contaminant species passing through fractured porous media is presented. In the numerical model, we combine two locally conservative methods, i.e., the mixed finite element (MFE) and finite volume methods. An adaptive triangular mesh is used for effective treatment of the fractures. A hybrid MFE method is employed to provide an accurate approximation of the velocity field for both the fractures and the matrix, which is crucial to the convection part of the transport equation. The finite volume method and the standard MFE method are used to approximate the convection and dispersion terms, respectively. The model is used to investigate the interaction of adsorption with transport and to extract information on effective adsorption distribution coefficients. Numerical examples in different fractured media illustrate the robustness and efficiency of the proposed numerical model.

  1. A Numerical Method for Obtaining Monoenergetic Neutron Flux Distributions and Transmissions in Multiple-Region Slabs

    NASA Technical Reports Server (NTRS)

    Schneider, Harold

    1959-01-01

    This method is investigated for semi-infinite multiple-slab configurations of arbitrary width, composition, and source distribution. Isotropic scattering in the laboratory system is assumed. Isotropic scattering implies that the fraction of neutrons scattered in the i-th volume element or subregion that will make their next collision in the j-th volume element or subregion is the same for all collisions. These so-called "transfer probabilities" between subregions are calculated and used to obtain successive-collision densities from which the flux and transmission probabilities directly follow. For a thick slab with little or no absorption, a successive-collisions technique proves impractical because an unreasonably large number of collisions must be followed in order to obtain the flux. Here the appropriate integral equation is converted into a set of linear simultaneous algebraic equations that are solved for the average total flux in each subregion. When ordinary diffusion theory applies with satisfactory precision in a portion of the multiple-slab configuration, the problem is solved by ordinary diffusion theory, but the flux is plotted only in the region of validity. The angular distribution of neutrons entering the remaining portion is determined from the known diffusion flux and the remaining region is solved by higher order theory. Several procedures for applying the numerical method are presented and discussed. To illustrate the calculational procedure, a symmetrical slab in a vacuum is worked by the numerical, Monte Carlo, and P3 spherical harmonics methods. In addition, an unsymmetrical double-slab problem is solved by the numerical and Monte Carlo methods. The numerical approach proved faster and more accurate in these examples. Adaptation of the method to anisotropic scattering in slabs is indicated, although no example is included in this paper.
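
    The step in which the successive-collision formulation is replaced by simultaneous linear equations can be sketched generically: with a matrix P of transfer probabilities between subregions, a per-collision scattering probability c, and a first-collision source q, the collision densities psi satisfy psi = c P^T psi + q and can be solved for directly. The numbers below are made up for illustration and are not taken from the report.

        import numpy as np

        # Illustrative 4-subregion slab: P[i, j] = probability that a neutron
        # scattered in subregion i makes its next collision in subregion j
        # (row sums below 1, the remainder escaping the slab).
        P = np.array([[0.40, 0.20, 0.05, 0.01],
                      [0.20, 0.40, 0.20, 0.05],
                      [0.05, 0.20, 0.40, 0.20],
                      [0.01, 0.05, 0.20, 0.40]])
        c = 0.95                                     # non-absorption probability
        q = np.array([1.00, 0.60, 0.30, 0.10])       # first-collision source density

        # Collision densities from psi = c * P^T psi + q, solved directly rather
        # than by following successive collisions one at a time.
        psi = np.linalg.solve(np.eye(4) - c * P.T, q)
        print("collision densities per subregion:", psi)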

  2. Simultaneous source-mask optimization: a numerical combining method

    NASA Astrophysics Data System (ADS)

    Mülders, Thomas; Domnenko, Vitaliy; Küchler, Bernd; Klimpel, Thomas; Stock, Hans-Jürgen; Poonawala, Amyn A.; Taravade, Kunal N.; Stanton, William A.

    2010-09-01

    A new method for simultaneous Source-Mask Optimization (SMO) is presented. In order to produce optimum imaging fidelity with respect to exposure latitude, depth of focus (DoF) and mask error enhancement factor (MEEF), the presented method aims to leverage both the available degrees of freedom of a pixelated source and those available for the mask layout. The approach described in this paper is designed to work with dissected mask polygons. The dissection of the mask patterns is performed in advance (before SMO) with the Synopsys Proteus OPC engine, providing the available degrees of freedom for mask pattern optimization. This is similar to mask optimization done for optical proximity correction (OPC). Additionally, however, the illumination source is simultaneously optimized. The SMO approach borrows many of the performance enhancement methods of OPC software for mask correction, but is especially designed to simultaneously optimize a pixelated source shape as is nowadays available in production environments. Designed as a numerical optimization approach, the method is able to assess, in acceptable times, several hundreds of thousands of source-mask combinations for small, critical layout snippets. This allows a global optimization scheme to be applied to the SMO problem, which is expected to better explore the optimization space and thus to yield an improved solution quality compared to local optimization methods. The method is applied to an example system for investigating the impact of source constraints on the SMO results. It is also investigated how well possibly conflicting goals of low MEEF and large DoF can be balanced.

  3. Branch switching at Hopf bifurcation analysis via asymptotic numerical method: Application to nonlinear free vibrations of rotating beams

    NASA Astrophysics Data System (ADS)

    Bekhoucha, Ferhat; Rechak, Said; Duigou, Laëtitia; Cadou, Jean-Marc

    2015-05-01

    This paper deals with the computation of backbone curves bifurcated from a Hopf bifurcation point in the framework of nonlinear free vibrations of rotating flexible beams. The intrinsic and geometrical equations of motion for anisotropic beams subjected to large displacements are used and transformed with Galerkin and harmonic balance methods into one quadratic algebraic equation involving one parameter, the pulsation. The latter is treated with the asymptotic numerical method using Padé approximants. An algorithm, equivalent to the Lyapunov-Schmidt reduction, is proposed to compute the bifurcated branches accurately from a Hopf bifurcation point with a singularity of co-rank 2, related to a conservative and gyroscopic dynamical system steady state, toward a nonlinear periodic state. Numerical tests dealing with clamped, isotropic and composite, rotating beams show the reliability of the proposed method, reinforced by accurate results.

  4. Introduction to finite-difference methods for numerical fluid dynamics

    SciTech Connect

    Scannapieco, E.; Harlow, F.H.

    1995-09-01

    This work is intended to be a beginner's exercise book for the study of basic finite-difference techniques in computational fluid dynamics. It is written for a student level ranging from high-school senior to university senior. Equations are derived from basic principles using algebra. Some discussion of partial-differential equations is included, but knowledge of calculus is not essential. The student is expected, however, to have some familiarity with the FORTRAN computer language, as the syntax of the computer codes themselves is not discussed. Topics examined in this work include: one-dimensional heat flow, one-dimensional compressible fluid flow, two-dimensional compressible fluid flow, and two-dimensional incompressible fluid flow with additions of the equations of heat flow and the k-epsilon model for turbulence transport. Emphasis is placed on numerical instabilities and methods by which they can be avoided, techniques that can be used to evaluate the accuracy of finite-difference approximations, and the writing of the finite-difference codes themselves. Concepts introduced in this work include: flux and conservation, implicit and explicit methods, Lagrangian and Eulerian methods, shocks and rarefactions, donor-cell and cell-centered advective fluxes, compressible and incompressible fluids, the Boussinesq approximation for heat flow, Cartesian tensor notation, the Boussinesq approximation for the Reynolds stress tensor, and the modeling of transport equations. A glossary is provided which defines these and other terms.
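
    In the spirit of the exercise-book topics listed here, the explicit (FTCS) finite-difference scheme for one-dimensional heat flow is sketched below, including the stability restriction r = alpha*dt/dx^2 <= 1/2 that the discussion of numerical instabilities refers to. The book itself works in FORTRAN; the Python below is only a compact stand-in with made-up grid and boundary values.

        import numpy as np

        # Explicit (FTCS) scheme for the 1-D heat equation T_t = alpha * T_xx,
        # fixed temperatures at both ends. Stable only for r = alpha*dt/dx**2 <= 0.5.
        n, alpha = 51, 1.0
        dx = 1.0 / (n - 1)
        r = 0.4                                        # chosen below the stability limit
        dt = r * dx**2 / alpha

        T = np.zeros(n)
        T[0], T[-1] = 1.0, 0.0                         # boundary temperatures

        for _ in range(10000):
            T[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])

        # The long-time solution approaches the linear profile between the ends
        print("max deviation from linear profile:",
              np.abs(T - np.linspace(1.0, 0.0, n)).max())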

  5. Advanced numerical methods and software approaches for semiconductor device simulation

    SciTech Connect

    CAREY,GRAHAM F.; PARDHANANI,A.L.; BOVA,STEVEN W.

    2000-03-23

    In this article the authors concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of upwind and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov-Galerkin (SUPG) methods, entropy variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review, and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. The authors have included numerical examples from their recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area, and the authors emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, they briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as that in the SANDIA National Laboratory framework SIERRA.

  6. a Numerical Method for Stability Analysis of Pinned Flexible Mechanisms

    NASA Astrophysics Data System (ADS)

    Beale, D. G.; Lee, S. W.

    1996-05-01

    A technique is presented to investigate the stability of mechanisms with pin-jointed flexible members. The method relies on a special floating frame from which elastic link co-ordinates are defined. Energies are easily developed for use in a Lagrange equation formulation, leading to a set of non-linear and mixed ordinary differential-algebraic equations of motion with constraints. Stability and bifurcation analysis is handled using a numerical procedure (generalized co-ordinate partitioning) that avoids the tedious and difficult task of analytically reducing the system of equations to a number equalling the system degrees of freedom. The proposed method was then applied to (1) a slider-crank mechanism with a flexible connecting rod and crank of constant rotational speed, and (2) a four-bar linkage with a flexible coupler with a constant speed crank. In both cases, a single pinned-pinned beam bending mode is employed to develop resonance curves and stability boundaries in the crank length-crank speed parameter plane. Flip and fold bifurcations are common occurrences in both mechanisms. The accuracy of the proposed method was also verified by comparison with previous experimental results [1].

  7. Numerical Simulations of Granular Dynamics: Method and Tests

    NASA Astrophysics Data System (ADS)

    Richardson, Derek C.; Walsh, K. J.; Murdoch, N.; Michel, P.; Schwartz, S. R.

    2010-10-01

    We present a new particle-based numerical method for the simulation of granular dynamics, with application to motions of particles (regolith) on small solar system bodies and planetary surfaces [1]. The method employs the parallel N-body tree code pkdgrav [2] to search for collisions and compute particle trajectories. Particle confinement is achieved by arbitrary combinations of four provided wall primitives, namely infinite plane, finite disk, infinite cylinder, and finite cylinder, and degenerate cases of these. Various wall movements, including translation, oscillation, and rotation, are supported. Several tests of the method are presented, including a model granular "atmosphere" that achieves correct energy equipartition, and a series of tumbler simulations that compare favorably with actual laboratory experiments [3]. DCR and SRS acknowledge NASA Grant No. NNX08AM39G and NSF Grant No. AST0524875; KJW, the Poincaré Fellowship at OCA; NM, Thales Alenia Space and The Open University; and PM and NM, the French Programme National de Planétologie. References: [1] Richardson et al. (2010), Icarus, submitted; [2] Cf. Richardson et al. (2009), P&SS 57, 183 and references therein; [3] Brucks et al. (2007), PRE 75, 032301-1-032301-4.

  8. Advanced Numerical Methods and Software Approaches for Semiconductor Device Simulation

    DOE PAGESBeta

    Carey, Graham F.; Pardhanani, A. L.; Bova, S. W.

    2000-01-01

    In this article we concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of upwind and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov-Galerkin (SUPG) methods, entropy variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. We have included numerical examples from our recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area and we emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, we briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as those in the SANDIA National Laboratory framework SIERRA.

  9. Petermann I and II spot size: Accurate semi analytical description involving Nelder-Mead method of nonlinear unconstrained optimization and three parameter fundamental modal field

    NASA Astrophysics Data System (ADS)

    Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal

    2013-01-01

    A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to arrive at an accurate solution that predicts various propagation parameters of graded-index fibers with less computational burden than numerical methods. In our semi-analytical formulation, the core parameter U, which is usually uncertain, noisy or even discontinuous, is optimized by the Nelder-Mead method of nonlinear unconstrained minimization, an efficient and compact direct-search method that does not need any derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing a variational technique, Petermann I and II spot sizes have been evaluated for triangular- and trapezoidal-index fibers with the proposed fundamental modal field. It has been demonstrated that the results of the proposed solution match the numerical results over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
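
    The Nelder-Mead step itself is routine to reproduce with standard tools; the sketch below minimizes a stand-in scalar objective with SciPy's derivative-free Nelder-Mead option. The objective and its three parameters are placeholders, not the paper's modal-field merit function.

        import numpy as np
        from scipy.optimize import minimize

        def objective(params):
            """Placeholder for the scalar merit function of the modal-field ansatz;
            the paper optimizes the core parameter U plus two shape parameters,
            whereas this is just a smooth stand-in with a known minimum."""
            u, a1, a2 = params
            return (u - 1.5) ** 2 + 0.5 * (a1 - 0.3) ** 2 + 0.1 * (a2 + 0.7) ** 2

        result = minimize(objective, x0=[1.0, 0.0, 0.0], method="Nelder-Mead",
                          options={"xatol": 1e-8, "fatol": 1e-8})
        print(result.x, result.fun)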

  10. Device and method for accurately measuring concentrations of airborne transuranic isotopes

    DOEpatents

    McIsaac, Charles V.; Killian, E. Wayne; Grafwallner, Ervin G.; Kynaston, Ronnie L.; Johnson, Larry O.; Randolph, Peter D.

    1996-01-01

    An alpha continuous air monitor (CAM) with two silicon alpha detectors and three sample collection filters is described. This alpha CAM design provides continuous sampling and also measures the cumulative transuranic (TRU), i.e., plutonium and americium, activity on the filter, and thus provides a more accurate measurement of airborne TRU concentrations than can be accomplished using a single fixed sample collection filter and a single silicon alpha detector.

  11. Device and method for accurately measuring concentrations of airborne transuranic isotopes

    DOEpatents

    McIsaac, C.V.; Killian, E.W.; Grafwallner, E.G.; Kynaston, R.L.; Johnson, L.O.; Randolph, P.D.

    1996-09-03

    An alpha continuous air monitor (CAM) with two silicon alpha detectors and three sample collection filters is described. This alpha CAM design provides continuous sampling and also measures the cumulative transuranic (TRU), i.e., plutonium and americium, activity on the filter, and thus provides a more accurate measurement of airborne TRU concentrations than can be accomplished using a single fixed sample collection filter and a single silicon alpha detector. 7 figs.

  12. THE EVALUATION OF METHODS FOR CREATING DEFENSIBLE, REPEATABLE, OBJECTIVE AND ACCURATE TOLERANCE VALUES

    EPA Science Inventory

    In the field of bioassessment, tolerance has traditionally referred to the degree to which organisms can withstand environmental degradation. This concept has been around for many years and its use is widespread. In numerous cases, tolerance values (TVs) have been assigned to i...

  13. Numerical methods for solving moment equations in kinetic theory of neuronal network dynamics

    NASA Astrophysics Data System (ADS)

    Rangan, Aaditya V.; Cai, David; Tao, Louis

    2007-02-01

    Recently developed kinetic theory and related closures for neuronal network dynamics have been demonstrated to be a powerful theoretical framework for investigating coarse-grained dynamical properties of neuronal networks. The moment equations arising from the kinetic theory are a system of (1 + 1)-dimensional nonlinear partial differential equations (PDE) on a bounded domain with nonlinear boundary conditions. The PDEs themselves are self-consistently specified by parameters which are functions of the boundary values of the solution. The moment equations can be stiff in space and time. Numerical methods are presented here for efficiently and accurately solving these moment equations. The essential ingredients in our numerical methods include: (i) the system is discretized in time with an implicit Euler method within a spectral deferred correction framework; therefore, the PDEs of the kinetic theory are reduced to a sequence, in time, of boundary value problems (BVPs) with nonlinear boundary conditions; (ii) a set of auxiliary parameters is introduced to recast the original BVP with nonlinear boundary conditions as BVPs with linear boundary conditions, with additional algebraic constraints on the auxiliary parameters; (iii) a careful combination of two Newton iterates for the nonlinear BVP with linear boundary conditions, interlaced with a Newton iterate for solving the associated algebraic constraints, is constructed to achieve quadratic convergence for obtaining the solutions with self-consistent parameters. It is shown that a simple fixed-point iteration can only achieve linear convergence for the self-consistent parameters. The practicability and efficiency of our numerical methods for solving the moment equations of the kinetic theory are illustrated with numerical examples. It is further demonstrated that the moment equations derived from the kinetic theory of neuronal network dynamics can very well capture the coarse-grained dynamical properties of

  14. Tensor numerical methods in quantum chemistry: from Hartree-Fock to excitation energies.

    PubMed

    Khoromskaia, Venera; Khoromskij, Boris N

    2015-12-21

    We review the recent successes of the grid-based tensor numerical methods and discuss their prospects in real-space electronic structure calculations. These methods, based on the low-rank representation of multidimensional functions and integral operators, first appeared as an accurate tensor calculus for the 3D Hartree potential using 1D-complexity operations, and have evolved into an entirely grid-based, tensor-structured 3D Hartree-Fock eigenvalue solver. It benefits from tensor calculation of the core Hamiltonian and two-electron integrals (TEI) in O(n log n) complexity using the rank-structured approximation of basis functions, electron densities and convolution integral operators, all represented on 3D n × n × n Cartesian grids. The algorithm for calculating the TEI tensor in the form of a Cholesky decomposition is based on multiple factorizations using an algebraic 1D "density fitting" scheme, which yields an almost irreducible number of product basis functions involved in the 3D convolution integrals, depending on a threshold ε > 0. The basis functions are not restricted to separable Gaussians, since the analytical integration is substituted by high-precision tensor-structured numerical quadratures. Tensor approaches to post-Hartree-Fock calculations for the MP2 energy correction and for the Bethe-Salpeter excitation energies, based on low-rank factorizations and the reduced basis method, were recently introduced. Another direction is towards a tensor-based Hartree-Fock numerical scheme for finite lattices, where one of the numerical challenges is the summation of the electrostatic potentials of a large number of nuclei. The 3D grid-based tensor method for calculating a potential sum on an L × L × L lattice requires computational work linear in L, that is O(L), instead of the usual O(L^3 log L) scaling of Ewald-type approaches. PMID:26016539

  15. Comparison of Several Numerical Methods for Simulation of Compressible Shear Layers

    NASA Technical Reports Server (NTRS)

    Kennedy, Christopher A.; Carpenter, Mark H.

    1997-01-01

    An investigation is conducted on several numerical schemes for use in the computation of two-dimensional, spatially evolving, laminar, variable-density compressible shear layers. Schemes with various temporal accuracies and arbitrary spatial accuracy for both inviscid and viscous terms are presented and analyzed. All integration schemes use explicit or compact finite-difference derivative operators. Three classes of schemes are considered: an extension of MacCormack's original second-order temporally accurate method, a new third-order variant of the schemes proposed by Rusanov and by Kutler, Lomax, and Warming (RKLW), and third- and fourth-order Runge-Kutta schemes. In each scheme, stability and formal accuracy are considered for the interior operators on the convection-diffusion equation U_t + a U_x = α U_xx. Accuracy is also verified on the nonlinear problem U_t + F_x = 0. Numerical treatments of various orders of accuracy are chosen and evaluated for asymptotic stability. Formally accurate boundary conditions are derived for several sixth- and eighth-order central-difference schemes. Damping of high wave-number data is accomplished with explicit filters of arbitrary order. Several schemes are used to compute variable-density compressible shear layers, where regions of large gradients exist.

  16. Energetic pulses in exciton-phonon molecular chains and conservative numerical methods for quasilinear Hamiltonian systems.

    PubMed

    Lemesurier, Brenton

    2013-09-01

    The phenomenon of coherent energetic pulse propagation in exciton-phonon molecular chains such as α-helix protein is studied using an ODE system model of Davydov-Scott type, both with numerical studies using a new unconditionally stable fourth-order accurate energy-momentum conserving time discretization and with analytical explanation of the main numerical observations. Impulsive initial data associated with initial excitation of a single amide-I vibration by the energy released by ATP hydrolysis are used as well as the best current estimates of physical parameter values. In contrast to previous studies based on a proposed long-wave approximation by the nonlinear Schrödinger (NLS) equation and focusing on initial data resembling the soliton solutions of that equation, the results here instead lead to approximation by the third derivative nonlinear Schrödinger equation, giving a far better fit to observed behavior. A good part of the behavior is indeed explained well by the linear part of that equation, the Airy PDE, while other significant features do not fit any PDE approximation but are instead explained well by a linearized analysis of the ODE system. A convenient method is described for construction of the highly stable, accurate conservative time discretizations used, with proof of its desirable properties for a large class of Hamiltonian systems, including a variety of molecular models. PMID:24125294

  17. Numerical Methods for Forward and Inverse Problems in Discontinuous Media

    SciTech Connect

    Chartier, Timothy P.

    2011-03-08

    The research emphasis under this grant's funding is in the area of algebraic multigrid methods. The research has two main branches: 1) exploring interdisciplinary applications in which algebraic multigrid can make an impact and 2) extending the scope of algebraic multigrid methods with algorithmic improvements that are based on strong analysis. The work in interdisciplinary applications falls primarily in the field of biomedical imaging. Work under this grant demonstrated the effectiveness and robustness of multigrid for solving linear systems that result from highly heterogeneous finite element method models of the human head. The results in this work also give promise to medical advances possible with software that may be developed. Research to extend the scope of algebraic multigrid has been focused in several areas. In collaboration with researchers at the University of Colorado, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory, the PI developed an adaptive multigrid with subcycling via complementary grids. This method has very cheap computing costs per iterate and is showing promise as a preconditioner for conjugate gradient. Recent work with Los Alamos National Laboratory concentrates on developing algorithms that take advantage of the recent advances in adaptive multigrid research. The results of the various efforts in this research could ultimately have direct use and impact for researchers in a wide variety of applications, including astrophysics, neuroscience, contaminant transport in porous media, bi-domain heart modeling, modeling of tumor growth, and flow in heterogeneous porous media. This work has already led to basic advances in computational mathematics and numerical linear algebra and will continue to do so into the future.

  18. Numerical studies of the flux-to-current ratio method in the KIPT neutron source facility

    SciTech Connect

    Cao, Y.; Gohar, Y.; Zhong, Z.

    2013-07-01

    The reactivity of a subcritical assembly has to be monitored continuously in order to assure its safe operation. In this paper, the flux-to-current ratio method has been studied as an approach to provide the on-line reactivity measurement of the subcritical system. Monte Carlo numerical simulations have been performed using the KIPT neutron source facility model. It is found that the reactivity obtained from the flux-to-current ratio method is sensitive to the detector position in the subcritical assembly. However, if multiple detectors are located about 12 cm above the graphite reflector and 54 cm radially, the technique is shown to be very accurate in determining the k_eff of this facility in the range of 0.75 to 0.975. (authors)
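
    A hedged reading of the flux-to-current ratio idea, for illustration only (the facility analysis relies on Monte Carlo detector responses, and the actual calibration procedure is not given in the abstract): in a source-driven subcritical assembly the detector response per unit beam current scales roughly as 1/(1 - k_eff), so a single calibration at a known k_eff converts the measured ratio into a k_eff estimate. All numbers below are hypothetical.

      def calibrate(flux_over_current_ref, keff_ref):
          # Source-driven multiplication scales ~ 1/(1 - k_eff), so flux/current = c / (1 - k_eff);
          # calibrate the detector constant c at a reference state with known k_eff.
          return flux_over_current_ref * (1.0 - keff_ref)

      def keff_from_ratio(flux_over_current, c):
          return 1.0 - c / flux_over_current

      # Hypothetical numbers for illustration only
      c = calibrate(flux_over_current_ref=4.0e6, keff_ref=0.95)   # reference k_eff assumed known
      print(keff_from_ratio(3.0e6, c))                            # -> estimated k_eff ~ 0.933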

  19. Optimal principal component analysis-based numerical phase aberration compensation method for digital holography.

    PubMed

    Sun, Jiasong; Chen, Qian; Zhang, Yuzhen; Zuo, Chao

    2016-03-15

    In this Letter, an accurate and highly efficient numerical phase aberration compensation method is proposed for digital holographic microscopy. Considering that most of the phase aberration resides in the low spatial frequency domain, a Fourier-domain mask is introduced to extract the aberrated frequency components, while rejecting components that are unrelated to the phase aberration estimation. Principal component analysis (PCA) is then performed only on the reduced-sized spectrum, and the aberration terms can be extracted from the first principal component obtained. Finally, by oversampling the reduced-sized aberration terms, the precise phase aberration map is obtained and can be compensated by multiplication with its conjugate. Because the phase aberration is estimated from the limited but more relevant raw data, the compensation precision is improved while the computation time is significantly reduced. Experimental results demonstrate that our proposed technique achieves both high compensation accuracy and robustness compared with other developed compensation methods. PMID:26977692
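
    The sketch below is a simplified NumPy illustration of the workflow described, under the assumption that a plain low-pass Fourier mask plus a rank-1 SVD (the first principal component) can stand in for the paper's reduced-spectrum PCA and oversampling steps; the mask radius and the synthetic quadratic aberration are placeholders.

      import numpy as np

      def compensate_aberration(field, mask_radius=16):
          """field: complex reconstructed object wave (2D array)."""
          ny, nx = field.shape
          # 1) Keep only low spatial frequencies, where most of the aberration resides.
          F = np.fft.fftshift(np.fft.fft2(field))
          y, x = np.ogrid[-ny // 2:ny - ny // 2, -nx // 2:nx - nx // 2]
          mask = (x**2 + y**2) <= mask_radius**2
          aberration_field = np.fft.ifft2(np.fft.ifftshift(F * mask))
          # 2) Rank-1 (first principal component) estimate of the separable aberration phase.
          U, s, Vh = np.linalg.svd(aberration_field, full_matrices=False)
          rank1 = s[0] * np.outer(U[:, 0], Vh[0, :])
          aberration_phase = np.angle(rank1)
          # 3) Compensate by multiplying with the conjugate of the estimated aberration.
          return field * np.exp(-1j * aberration_phase)

      # Synthetic check: a quadratic phase aberration over a flat object is largely removed.
      ny, nx = 256, 256
      yy, xx = np.mgrid[0:ny, 0:nx]
      aberration = np.exp(1j * 2e-4 * ((xx - nx / 2)**2 + (yy - ny / 2)**2))
      corrected = compensate_aberration(aberration)
      print(np.std(np.angle(corrected)))   # much smaller than the uncorrected phase spread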

  20. Time-Accurate Local Time Stepping and High-Order Time CESE Methods for Multi-Dimensional Flows Using Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary

    2013-01-01

    With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most currently available algorithms and computational fluid dynamics codes provide. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.

  1. A Simple yet Accurate Method for Students to Determine Asteroid Rotation Periods from Fragmented Light Curve Data

    ERIC Educational Resources Information Center

    Beare, R. A.

    2008-01-01

    Professional astronomers use specialized software not normally available to students to determine the rotation periods of asteroids from fragmented light curve data. This paper describes a simple yet accurate method based on Microsoft Excel[R] that enables students to find periods in asteroid light curve and other discontinuous time series data of…
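
    The paper's own procedure is Excel-based and is not reproduced here; as a generic alternative for fragmented light curves, the sketch below folds the data at trial periods and scores each fold by a string-length criterion (summed squared brightness jumps between phase-adjacent points), picking the period that gives the smoothest fold. The synthetic times, magnitudes, and trial grid are placeholders.

      import numpy as np

      def string_length(t, mag, period):
          """Sum of squared magnitude jumps between phase-ordered points (smaller = smoother fold)."""
          phase = (t / period) % 1.0
          order = np.argsort(phase)
          return np.sum(np.diff(mag[order]) ** 2)

      def best_period(t, mag, trial_periods):
          scores = np.array([string_length(t, mag, p) for p in trial_periods])
          return trial_periods[np.argmin(scores)]

      # Synthetic fragmented light curve: true rotation period 0.3 days, gappy sampling.
      rng = np.random.default_rng(0)
      t = np.sort(rng.uniform(0.0, 10.0, 300))
      mag = 0.2 * np.sin(2 * np.pi * t / 0.3) + 0.01 * rng.normal(size=t.size)
      trials = np.linspace(0.1, 1.0, 5000)
      print(best_period(t, mag, trials))   # expected to be close to 0.3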

  2. Hybrid Numerical Methods for Multiscale Simulations of Subsurface Biogeochemical Processes

    SciTech Connect

    Scheibe, Timothy D.; Tartakovsky, Alexandre M.; Tartakovsky, Daniel M.; Redden, George D.; Meakin, Paul

    2007-08-01

    Many subsurface flow and transport problems of importance today involve coupled non-linear flow, transport, and reaction in media exhibiting complex heterogeneity. In particular, problems involving biological mediation of reactions fall into this class of problems. Recent experimental research has revealed important details about the physical, chemical, and biological mechanisms involved in these processes at a variety of scales ranging from molecular to laboratory scales. However, it has not been practical or possible to translate detailed knowledge at small scales into reliable predictions of field-scale phenomena important for environmental management applications. A large assortment of numerical simulation tools have been developed, each with its own characteristic scale including molecular (e.g., molecular dynamics), microbial (e.g., cellular automata or particle individual-based models), pore (e.g., lattice-Boltzmann, pore network models, and discrete particle methods such as smoothed particle hydrodynamics) and continuum scales (e.g., traditional partial differential equations solved by finite difference or finite element methods). While many problems can be effectively addressed by one of these models at a single scale, some problems may require explicit integration of models across multiple scales. We are developing a hybrid multi-scale subsurface reactive transport modeling framework that integrates models with diverse representations of physics, chemistry and biology at different scales (sub-pore, pore and continuum). The modeling framework is being designed to take advantage of advanced computational technologies including parallel code components using the Common Component Architecture, parallel solvers, gridding, data and workflow management, and visualization. This paper describes the specific methods/codes being used at each scale, techniques used to directly and adaptively couple across model scales, and preliminary results of application to a

  3. Numerical Weather Predictions Evaluation Using Spatial Verification Methods

    NASA Astrophysics Data System (ADS)

    Tegoulias, I.; Pytharoulis, I.; Kotsopoulos, S.; Kartsios, S.; Bampzelis, D.; Karacostas, T.

    2014-12-01

    In recent years, high-resolution numerical weather prediction simulations have been used to examine meteorological events with increased convective activity. Traditional verification methods do not provide the desired level of information to evaluate those high-resolution simulations. To address those limitations, new spatial verification methods have been proposed. In the present study, an attempt is made to estimate the ability of the WRF model (WRF-ARW ver. 3.5.1) to reproduce selected days with high convective activity during the year 2010 using those feature-based verification methods. Three model domains, covering Europe, the Mediterranean Sea and northern Africa (d01), the wider area of Greece (d02) and central Greece - Thessaly region (d03) are used at horizontal grid spacings of 15 km, 5 km and 1 km, respectively. By alternating microphysics (Ferrier, WSM6, Goddard), boundary layer (YSU, MYJ) and cumulus convection (Kain-Fritsch, BMJ) schemes, a set of twelve model setups is obtained. The results of those simulations are evaluated against data obtained using a C-band (5 cm) radar located at the centre of the innermost domain. Spatial characteristics are well captured but with a variable time lag between simulation results and radar data. Acknowledgements: This research is co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness and Entrepreneurship" and Regions in Transition (OPC II, NSRF 2007-2013).

  4. Correcting errors in the optical path difference in Fourier spectroscopy: a new accurate method.

    PubMed

    Kauppinen, J; Kärkköinen, T; Kyrö, E

    1978-05-15

    A new computational method for calculating and correcting the errors of the optical path difference in Fourier spectrometers is presented. This method requires only a one-sided interferogram and a single well-separated line in the spectrum. The method also cancels out the linear phase error. The practical theory of the method is included, and an example of the progress of the method is illustrated by simulations. The method is also verified by several simulations in order to estimate its usefulness and accuracy. An example of the use of this method in practice is also given. PMID:20198027

  5. Anatomically accurate high resolution modeling of human whole heart electromechanics: A strongly scalable algebraic multigrid solver method for nonlinear deformation

    PubMed Central

    Augustin, Christoph M.; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J.; Niederer, Steven A.; Haase, Gundolf; Plank, Gernot

    2016-01-01

    Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the employed spatial discretization methods which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities, but, even more importantly, the limited efficiency of the prevailing solver techniques which are not sufficiently scalable to deal with the incurring increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate

  6. Anatomically accurate high resolution modeling of human whole heart electromechanics: A strongly scalable algebraic multigrid solver method for nonlinear deformation

    NASA Astrophysics Data System (ADS)

    Augustin, Christoph M.; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J.; Niederer, Steven A.; Haase, Gundolf; Plank, Gernot

    2016-01-01

    Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the employed spatial discretization methods which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities, but, even more importantly, the limited efficiency of the prevailing solver techniques which are not sufficiently scalable to deal with the incurring increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate

  7. Efficient 3D Acoustic Numerical modeling in the Logarithmic-grid using the Expanding Domain Method

    NASA Astrophysics Data System (ADS)

    Hong, B. R.; Chung, W.; Ko, H.; Bae, H. S.

    2015-12-01

    In the numerical modeling of seismic wave propagation by the use of a discrete computing domain, dispersion analysis is preceded by the determination of the spatial grid spacings in order to ensure accurate modeling results. Grid spacing is a function of wavelength, and the wavelength depends on the minimum velocity and maximum source frequency. Therefore, as the frequency increases, the number of grid points increases and this leads to a heavy computational burden. In order to reduce the computing complexity, coordinate transformation techniques such as Riemannian coordinates and logarithmic grid sets are proposed. Riemannian wave-field extrapolation is a way to reformulate the wave-field by expressing it in Riemannian coordinates. In the logarithmic grid, grid spacing changes logarithmically, so this enables us to reduce the number of grids compared to a conventional grid set. Furthermore, this could completely remove boundary reflections by extending the model dimensions. However, numerical modeling in the logarithmic grid is still inefficient because it is performed for the whole model at every individual time step. In this study we applied the expanding domain method to the logarithmic modeling in order to improve computational efficiency. This method, based on amplitude comparison, excludes computations for zero wave-fields by considering a non-zero domain boundary. Numerical examples demonstrated that our new modeling method enhances computational efficiency while maintaining accuracy compared with conventional modeling methods. The efficiency gain was particularly pronounced for larger and higher-dimensional models. Our new modeling technique could also be applied to the generation of underwater target echo signals requiring high frequency analysis.

  8. Trigonometrically fitted two step hybrid method for the numerical integration of second order IVPs

    NASA Astrophysics Data System (ADS)

    Monovasilis, Th.; Kalogiratou, Z.; Simos, T. E.

    2016-06-01

    In this work we consider the numerical integration of second order ODEs where the first derivative is missing. We construct trigonometrically fitted two step hybrid methods. We apply the new methods on the numerical integration of several test problems.

  9. Novel methods for accurate identification, isolation, and genomic analysis of symptomatic microenvironments in atherosclerotic arteries.

    PubMed

    Slevin, Mark; Baldellou, Maribel; Hill, Elspeth; Alexander, Yvonne; McDowell, Garry; Murgatroyd, Christopher; Carroll, Michael; Degens, Hans; Krupinski, Jerzy; Rovira, Norma; Chowdhury, Mohammad; Serracino-Inglott, Ferdinand; Badimon, Lina

    2014-01-01

    A challenge facing surgeons is identification and selection of patients for carotid endarterectomy or coronary artery bypass/surgical intervention. While some patients with atherosclerosis develop unstable plaques liable to undergo thrombosis, others form more stable plaques and are asymptomatic. Identification of the cellular signaling mechanisms associated with production of the inflammatory, hemorrhagic lesions of mature heterogenic plaques will help significantly in our understanding of the differences in microenvironment associated with development of regions susceptible to rupture and thrombosis and may help to predict the risk of plaque rupture and guide surgical intervention to patients who will most benefit. Here, we demonstrate detailed and novel methodologies for successful and, more importantly, accurate and reproducible extraction, sampling, and analysis of micro-regions in stable and unstable coronary/carotid arteries. This information can be applied to samples from other origins and so should be useful for scientists working with micro-isolation techniques in all fields of biomedical science. PMID:24510873

  10. Retention projection enables accurate calculation of liquid chromatographic retention times across labs and methods.

    PubMed

    Abate-Pella, Daniel; Freund, Dana M; Ma, Yan; Simón-Manso, Yamil; Hollender, Juliane; Broeckling, Corey D; Huhman, David V; Krokhin, Oleg V; Stoll, Dwight R; Hegeman, Adrian D; Kind, Tobias; Fiehn, Oliver; Schymanski, Emma L; Prenni, Jessica E; Sumner, Lloyd W; Boswell, Paul G

    2015-09-18

    Identification of small molecules by liquid chromatography-mass spectrometry (LC-MS) can be greatly improved if the chromatographic retention information is used along with mass spectral information to narrow down the lists of candidates. Linear retention indexing remains the standard for sharing retention data across labs, but it is unreliable because it cannot properly account for differences in the experimental conditions used by various labs, even when the differences are relatively small and unintentional. On the other hand, an approach called "retention projection" properly accounts for many intentional differences in experimental conditions, and when combined with a "back-calculation" methodology described recently, it also accounts for unintentional differences. In this study, the accuracy of this methodology is compared with linear retention indexing across eight different labs. When each lab ran a test mixture under a range of multi-segment gradients and flow rates they selected independently, retention projections averaged 22-fold more accurate for uncharged compounds because they properly accounted for these intentional differences, which were more pronounced in steep gradients. When each lab ran the test mixture under nominally the same conditions, which is the ideal situation to reproduce linear retention indices, retention projections still averaged 2-fold more accurate because they properly accounted for many unintentional differences between the LC systems. To the best of our knowledge, this is the most successful study to date aiming to calculate (or even just to reproduce) LC gradient retention across labs, and it is the only study in which retention was reliably calculated under various multi-segment gradients and flow rates chosen independently by labs. PMID:26292625

  11. Assessment of numerical methods for the solution of fluid dynamics equations for nonlinear resonance systems

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Yang, H. Q.

    1989-01-01

    The capability of accurate nonlinear flow analysis of resonance systems is essential in many problems, including combustion instability. Classical numerical schemes are either too diffusive or too dispersive especially for transient problems. In the last few years, significant progress has been made in the numerical methods for flows with shocks. The objective was to assess advanced shock capturing schemes on transient flows. Several numerical schemes were tested including TVD, MUSCL, ENO, FCT, and Riemann Solver Godunov type schemes. A systematic assessment was performed on scalar transport, Burgers' and gas dynamic problems. Several shock capturing schemes are compared on fast transient resonant pipe flow problems. A system of 1-D nonlinear hyperbolic gas dynamics equations is solved to predict propagation of finite amplitude waves, the wave steepening, formation, propagation, and reflection of shocks for several hundred wave cycles. It is shown that high accuracy schemes can be used for direct, exact nonlinear analysis of combustion instability problems, preserving high harmonic energy content for long periods of time.

  12. A Numerical Method for Determining Diffusivity from Annealing Experiments

    NASA Astrophysics Data System (ADS)

    Harris-Kuhlman, K. R.; Kulcinski, G. L.

    1998-12-01

    Terrestrial analogs of lunar ilmenite (FeTiO3) have been implanted with solar-wind energy 4He at 4 keV and 3He at 3 keV using Plasma Source Ion Implantation (PSII). Isochronal annealing of the samples revealed thermally induced 4He evolution similar to the helium release of the Apollo 11 regoliths reported by Pepin et al. [1970]. These annealing experiments are analyzed with a three-dimensional numerical method based on Fick's law for diffusion. An iterative method is used to calculate the diffusivity. The code uses an assumed diffusivity to calculate the amount of gas released during a temperature step. The initial depth profile of the implanted species is generated using the TRIM electronic stopping code [Ziegler, 1996]. The calculated value is compared to the measured value and a linear regression is used to calculate a new diffusivity until there is convergence within a specified tolerance level. The diffusivity as a function of temperature is then fitted to an Arrhenius equation. Analysis of results for 4 keV 4He on ilmenite shows two distinct regions of Arrhenius behavior with activation energies of 0.5 +/- 0.1 eV at temperatures below 800 deg C and 1.5 +/- 0.2 eV at temperatures from 800 deg C to 1100 deg C. Pepin, R. O., L. E. Nyquist, D. Phinney, and D. C. Black (1970) "Rare Gases in Apollo 11 Lunar Material," Proceedings of the Apollo 11 Lunar Science Conference, 2, pp. 1435-1454. Ziegler, J. P. (1996) SRIM Instruction Manual: The Stopping and Range of Ions in Matter, (Yorktown, New York: IBM - Research); based on Ziegler, J. P., J. P. Biersack and U. Littmark, The Stopping and Range of Ions in Solids, (New York: Pergamon Press, 1985).
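
    The paper inverts a three-dimensional Fick's-law model with a TRIM-generated implantation profile; the sketch below illustrates the same iterative idea on a much simpler one-dimensional plane-sheet model (Crank's series solution for fractional release), solving for the diffusivity at each temperature step by bisection and then fitting an Arrhenius law. All sample values are placeholders, and each annealing step is treated independently for simplicity.

      import numpy as np

      def fractional_release(D, t, L, nterms=200):
          """Crank plane-sheet solution: fraction released after time t from a sheet of half-thickness L."""
          n = np.arange(nterms)
          s = np.sum(np.exp(-(2 * n + 1) ** 2 * np.pi ** 2 * D * t / (4 * L ** 2)) / (2 * n + 1) ** 2)
          return 1.0 - 8.0 / np.pi ** 2 * s

      def solve_D(f_measured, t, L, lo=1e-22, hi=1e-10, iters=100):
          """Bisection on D (log scale) until the modeled release matches the measured release."""
          for _ in range(iters):
              mid = np.sqrt(lo * hi)
              if fractional_release(mid, t, L) < f_measured:
                  lo = mid
              else:
                  hi = mid
          return np.sqrt(lo * hi)

      # Hypothetical isochronal-annealing data: (temperature K, measured release fraction per 600 s step)
      steps = [(600.0, 0.05), (700.0, 0.15), (800.0, 0.40), (900.0, 0.75)]
      L = 50e-9                       # placeholder implantation depth scale, m
      D_vals = np.array([solve_D(f, 600.0, L) for _, f in steps])
      T_vals = np.array([T for T, _ in steps])

      # Arrhenius fit: ln D = ln D0 - Ea / (kB T)
      kB = 8.617e-5                   # eV/K
      slope, intercept = np.polyfit(1.0 / T_vals, np.log(D_vals), 1)
      print("activation energy (eV):", -slope * kB, "D0 (m^2/s):", np.exp(intercept))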

  13. Spectrophotometric methods for the evaluation of acidity constants-I Numerical methods for single equilibria.

    PubMed

    Asuero, A G; Navas, M J; Jiminez-Trillo, J L

    1986-02-01

    The spectrophotometric methods applicable to the numerical evaluation of acidity constants of monobasic acids are briefly reviewed. The equations are presented in a form suitable for easy calculation with a programmable pocket calculator. The aim of this paper is to cover a gap in the analytical education literature. PMID:18964064

  14. Active Problem Solving and Applied Research Methods in a Graduate Course on Numerical Methods

    ERIC Educational Resources Information Center

    Maase, Eric L.; High, Karen A.

    2008-01-01

    "Chemical Engineering Modeling" is a first-semester graduate course traditionally taught in a lecture format at Oklahoma State University. The course as taught by the author for the past seven years focuses on numerical and mathematical methods as necessary skills for incoming graduate students. Recent changes to the course have included Visual…

  15. Accurate Ionization Potentials and Electron Affinities of Acceptor Molecules IV: Electron-Propagator Methods.

    PubMed

    Dolgounitcheva, O; Díaz-Tinoco, Manuel; Zakrzewski, V G; Richard, Ryan M; Marom, Noa; Sherrill, C David; Ortiz, J V

    2016-02-01

    Comparison of ab initio electron-propagator predictions of vertical ionization potentials and electron affinities of organic, acceptor molecules with benchmark calculations based on the basis set-extrapolated, coupled cluster single, double, and perturbative triple substitution method has enabled identification of self-energy approximations with mean, unsigned errors between 0.1 and 0.2 eV. Among the self-energy approximations that neglect off-diagonal elements in the canonical, Hartree-Fock orbital basis, the P3 method for electron affinities, and the P3+ method for ionization potentials provide the best combination of accuracy and computational efficiency. For approximations that consider the full self-energy matrix, the NR2 methods offer the best performance. The P3+ and NR2 methods successfully identify the correct symmetry label of the lowest cationic state in two cases, naphthalenedione and benzoquinone, where some other methods fail. PMID:26730459

  16. A New Cation-Exchange Method for Accurate Field Speciation of Hexavalent Chromium

    USGS Publications Warehouse

    Ball, James W.; McCleskey, R. Blaine

    2003-01-01

    A new cation-exchange method for field speciation of Cr(VI) has been developed to meet present stringent regulatory standards and to overcome the limitations of existing methods. The new method allows measurement of Cr(VI) concentrations as low as 0.05 micrograms per liter, storage of samples for at least several weeks prior to analysis, and use of readily available analytical instrumentation. The sensitivity, accuracy, and precision of the determination in waters over the pH range of 2 to 11 and Fe concentrations up to 1 milligram per liter are equal to or better than existing methods such as USEPA method 218.6. Time stability of preserved samples is a significant advantage over the 24-hour time constraint specified for USEPA method 218.6.

  17. Accurate Learning with Few Atlases (ALFA): an algorithm for MRI neonatal brain extraction and comparison with 11 publicly available methods

    PubMed Central

    Serag, Ahmed; Blesa, Manuel; Moore, Emma J.; Pataky, Rozalia; Sparrow, Sarah A.; Wilkinson, A. G.; Macnaught, Gillian; Semple, Scott I.; Boardman, James P.

    2016-01-01

    Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases ‘uniformly’ distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course. PMID:27010238

  18. Accurate Learning with Few Atlases (ALFA): an algorithm for MRI neonatal brain extraction and comparison with 11 publicly available methods.

    PubMed

    Serag, Ahmed; Blesa, Manuel; Moore, Emma J; Pataky, Rozalia; Sparrow, Sarah A; Wilkinson, A G; Macnaught, Gillian; Semple, Scott I; Boardman, James P

    2016-01-01

    Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases 'uniformly' distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course. PMID:27010238

  19. Accurate Learning with Few Atlases (ALFA): an algorithm for MRI neonatal brain extraction and comparison with 11 publicly available methods

    NASA Astrophysics Data System (ADS)

    Serag, Ahmed; Blesa, Manuel; Moore, Emma J.; Pataky, Rozalia; Sparrow, Sarah A.; Wilkinson, A. G.; MacNaught, Gillian; Semple, Scott I.; Boardman, James P.

    2016-03-01

    Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases ‘uniformly’ distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course.

  20. An accurate method for the determination of carboxyhemoglobin in postmortem blood using GC-TCD.

    PubMed

    Lewis, Russell J; Johnson, Robert D; Canfield, Dennis V

    2004-01-01

    During the investigation of aviation accidents, postmortem samples from accident victims are submitted to the FAA's Civil Aerospace Medical Institute for toxicological analysis. In order to determine if an accident victim was exposed to an in-flight/postcrash fire or faulty heating/exhaust system, the analysis of carbon monoxide (CO) is conducted. Although our laboratory predominantly uses a spectrophotometric method for the determination of carboxyhemoglobin (COHb), we consider it essential to confirm with a second technique based on a different analytical principle. Our laboratory encountered difficulties with many of our postmortem samples while employing a commonly used GC method. We believed these problems were due to elevated methemoglobin (MetHb) concentration in our specimens. MetHb does not bind CO; therefore, elevated MetHb levels will result in a loss of CO-binding capacity. Because most commonly employed GC methods determine %COHb from a ratio of unsaturated blood to CO-saturated blood, a loss of CO-binding capacity will result in an erroneously high %COHb value. Our laboratory has developed a new GC method for the determination of %COHb that incorporates sodium dithionite, which will reduce any MetHb present to Hb. Using blood controls ranging from 1% to 67% COHb, we found no statistically significant differences between %COHb results from our new GC method and our spectrophotometric method. To validate the new GC method, postmortem samples were analyzed with our existing spectrophotometric method, a GC method commonly used without reducing agent, and our new GC method with the addition of sodium dithionite. As expected, we saw errors up to and exceeding 50% when comparing the unreduced GC results with our spectrophotometric method. With our new GC procedure, the error was virtually eliminated. PMID:14987426

  1. k-Space Image Correlation Spectroscopy: A Method for Accurate Transport Measurements Independent of Fluorophore Photophysics

    PubMed Central

    Kolin, David L.; Ronis, David; Wiseman, Paul W.

    2006-01-01

    We present the theory and application of reciprocal space image correlation spectroscopy (kICS). This technique measures the number density, diffusion coefficient, and velocity of fluorescently labeled macromolecules in a cell membrane imaged on a confocal, two-photon, or total internal reflection fluorescence microscope. In contrast to r-space correlation techniques, we show kICS can recover accurate dynamics even in the presence of complex fluorophore photobleaching and/or “blinking”. Furthermore, these quantities can be calculated without nonlinear curve fitting, or any knowledge of the beam radius of the exciting laser. The number densities calculated by kICS are less sensitive to spatial inhomogeneity of the fluorophore distribution than densities measured using image correlation spectroscopy. We use simulations as a proof-of-principle to show that number densities and transport coefficients can be extracted using this technique. We present calibration measurements with fluorescent microspheres imaged on a confocal microscope, which recover Stokes-Einstein diffusion coefficients, and flow velocities that agree with single particle tracking measurements. We also show the application of kICS to measurements of the transport dynamics of α5-integrin/enhanced green fluorescent protein constructs in a transfected CHO cell imaged on a total internal reflection fluorescence microscope using charge-coupled device area detection. PMID:16861272
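
    A minimal NumPy sketch of the core kICS computation for pure diffusion is given below: spatial FFT of each frame, time correlation of the Fourier components, and a fit of ln[r(k,tau)/r(k,0)] against |k|^2*tau whose slope gives the diffusion coefficient. The synthetic image stack, frame interval, and selected k band are placeholders, and photophysics and detector noise are ignored.

      import numpy as np

      rng = np.random.default_rng(2)

      # Synthetic image stack: point emitters doing Brownian motion, binned into 64x64 pixel frames.
      n_frames, n_part, L, D_true, dt = 200, 400, 64, 0.5, 1.0   # frames, particles, px, px^2/frame, frame time
      pos = rng.uniform(0, L, size=(n_part, 2))
      stack = np.zeros((n_frames, L, L))
      for f in range(n_frames):
          pos = (pos + np.sqrt(2 * D_true * dt) * rng.normal(size=pos.shape)) % L
          idx = pos.astype(int)
          np.add.at(stack[f], (idx[:, 0], idx[:, 1]), 1.0)

      # kICS correlation r(k, tau) = <I(k, t) conj(I(k, t+tau))>_t; for diffusion the ratio to r(k, 0)
      # decays as exp(-|k|^2 D tau), independent of the microscope point-spread function.
      F = np.fft.fft2(stack, axes=(1, 2))
      kx = 2 * np.pi * np.fft.fftfreq(L)
      k2 = kx[:, None] ** 2 + kx[None, :] ** 2
      r0 = np.mean(np.abs(F) ** 2, axis=0)
      sel = (k2 > 0.05) & (k2 < 0.5)                              # band of k with decent statistics

      logratio, k2tau = [], []
      for tau in range(1, 5):
          r_tau = np.real(np.mean(F[:-tau] * np.conj(F[tau:]), axis=0))
          vals = r_tau[sel] / r0[sel]
          good = vals > 0                                         # guard against rare noisy negatives
          logratio.append(np.log(vals[good]))
          k2tau.append((k2[sel] * tau * dt)[good])

      slope, _ = np.polyfit(np.concatenate(k2tau), np.concatenate(logratio), 1)
      print("estimated D:", -slope, "  true D:", D_true)          # slope of ln ratio vs k^2*tau is -D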

  2. Parallel Higher-order Finite Element Method for Accurate Field Computations in Wakefield and PIC Simulations

    SciTech Connect

    Candel, A.; Kabel, A.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.; Ko, K.; /SLAC

    2009-06-19

    Over the past years, SLAC's Advanced Computations Department (ACD), under SciDAC sponsorship, has developed a suite of 3D (2D) parallel higher-order finite element (FE) codes, T3P (T2P) and Pic3P (Pic2P), aimed at accurate, large-scale simulation of wakefields and particle-field interactions in radio-frequency (RF) cavities of complex shape. The codes are built on the FE infrastructure that supports SLAC's frequency domain codes, Omega3P and S3P, to utilize conformal tetrahedral (triangular) meshes, higher-order basis functions and quadratic geometry approximation. For time integration, they adopt an unconditionally stable implicit scheme. Pic3P (Pic2P) extends T3P (T2P) to treat charged-particle dynamics self-consistently using the PIC (particle-in-cell) approach, the first such implementation on a conformal, unstructured grid using Whitney basis functions. Examples from applications to the International Linear Collider (ILC), Positron Electron Project-II (PEP-II), Linac Coherent Light Source (LCLS) and other accelerators will be presented to compare the accuracy and computational efficiency of these codes versus their counterparts using structured grids.

  3. An improved method for accurate and rapid measurement of flight performance in Drosophila.

    PubMed

    Babcock, Daniel T; Ganetzky, Barry

    2014-01-01

    Drosophila has proven to be a useful model system for analysis of behavior, including flight. The initial flight tester involved dropping flies into an oil-coated graduated cylinder; landing height provided a measure of flight performance by assessing how far flies fall before producing enough thrust to make contact with the wall of the cylinder. Here we describe an updated version of the flight tester with four major improvements. First, we added a "drop tube" to ensure that all flies enter the flight cylinder at a similar velocity between trials, eliminating variability between users. Second, we replaced the oil coating with removable plastic sheets coated in Tangle-Trap, an adhesive designed to capture live insects. Third, we use a longer cylinder to enable more accurate discrimination of flight ability. Fourth, we use a digital camera and imaging software to automate the scoring of flight performance. These improvements allow for the rapid, quantitative assessment of flight behavior, useful for large datasets and large-scale genetic screens. PMID:24561810

  4. Spectral neighbor analysis method for automated generation of quantum-accurate interatomic potentials

    SciTech Connect

    Thompson, A.P.; Swiler, L.P.; Trott, C.R.; Foiles, S.M.; Tucker, G.J.

    2015-03-15

    We present a new interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The bispectrum components are the same bond-orientational order parameters employed by the GAP potential [1]. The SNAP potential, unlike GAP, assumes a linear relationship between atom energy and bispectrum components. The linear SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. We demonstrate that a previously unnoticed symmetry property can be exploited to reduce the computational cost of the force calculations by more than one order of magnitude. We present results for a SNAP potential for tantalum, showing that it accurately reproduces a range of commonly calculated properties of both the crystalline solid and the liquid phases. In addition, unlike simpler existing potentials, SNAP correctly predicts the energy barrier for screw dislocation migration in BCC tantalum.
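
    The regression step stated in the abstract (a linear relation between configuration energy and summed per-atom bispectrum components, fitted by weighted least squares) can be sketched as below; random placeholder descriptors stand in for bispectrum components that would in practice be computed by LAMMPS, and only energies are fitted here.

      import numpy as np

      rng = np.random.default_rng(1)

      # Placeholder training set: n_config configurations, each atom described by n_desc bispectrum components.
      n_config, n_atoms, n_desc = 200, 32, 14
      B = rng.normal(size=(n_config, n_atoms, n_desc))      # per-atom descriptors (would come from LAMMPS)
      beta_true = rng.normal(size=n_desc)                   # "true" coefficients for this synthetic demo
      E_qm = B.sum(axis=1) @ beta_true + 0.01 * rng.normal(size=n_config)   # configuration energies

      # SNAP-style model: E_config = sum_over_atoms B_i . beta, so the design matrix is the per-config descriptor sum.
      A = B.sum(axis=1)                                     # shape (n_config, n_desc)
      w = np.ones(n_config)                                 # per-configuration weights (uniform here)
      sqrt_w = np.sqrt(w)[:, None]
      beta_fit, *_ = np.linalg.lstsq(sqrt_w * A, np.sqrt(w) * E_qm, rcond=None)

      print("max coefficient error:", np.max(np.abs(beta_fit - beta_true)))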

  5. A method for accurate determination of terminal sequences of viral genomic RNA.

    PubMed

    Weng, Z; Xiong, Z

    1995-09-01

    A combination of ligation-anchored PCR and anchored cDNA cloning techniques was used to clone the termini of the saguaro cactus virus (SCV) RNA genome. The terminal sequences of the viral genome were subsequently determined from the clones. The 5' terminus was cloned by ligation-anchored PCR, whereas the 3' terminus was obtained by a technique we term anchored cDNA cloning. In anchored cDNA cloning, an anchor oligonucleotide was prepared by phosphorylation at the 5' end, followed by addition of a dideoxynucleotide at the 3' end to block the free hydroxyl group. The 5' end of the anchor was subsequently ligated to the 3' end of SCV RNA. The anchor-ligated, chimeric viral RNA was then reverse-transcribed into cDNA using a primer complementary to the anchor. The cDNA containing the complete 3'-terminal sequence was converted into ds-cDNA, cloned, and sequenced. Two restriction sites, one within the viral sequence and one within the primer sequence, were used to facilitate cloning. The combination of these techniques proved to be an easy and accurate way to determine the terminal sequences of the SCV RNA genome and should be applicable to any other RNA molecules with unknown terminal sequences. PMID:9132274

  6. Efficient numerical method for computation of thermohydrodynamics of laminar lubricating films

    NASA Technical Reports Server (NTRS)

    Elrod, Harold G.

    1989-01-01

    The purpose of this paper is to describe an accurate, yet economical, method for computing temperature effects in laminar lubricating films in two dimensions. The procedure presented here is a sequel to one presented in Leeds in 1986 that was carried out for the one-dimensional case. Because of the marked dependence of lubricant viscosity on temperature, the effect of viscosity variation both across and along a lubricating film can dwarf other deviations from ideal constant-property lubrication. In practice, a thermohydrodynamics program will involve simultaneous solution of the film lubrication problem, together with heat conduction in a solid, complex structure. The extent of computation required makes economy in numerical processing of utmost importance. In pursuit of such economy, we here use techniques similar to those for Gaussian quadrature. We show that, for many purposes, the use of just two properly positioned temperatures (Lobatto points) characterizes well the transverse temperature distribution.

  7. Archimedes Revisited: A Faster, Better, Cheaper Method of Accurately Measuring the Volume of Small Objects

    ERIC Educational Resources Information Center

    Hughes, Stephen W.

    2005-01-01

    A little-known method of measuring the volume of small objects based on Archimedes' principle is described, which involves suspending an object in a water-filled container placed on electronic scales. The suspension technique is a variation on the hydrostatic weighing technique used for measuring volume. The suspension method was compared with two…

  8. NUMERICAL METHODS FOR THE SIMULATION OF HIGH INTENSITY HADRON SYNCHROTRONS.

    SciTech Connect

    LUCCIO, A.; D'IMPERIO, N.; MALITSKY, N.

    2005-09-12

    Numerical algorithms for PIC simulation of beam dynamics in a high intensity synchrotron on a parallel computer are presented. We introduce numerical solvers of the Laplace-Poisson equation in the presence of walls, and algorithms to compute tunes and twiss functions in the presence of space charge forces. The working code for the simulation here presented is SIMBAD, that can be run as stand alone or as part of the UAL (Unified Accelerator Libraries) package.

  9. Improved method for retrieving the aerosol optical properties without the numerical derivative for Raman-Mie lidar

    NASA Astrophysics Data System (ADS)

    Gong, Wei; Wang, Wei; Mao, Feiyue; Zhang, Jinye

    2015-08-01

    Raman-Mie light detection and ranging (lidar) is a very useful tool for research on atmospheric aerosol optical properties with high spatial-temporal resolution. However, many uncertainties still exist in data retrieval because traditional retrieval methods need to calculate the numerical derivative for the aerosol extinction coefficient (AEC), which may cause large errors, particularly with low signal-to-noise ratios. Thus, we present an improved method for retrieving aerosol optical properties. We re-formulate the N2-Raman lidar equation to obtain an unknown term which contains the AEC at the Mie wavelength. We replace the unknown term of the equation in the traditional method for retrieving the aerosol backscatter coefficient (ABC). Then, the AEC can be retrieved from the accurate ABC and the Mie lidar signal without calculating the numerical derivative. Tests on the simulated and measured signals show that the results of our method and those of the traditional method have similar tendencies. However, our method is more accurate and robust, and the significant errors of AEC caused by the numerical derivative can be reduced.

  10. Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.

    PubMed

    Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions. PMID:25874262
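
    As an illustration only, the sketch below builds an idealized single-diode panel curve from datasheet-style values (neglecting series and shunt resistance and the irradiance/temperature corrections included in the paper's methodology) and runs a perturb-and-observe MPPT loop on it; all parameter values are placeholders.

      import numpy as np

      # Idealized single-diode model from datasheet-style values (placeholders), no Rs/Rsh.
      Isc, Voc, Ns, n, Vt = 8.5, 37.0, 60, 1.3, 0.02585    # A, V, cells, ideality, thermal voltage (V)
      I0 = Isc / np.expm1(Voc / (n * Ns * Vt))             # chosen so that I(Voc) = 0

      def panel_current(v):
          return Isc - I0 * np.expm1(v / (n * Ns * Vt))

      def perturb_and_observe(v0=0.7 * Voc, dv=0.2, steps=200):
          v, p_prev, direction = v0, 0.0, +1
          for _ in range(steps):
              p = v * panel_current(v)
              if p < p_prev:              # power dropped: reverse the perturbation direction
                  direction = -direction
              p_prev = p
              v = np.clip(v + direction * dv, 0.0, Voc)
          return v, p_prev

      v_mpp, p_mpp = perturb_and_observe()
      print("MPP estimate: %.1f V, %.1f W" % (v_mpp, p_mpp))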

  11. Accurate Simulation of MPPT Methods Performance When Applied to Commercial Photovoltaic Panels

    PubMed Central

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions. PMID:25874262

  12. Numerical solution of first order initial value problem using 4-stage sixth order Gauss-Kronrod-Radau IIA method

    NASA Astrophysics Data System (ADS)

    Ying, Teh Yuan; Yaacob, Nazeeruddin

    2013-04-01

    In this paper, a new implicit Runge-Kutta method which is based on a 4-point Gauss-Kronrod-Radau II quadrature formula is developed. The resulting implicit method is a 4-stage sixth order Gauss-Kronrod-Radau IIA method, or GKRM(4,6)-IIA for short. GKRM(4,6)-IIA requires four function evaluations at each integration step and it gives accuracy of order six. In addition, GKRM(4,6)-IIA has stage order four and is L-stable. Numerical experiments compare the accuracy between GKRM(4,6)-IIA and the classical 3-stage sixth order Gauss-Legendre method in solving some test problems. Numerical results reveal that GKRM(4,6)-IIA is more accurate than the 3-stage sixth order Gauss-Legendre method because GKRM(4,6)-IIA has higher stage order.
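
    The GKRM(4,6)-IIA coefficients are not listed in the abstract, so the sketch below only shows the generic structure of an implicit Runge-Kutta step (stage equations solved by fixed-point iteration), using the well-known 2-stage Gauss-Legendre tableau as a stand-in; substituting the 4-stage Gauss-Kronrod-Radau IIA tableau would follow the same pattern.

      import numpy as np

      # Stand-in tableau: 2-stage Gauss-Legendre (order 4). Replace with the GKRM(4,6)-IIA coefficients.
      s3 = np.sqrt(3.0)
      A = np.array([[0.25, 0.25 - s3 / 6.0],
                    [0.25 + s3 / 6.0, 0.25]])
      b = np.array([0.5, 0.5])
      c = np.array([0.5 - s3 / 6.0, 0.5 + s3 / 6.0])

      def irk_step(f, t, y, h, tol=1e-12, max_iter=100):
          """One implicit Runge-Kutta step for y' = f(t, y); stage values solved by fixed-point iteration."""
          s = len(b)
          K = np.zeros((s, y.size))
          for _ in range(max_iter):
              K_new = np.array([f(t + c[i] * h, y + h * (A[i] @ K)) for i in range(s)])
              if np.max(np.abs(K_new - K)) < tol:
                  K = K_new
                  break
              K = K_new
          return y + h * (b @ K)

      # Test problem: y' = -y, y(0) = 1; exact solution exp(-t).
      f = lambda t, y: -y
      y, t, h = np.array([1.0]), 0.0, 0.1
      while t < 1.0 - 1e-12:
          y = irk_step(f, t, y, h)
          t += h
      print("error at t=1:", abs(y[0] - np.exp(-1.0)))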

  13. Numerical solution of first order initial value problem using 7-stage tenth order Gauss-Kronrod-Lobatto IIIA method

    NASA Astrophysics Data System (ADS)

    Ying, Teh Yuan; Yaacob, Nazeeruddin

    2013-04-01

    In this paper, a new implicit Runge-Kutta method which is based on a 7-point Gauss-Kronrod-Lobatto quadrature formula is developed. The resulting implicit method is a 7-stage tenth order Gauss-Kronrod-Lobatto IIIA method, or GKLM(7,10)-IIIA for short. GKLM(7,10)-IIIA requires seven function evaluations at each integration step and it gives accuracy of order ten. In addition, GKLM(7,10)-IIIA has stage order seven and is A-stable. Numerical experiments compare the accuracy between GKLM(7,10)-IIIA and the classical 5-stage tenth order Gauss-Legendre method in solving some test problems. Numerical results reveal that GKLM(7,10)-IIIA is more accurate than the 5-stage tenth order Gauss-Legendre method because GKLM(7,10)-IIIA has higher stage order.

  14. A computationally efficient and accurate numerical representation of thermodynamic properties of steam and water for computations of non-equilibrium condensing steam flow in steam turbines

    NASA Astrophysics Data System (ADS)

    Hrubý, Jan

    2012-04-01

    Mathematical modeling of the non-equilibrium condensing transonic steam flow in the complex 3D geometry of a steam turbine is a demanding problem, both in terms of the physical concepts and the required computational power. Available accurate formulations of steam properties, IAPWS-95 and IAPWS-IF97, require much computation time. For this reason, modelers often accept the unrealistic assumption of ideal-gas behavior. Here we present a computation scheme based on a piecewise, thermodynamically consistent representation of the IAPWS-95 formulation. Density and internal energy are chosen as independent variables to avoid variable transformations and iterations. In contrast to the previous Tabular Taylor Series Expansion Method, the pressure and temperature are continuous functions of the independent variables, which is a desirable property for the solution of the differential equations of mass, energy, and momentum conservation for both phases.
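
    The sketch below illustrates the general idea of pre-tabulating pressure and temperature on a (density, internal energy) grid and evaluating them without iteration; an ideal-gas relation is used as a placeholder for the IAPWS-95-based representation, and plain bilinear interpolation stands in for the thermodynamically consistent piecewise polynomials described in the abstract.

      import numpy as np

      # Placeholder "equation of state": ideal gas with constant cv (stands in for IAPWS-95).
      R, cv = 461.5, 1418.0                      # J/(kg K), steam-like placeholder values
      T_of = lambda rho, u: u / cv
      p_of = lambda rho, u: rho * R * (u / cv)

      # Pre-tabulate p and T on a (rho, u) grid once.
      rho_grid = np.linspace(0.05, 5.0, 64)
      u_grid = np.linspace(2.0e6, 3.5e6, 64)
      RHO, U = np.meshgrid(rho_grid, u_grid, indexing="ij")
      P_tab, T_tab = p_of(RHO, U), T_of(RHO, U)

      def bilinear(table, rho, u):
          i = np.clip(np.searchsorted(rho_grid, rho) - 1, 0, len(rho_grid) - 2)
          j = np.clip(np.searchsorted(u_grid, u) - 1, 0, len(u_grid) - 2)
          fr = (rho - rho_grid[i]) / (rho_grid[i + 1] - rho_grid[i])
          fu = (u - u_grid[j]) / (u_grid[j + 1] - u_grid[j])
          return ((1 - fr) * (1 - fu) * table[i, j] + fr * (1 - fu) * table[i + 1, j]
                  + (1 - fr) * fu * table[i, j + 1] + fr * fu * table[i + 1, j + 1])

      # A flow solver would call the table lookup instead of the (expensive) full formulation.
      print(bilinear(P_tab, 0.6, 2.6e6), p_of(0.6, 2.6e6))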

  15. Improved light microscopy counting method for accurately counting Plasmodium parasitemia and reticulocytemia.

    PubMed

    Lim, Caeul; Pereira, Ligia; Shardul, Pritish; Mascarenhas, Anjali; Maki, Jennifer; Rixon, Jordan; Shaw-Saliba, Kathryn; White, John; Silveira, Maria; Gomes, Edwin; Chery, Laura; Rathod, Pradipsinh K; Duraisingh, Manoj T

    2016-08-01

    Even with the advances in molecular or automated methods for detection of red blood cells of interest (such as reticulocytes or parasitized cells), light microscopy continues to be the gold standard especially in laboratories with limited resources. The conventional method for determination of parasitemia and reticulocytemia uses a Miller reticle, a grid with squares of different sizes. However, this method is prone to errors if not used correctly and counts become inaccurate and highly time-consuming at low frequencies of target cells. In this report, we outline the correct guidelines to follow when using a reticle for counting, and present a new counting protocol that is a modified version of the conventional method for increased accuracy in the counting of low parasitemias and reticulocytemias. Am. J. Hematol. 91:852-855, 2016. © 2016 Wiley Periodicals, Inc. PMID:27074559

  16. Three-Signal Method for Accurate Measurements of Depolarization Ratio with Lidar

    NASA Technical Reports Server (NTRS)

    Reichardt, Jens; Baumgart, Rudolf; McGee, Thomsa J.

    2003-01-01

    A method is presented that permits the determination of atmospheric depolarization-ratio profiles from three elastic-backscatter lidar signals with different sensitivity to the state of polarization of the backscattered light. The three-signal method is insensitive to experimental errors and does not require calibration of the measurement, which could cause large systematic uncertainties of the results, as is the case in the lidar technique conventionally used for the observation of depolarization ratios.

  17. Accurate, finite-volume methods for 3D MHD on unstructured Lagrangian meshes

    SciTech Connect

    Barnes, D.C.; Rousculp, C.L.

    1998-10-01

    Previous 2D methods for magnetohydrodynamics (MHD) have contributed both to development of core code capability and to physics applications relevant to AGEX pulsed-power experiments. This strategy is being extended to 3D by development of a modular extension of an ASCI code. Extension to 3D not only increases complexity by problem size, but also introduces new physics, such as magnetic helicity transport. The authors have developed a method which incorporates all known conservation properties into the difference scheme on a Lagrangian unstructured mesh. Because the method does not depend on the mesh structure, mesh refinement is possible during a calculation to prevent the well known problem of mesh tangling. Arbitrary polyhedral cells are decomposed into tetrahedrons. The action of the magnetic vector potential, A · δl, is centered on the edges of this extended mesh. For ideal flow, this maintains ∇ · B = 0 to round-off error. Vertex forces are derived by the variation of magnetic energy with respect to vertex positions, F = −∂W_B/∂r. This assures symmetry as well as magnetic flux, momentum, and energy conservation. The method is local so that parallelization by domain decomposition is natural for large meshes. In addition, a simple, ideal-gas, finite pressure term has been included. The resistive diffusion part is calculated using the support operator method, to obtain an energy conservative, symmetric method on an arbitrary mesh. Implicit time difference equations are solved by preconditioned, conjugate gradient methods. Results of convergence tests are presented. Initial results of an annular Z-pinch implosion problem illustrate the application of these methods to multi-material problems.

  18. A Fast Numerical Method for a Nonlinear Black-Scholes Equation

    NASA Astrophysics Data System (ADS)

    Koleva, Miglena N.; Vulkov, Lubin G.

    2009-11-01

    In this paper we present an effective numerical method for the Black-Scholes equation with transaction costs for the limiting price u(s, t;a). The technique combines the Rothe method with a two-grid (coarse-fine) algorithm for computation of numerical solutions to initial boundary-value problems for this equation. Numerical experiments comparing the accuracy and the computational cost of the method with other known numerical schemes are discussed.
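
    For orientation only, the sketch below shows the Rothe idea of discretizing in time first and solving a linear system at each time level, applied to the standard linear Black-Scholes operator with a European call payoff; the transaction-cost nonlinearity and the coarse-fine two-grid acceleration of the paper are not included, and all parameters are placeholders.

      import numpy as np

      # Implicit (backward Euler in time-to-maturity) stepping for the linear Black-Scholes equation
      # V_tau = 0.5*sigma^2*s^2*V_ss + r*s*V_s - r*V, with a European call payoff at tau = 0.
      sigma, r, K, T = 0.3, 0.05, 100.0, 1.0
      Ns_, Nt = 200, 100
      s = np.linspace(0.0, 4.0 * K, Ns_ + 1)
      ds, dt = s[1] - s[0], T / Nt
      V = np.maximum(s - K, 0.0)                      # payoff at tau = 0

      # Assemble the tridiagonal spatial operator once (interior nodes only, central differences).
      i = np.arange(1, Ns_)
      a = 0.5 * sigma**2 * s[i]**2 / ds**2 - 0.5 * r * s[i] / ds      # sub-diagonal
      bmid = -sigma**2 * s[i]**2 / ds**2 - r                          # diagonal
      csup = 0.5 * sigma**2 * s[i]**2 / ds**2 + 0.5 * r * s[i] / ds   # super-diagonal
      M = np.eye(Ns_ - 1) - dt * (np.diag(bmid) + np.diag(a[1:], -1) + np.diag(csup[:-1], 1))

      for m in range(Nt):
          rhs = V[1:-1].copy()
          # Dirichlet boundaries: V(0) = 0, V(s_max) = s_max - K*exp(-r*tau)
          tau = (m + 1) * dt
          upper = s[-1] - K * np.exp(-r * tau)
          rhs[-1] += dt * csup[-1] * upper
          V[1:-1] = np.linalg.solve(M, rhs)
          V[0], V[-1] = 0.0, upper

      print("value at s=K:", np.interp(K, s, V))      # close to the analytic value of about 14.2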

  19. A mass conserving level set method for detailed numerical simulation of liquid atomization

    SciTech Connect

    Luo, Kun; Shao, Changxiao; Yang, Yue; Fan, Jianren

    2015-10-01

    An improved mass conserving level set method for detailed numerical simulations of liquid atomization is developed to address the issue of mass loss in the existing level set method. This method introduces a mass remedy procedure based on the local curvature at the interface, and in principle, can ensure the absolute mass conservation of the liquid phase in the computational domain. Three benchmark cases, including Zalesak's disk, a drop deforming in a vortex field, and the binary drop head-on collision, are simulated to validate the present method, and excellent agreement with exact solutions or experimental results is achieved. It is shown that the present method is able to capture the complex interface with second-order accuracy and negligible additional computational cost. The present method is then applied to study more complex flows, such as a drop impacting on a liquid film and the swirling liquid sheet atomization, which again demonstrates the advantages of mass conservation and the capability to represent the interface accurately.

  20. Infants and young children modeling method for numerical dosimetry studies: application to plane wave exposure

    NASA Astrophysics Data System (ADS)

    Dahdouh, S.; Varsier, N.; Nunez Ochoa, M. A.; Wiart, J.; Peyman, A.; Bloch, I.

    2016-02-01

    Numerical dosimetry studies require the development of accurate numerical 3D models of the human body. This paper proposes a novel method for building 3D heterogeneous young children models combining results obtained from a semi-automatic multi-organ segmentation algorithm and an anatomy deformation method. The data consist of 3D magnetic resonance images, which are first segmented to obtain a set of initial tissues. A deformation procedure guided by the segmentation results is then developed in order to obtain five young children models ranging from the age of 5 to 37 months. By constraining the deformation of an older child model toward a younger one using segmentation results, we assure the anatomical realism of the models. Using the proposed framework, five models, containing thirteen tissues, are built. Three of these models are used in a prospective dosimetry study to analyze young child exposure to radiofrequency electromagnetic fields. The results tend to show the existence of a relationship between age and whole-body exposure. The results also highlight the necessity to specifically study and develop measurements of the dielectric properties of child tissues.

  1. Infants and young children modeling method for numerical dosimetry studies: application to plane wave exposure.

    PubMed

    Dahdouh, S; Varsier, N; Nunez Ochoa, M A; Wiart, J; Peyman, A; Bloch, I

    2016-02-21

    Numerical dosimetry studies require the development of accurate numerical 3D models of the human body. This paper proposes a novel method for building 3D heterogeneous young child models by combining results obtained from a semi-automatic multi-organ segmentation algorithm with an anatomy deformation method. The data consist of 3D magnetic resonance images, which are first segmented to obtain a set of initial tissues. A deformation procedure guided by the segmentation results is then developed in order to obtain five young child models ranging in age from 5 to 37 months. By constraining the deformation of an older child model toward a younger one using segmentation results, we ensure the anatomical realism of the models. Using the proposed framework, five models, containing thirteen tissues, are built. Three of these models are used in a prospective dosimetry study to analyze young child exposure to radiofrequency electromagnetic fields. The results tend to indicate a relationship between age and whole-body exposure. They also highlight the need to specifically study and measure the dielectric properties of child tissues. PMID:26815765

  2. Numerical Analysis on the Vortex Pattern and Flux Particle Dispersion in KR Method Using MPS Method

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Xu, Y.; Anzai, K.

    2015-06-01

    The mechanically stirred vessel is widely used in many fields, such as chemical reactors, bioreactors, and metallurgy. The type of vortex mode formed during impeller stirring has a great effect on stirring efficiency, chemical reaction rate, and air entrapment. Many efforts have been made to numerically simulate the fluid flow in stirred vessels with classical Eulerian methods. However, it is difficult to directly investigate the vortex mode and flux particle dispersion with such methods. Therefore, the moving particle semi-implicit (MPS) method, which is a Lagrangian method, is applied in this work to simulate the fluid flow in the KR method. The top and bottom heights of the vortex surface in the steady state at several rotation speeds were taken as key parameters for comparing the numerical results with published results. Flux particle dispersion behaviour over a rotation speed range from 80 to 480 rpm was also compared with a past study. The results show that the numerical calculation is highly consistent with the experimental results. It is confirmed that the calculation using the MPS method well reflects the vortex mode and flux particle dispersion in a mechanically stirred vessel.
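
    For orientation, the sketch below evaluates the standard MPS kernel and particle number density from the original MPS literature, which underlie simulations of this kind; it is not the study's own stirring-vessel code, and the effective radius r_e and the lattice spacing are illustrative assumptions.

    ```python
    import numpy as np

    def mps_weight(r, r_e):
        """Standard MPS kernel: w(r) = r_e/r - 1 for 0 < r < r_e, else 0."""
        w = np.zeros_like(r)
        mask = (r > 0) & (r < r_e)
        w[mask] = r_e / r[mask] - 1.0
        return w

    def particle_number_density(positions, r_e):
        """n_i = sum_{j != i} w(|x_j - x_i|), a measure of local particle density."""
        diff = positions[:, None, :] - positions[None, :, :]
        dist = np.linalg.norm(diff, axis=-1)
        return mps_weight(dist, r_e).sum(axis=1)   # diagonal (r = 0) contributes zero

    # Toy example: particles on a regular 2D lattice with spacing l0
    l0 = 0.01
    xs, ys = np.meshgrid(np.arange(10) * l0, np.arange(10) * l0)
    pos = np.column_stack([xs.ravel(), ys.ravel()])
    n = particle_number_density(pos, r_e=2.1 * l0)
    print("interior particle number density ~", n.max())
    ```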

  3. SAMSAN- MODERN NUMERICAL METHODS FOR CLASSICAL SAMPLED SYSTEM ANALYSIS

    NASA Technical Reports Server (NTRS)

    Frisch, H. P.

    1994-01-01

    SAMSAN was developed to aid the control system analyst by providing a self consistent set of computer algorithms that support large order control system design and evaluation studies, with an emphasis placed on sampled system analysis. Control system analysts have access to a vast array of published algorithms to solve an equally large spectrum of controls related computational problems. The analyst usually spends considerable time and effort bringing these published algorithms to an integrated operational status and often finds them less general than desired. SAMSAN reduces the burden on the analyst by providing a set of algorithms that have been well tested and documented, and that can be readily integrated for solving control system problems. Algorithm selection for SAMSAN has been biased toward numerical accuracy for large order systems with computational speed and portability being considered important but not paramount. In addition to containing relevant subroutines from EISPAK for eigen-analysis and from LINPAK for the solution of linear systems and related problems, SAMSAN contains the following not so generally available capabilities: 1) Reduction of a real non-symmetric matrix to block diagonal form via a real similarity transformation matrix which is well conditioned with respect to inversion, 2) Solution of the generalized eigenvalue problem with balancing and grading, 3) Computation of all zeros of the determinant of a matrix of polynomials, 4) Matrix exponentiation and the evaluation of integrals involving the matrix exponential, with option to first block diagonalize, 5) Root locus and frequency response for single variable transfer functions in the S, Z, and W domains, 6) Several methods of computing zeros for linear systems, and 7) The ability to generate documentation "on demand". All matrix operations in the SAMSAN algorithms assume non-symmetric matrices with real double precision elements. There is no fixed size limit on any matrix in any SAMSAN routine.
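
    Two of the capabilities listed above — the generalized eigenvalue problem and matrix exponentiation — can be reproduced with modern SciPy calls. The sketch below is only a present-day illustration of those operations, not SAMSAN code, and the example matrices are arbitrary.

    ```python
    import numpy as np
    from scipy.linalg import expm, eig

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])   # example state matrix (illustrative)
    B = np.eye(2)

    # Matrix exponentiation: Phi = exp(A*dt) is the discrete-time transition matrix
    dt = 0.1
    Phi = expm(A * dt)

    # Generalized eigenvalue problem A v = lambda B v, as in capability (2)
    vals, vecs = eig(A, B)

    print("eigenvalues:", vals)
    print("transition matrix:\n", Phi)
    ```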

  4. Application of combined rigid choledochoscope and accurate positioning method in the adjuvant treatment of bile duct stones

    PubMed Central

    Wang, Ping; Chen, Xiaowu; Sun, Beiwang; Liu, Yanmin

    2015-01-01

    This study explored the clinical effect of percutaneous transhepatic cholangioscopic lithotomy (PTCSL) combined with a rigid choledochoscope and accurate positioning in the treatment of bile duct stones. It retrospectively reviewed 162 patients with hepatolithiasis treated at the First Affiliated Hospital of Guangzhou Medical University between 2001 and 2013, who were assigned either to a rigid-choledochoscope (hard lens) group or to a traditional PTCSL group. Compared with traditional PTCSL, PTCSL with a rigid choledochoscope can shorten the interval time that limits the application of PTCSL. The operation time (45 vs 78, P=0.003), the number of operations (1.62 vs 1.97, P=0.031), and blood loss (37.8 vs 55.1, P=0.022) were better in the hard lens group, while residual stones and complications showed no significant differences. The rigid choledochoscope is a safe, minimally invasive and effective tool in the treatment of bile duct stones. The accurate positioning method can effectively shorten the operation time. PMID:26629183

  5. An accurate and efficient computation method of the hydration free energy of a large, complex molecule

    NASA Astrophysics Data System (ADS)

    Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori

    2015-05-01

    The hydration free energy (HFE) is a crucially important physical quantity for discussing various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, a huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as the sum of ⟨u⟩/2 (⟨u⟩ is the ensemble average of the sum of the pair interaction energies between the solute and the water molecules) and a water reorganization term mainly reflecting the excluded-volume effect. Since ⟨u⟩ can readily be computed through an MD simulation of the system composed of the solute and water, an efficient computation of the latter term leads to a reduction of the computational load. We demonstrate that the water reorganization term can be calculated quantitatively using the morphometric approach (MA), which expresses the term as a linear combination of the four geometric measures of the solute with coefficients determined by the energy representation (ER) method. Since the MA enables us to finish the computation of the solvent reorganization term in less than 0.1 s once the coefficients are determined, its use provides an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method, with a substantial reduction of the computational load.
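
    The morphometric form referred to above expresses a thermodynamic quantity as a linear combination of four geometric measures of the solute. The sketch below shows only that functional form; the coefficients, which the paper determines with the energy-representation method, are placeholders here, and the spherical solute is an illustrative example.

    ```python
    import numpy as np

    def morphometric_term(V, A, C, X, coeffs):
        """
        Morphometric form: a linear combination of the four geometric measures of
        the solute -- excluded volume V, surface area A, integrated mean curvature C,
        and integrated Gaussian curvature X. `coeffs` = (c1, c2, c3, c4) must be
        fitted beforehand (e.g. against reference calculations for simple solutes);
        the values used below are placeholders, not fitted coefficients.
        """
        c1, c2, c3, c4 = coeffs
        return c1 * V + c2 * A + c3 * C + c4 * X

    # Example: a spherical solute of radius R, for which all four measures are analytic
    R = 0.5  # nm, illustrative
    V, A = 4.0 / 3.0 * np.pi * R**3, 4.0 * np.pi * R**2
    C, X = 4.0 * np.pi * R, 4.0 * np.pi
    print(morphometric_term(V, A, C, X, coeffs=(1.0, 0.1, 0.01, 0.001)))
    ```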

  6. Accurate, finite-volume methods for three dimensional magneto-hydrodynamics on Lagrangian meshes

    SciTech Connect

    Rousculp, C.L.; Barnes, D.C.

    1999-07-01

    Recently developed algorithms for ideal and resistive, 3D MHD calculations on Lagrangian hexahedral meshes have been generalized to work with a Lagrangian mesh composed of arbitrary polyhedral cells. This allows for mesh refinement during a calculation to prevent the well known problem of tangling in a Lagrangian mesh. Arbitrary polyhedral cells are decomposed into tetrahedrons. The action of the magnetic vector potential, A·δl, is centered on the edges of this extended mesh. Thus, ∇·B = 0 is maintained to round-off error. For ideal flow (E = v x B), vertex forces are derived by the variation of magnetic energy with respect to vertex positions, F = −∂W_B/∂r. This assures symmetry as well as magnetic flux, momentum, and energy conservation. The method is local so that parallelization by domain decomposition is natural for large meshes. In addition, a simple, ideal-gas, finite pressure term has been included. The resistive diffusion (E = −ηJ) is treated with a support operator method, to obtain an energy conservative, symmetric method on an arbitrary polyhedral mesh. The equation of motion is time-step-split. First, the ideal term is treated explicitly. Next, the diffusion is solved implicitly with a preconditioned conjugate gradient method. Results of convergence tests are presented. Initial results of an annular Z-pinch implosion problem illustrate the application of these methods to multi-material problems.
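
    Stated schematically, and under the simplifying assumption that the discrete magnetic energy is assembled from per-cell contributions, the variational vertex force used in both of the MHD records above can be written as

    ```latex
    % Schematic form only: W_B is the discrete magnetic energy assembled over cells,
    % r_i the position of vertex i, B_c and V_c the cell magnetic field and volume.
    W_B \;=\; \sum_{\mathrm{cells}\ c} \frac{|\mathbf{B}_c|^2}{2\mu_0}\, V_c ,
    \qquad
    \mathbf{F}_i \;=\; -\,\frac{\partial W_B}{\partial \mathbf{r}_i} .
    ```

    Because the vertex forces are exact derivatives of the discrete energy, the work done by the mesh motion balances the change in magnetic energy, which is the sense in which such a scheme conserves energy for ideal flow.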

  7. More accurate matrix-matched quantification using standard superposition method for herbal medicines.

    PubMed

    Liu, Ying; Shi, Xiao-Wei; Liu, E-Hu; Sheng, Long-Sheng; Qi, Lian-Wen; Li, Ping

    2012-09-01

    Various analytical technologies have been developed for the quantitative determination of marker compounds in herbal medicines (HMs). One important issue is matrix effects, which must be addressed in method validation for different detection modes. Unlike biological fluids, blank matrix samples for calibration are usually unavailable for HMs. In this work, practical approaches for minimizing matrix effects in HM analysis were proposed. The matrix effects in the quantitative analysis of five saponins from Panax notoginseng were assessed using high-performance liquid chromatography (HPLC). Matrix components were found to interfere with the ionization of target analytes when mass spectrometry (MS) detection was employed. To compensate for the matrix signal suppression/enhancement, two matrix-matched methods, a standard addition method with a target-knockout extract and a standard superposition method with an HM extract, were developed and tested in this work. The results showed that the standard superposition method is simple and practical for overcoming matrix effects in the quantitative analysis of HMs. Moreover, the interfering components were observed to disturb the light scattering of target analytes when evaporative light scattering detection (ELSD) was utilized for the quantitative analysis of HMs, but not when ultraviolet (UV) detection was employed. Thus, the issue of interference effects should be addressed and minimized in quantitative HPLC-ELSD and HPLC-MS methodologies for the quality control of HMs. PMID:22835696

  8. Accurate VoF based curvature evaluation method for low-resolution interface geometries

    NASA Astrophysics Data System (ADS)

    Owkes, Mark; Herrmann, Marcus; Desjardins, Olivier

    2014-11-01

    The height function method is a common approach to compute the curvature of a gas-liquid interface in the context of the volume-of-fluid method. While the approach has been shown to produce second-order curvature estimates for many interfaces, the height function method deteriorates when the curvature becomes large and the interface becomes under-resolved by the computational mesh. In this work, we propose a modification to the height function method that improves the curvature calculation for under-resolved structures. The proposed scheme computes heights within columns that are aligned not with the underlying computational mesh but with the interface normal vector, which is found to be more robust for under-resolved interfaces. A computational geometry toolbox is used to compute the heights in the complex geometry that is formed at the intersection of the computational mesh and the columns. The resulting scheme has significantly reduced curvature errors for under-resolved interfaces and recovers the second-order convergence of the standard height function method for well-resolved interfaces.
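
    As a point of reference for the standard (mesh-aligned) height-function evaluation that the record above improves upon, 2D curvature follows from finite differences of three adjacent column heights. The sketch below uses exact heights of a circle as a sanity check rather than heights summed from volume fractions, and the radius and grid spacing are illustrative.

    ```python
    import numpy as np

    def height_function_curvature(h, dx):
        """
        2D height-function curvature from three adjacent column heights
        h = (h_left, h_center, h_right) on a uniform mesh of spacing dx:
            kappa = h'' / (1 + h'^2)^(3/2)
        """
        h_l, h_c, h_r = h
        h_x = (h_r - h_l) / (2.0 * dx)
        h_xx = (h_r - 2.0 * h_c + h_l) / dx**2
        return h_xx / (1.0 + h_x**2) ** 1.5

    # Sanity check against a circle of radius R: heights of y = sqrt(R^2 - x^2)
    R, dx = 2.0, 0.05
    x = np.array([-dx, 0.0, dx])
    h = np.sqrt(R**2 - x**2)
    print(height_function_curvature(h, dx), "vs exact", -1.0 / R)
    ```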

  9. More accurate predictions with transonic Navier-Stokes methods through improved turbulence modeling

    NASA Technical Reports Server (NTRS)

    Johnson, Dennis A.

    1989-01-01

    Significant improvements in predictive accuracy for off-design conditions are achievable through better turbulence modeling, without necessarily adding any significant complication to the numerics. One well established fact about turbulence is that it is slow to respond to changes in the mean strain field. With the 'equilibrium' algebraic turbulence models, no attempt is made to model this characteristic, and as a consequence these turbulence models exaggerate the turbulent boundary layer's ability to produce turbulent Reynolds shear stresses in regions of adverse pressure gradient. As a consequence, too little momentum loss within the boundary layer is predicted in the region of the shock wave and along the aft part of the airfoil where the surface pressure undergoes further increases. Recently, a 'nonequilibrium' algebraic turbulence model was formulated which attempts to capture this important characteristic of turbulence. This 'nonequilibrium' algebraic model employs an ordinary differential equation to model the slow response of the turbulence to changes in local flow conditions. In its original form, there was some question as to whether this 'nonequilibrium' model performed as well as the 'equilibrium' models for weak interaction cases. However, this turbulence model has since been further improved, and it now appears to perform at least as well as the 'equilibrium' models for weak interaction cases while representing a very significant improvement for strong interaction cases. The performance of this turbulence model relative to popular 'equilibrium' models is illustrated for three airfoil test cases of the 1987 AIAA Viscous Transonic Airfoil Workshop, Reno, Nevada. A form of this 'nonequilibrium' turbulence model is currently being applied to wing flows, for which similar improvements in predictive accuracy are being realized.
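
    Schematically, the ordinary differential equation mentioned above lets a turbulence quantity relax toward its local equilibrium value instead of jumping to it. A generic lag equation of this kind (not the exact formulation used in the model described above) is

    ```latex
    % g(s): turbulence quantity followed along the flow direction s
    % g_eq(s): its local 'equilibrium' value;  L: an assumed relaxation length scale
    \frac{\mathrm{d}g}{\mathrm{d}s} \;=\; \frac{g_{\mathrm{eq}}(s) - g(s)}{L}
    ```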

  10. Simple, Precise and Accurate HPLC Method of Analysis for Nevirapine Suspension from Human Plasma

    PubMed Central

    Halde, S.; Mungantiwar, A.; Chintamaneni, M.

    2011-01-01

    A selective and sensitive high-performance liquid chromatography method with UV detection (HPLC-UV) was developed and validated for human plasma. Nevirapine and the internal standard (IS) zidovudine were extracted from human plasma by a liquid-liquid extraction process using methyl tert-butyl ether. The samples were analysed on an Inertsil ODS 3 column (250×4.6 mm, 5 μm) using a mobile phase consisting of 50 mM sodium acetate buffer solution (pH 4.00±0.05):acetonitrile (73:27 v/v). The method was validated over a concentration range of 50.00 ng/ml to 3998.96 ng/ml. The method was successfully applied to a bioequivalence study of a 10 ml single dose of nevirapine oral suspension 50 mg/5 ml in healthy male volunteers. PMID:22707826

  11. Numerical Solution of Poroelastic Wave Equation Using Nodal Discontinuous Galerkin Finite Element Method

    NASA Astrophysics Data System (ADS)

    Shukla, K.; Wang, Y.; Jaiswal, P.

    2014-12-01

    In a porous medium the seismic energy propagates not only through the matrix but also through the pore fluids. The differential movement between the sediment grains of the matrix and the interstitial fluid generates a diffusive wave which is commonly referred to as the slow P-wave. The combined system of equations, which includes both elastic and diffusive phases, is known as poroelasticity. Analyzing seismic data through poroelastic modeling results in accurate interpretation of amplitude and separation of wave modes, leading to more accurate estimation of the geomechanical properties of rocks. Despite its obvious multi-scale applicability, from sedimentary reservoir characterization to the deep-earth fractured crust, poroelasticity remains under-developed, primarily due to the complex nature of its constituent equations. We present a detailed formulation of the poroelastic wave equations for isotropic media by combining Biot theory with Newtonian mechanics. The system of poroelastic wave equations consists of eight time-dependent hyperbolic PDEs in 2D, and thirteen in 3D. Eigendecomposition of the Jacobians of these systems confirms the presence of an additional slow P-wave phase with a velocity lower than that of the shear wave, which poses stability issues for the numerical scheme. To circumvent the issue, we derived a numerical scheme using a nodal discontinuous Galerkin approach, adopting triangular meshes in 2D and extending to tetrahedral meshes for 3D problems. In our nodal DG approach the basis functions over a triangular element are interpolated at Legendre-Gauss-Lobatto (LGL) points, leading to more accurate local solutions than in the case of simple DG. We have tested the numerical scheme for poroelastic media in the 1D and 2D cases, and the solutions obtained offer higher accuracy than those of other methods such as finite difference, finite volume and pseudo-spectral methods. The nodal nature of our approach makes it easy to convert the application into a multi-threaded algorithm.
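
    The Legendre-Gauss-Lobatto interpolation points mentioned above are the endpoints of the reference interval together with the roots of the derivative of the Legendre polynomial. The sketch below computes them in 1D with NumPy; the polynomial order is an arbitrary example, and the 2D/3D nodal sets used on triangles and tetrahedra are not reproduced here.

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    def lgl_nodes(N):
        """
        Legendre-Gauss-Lobatto nodes on [-1, 1] for polynomial order N:
        the endpoints plus the roots of P_N'(x), where P_N is the Legendre
        polynomial of degree N. These are the 1D interpolation points used
        in nodal DG bases of the kind described above.
        """
        PN = legendre.Legendre.basis(N)      # Legendre polynomial P_N
        interior = PN.deriv().roots()        # roots of P_N'
        return np.concatenate(([-1.0], np.sort(interior.real), [1.0]))

    print(lgl_nodes(4))   # 5 nodes for a degree-4 nodal basis
    ```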

  12. Highly effective and accurate weak point monitoring method for advanced design rule (1x nm) devices

    NASA Astrophysics Data System (ADS)

    Ahn, Jeongho; Seong, ShiJin; Yoon, Minjung; Park, Il-Suk; Kim, HyungSeop; Ihm, Dongchul; Chin, Soobok; Sivaraman, Gangadharan; Li, Mingwei; Babulnath, Raghav; Lee, Chang Ho; Kurada, Satya; Brown, Christine; Galani, Rajiv; Kim, JaeHyun

    2014-04-01

    Historically, when manufacturing semiconductor devices at 45 nm or larger design rules, IC manufacturing yield was mainly determined by global random variations, and therefore the chip manufacturers / manufacturing teams were mainly responsible for yield improvement. With the introduction of sub-45 nm semiconductor technologies, yield started to be dominated by systematic variations, primarily centered on resolution problems, copper/low-k interconnects and CMP. These local systematic variations, which have become decisively greater than global random variations, are design-dependent [1, 2], and therefore designers now share the responsibility of increasing yield with manufacturers / manufacturing teams. A widening manufacturing gap has led to a dramatic increase in design rules that are either too restrictive or do not guarantee a litho/etch hotspot-free design. The semiconductor industry is currently limited to 193 nm scanners, and no relief is expected from the equipment side to prevent or eliminate these systematic hotspots. Hence, many design houses have come up with innovative design products that check hotspots using model-based lithography checks to validate design manufacturability, which also account for the complex two-dimensional effects that stem from aggressive scaling of 193 nm lithography. Most of these hotspots (a.k.a. weak points) are especially seen on Back End of the Line (BEOL) process levels such as Mx ADI, Mx Etch and Mx CMP. Inspecting some of these BEOL levels can be extremely challenging, as there is a lot of wafer noise or nuisance signal that can hinder an inspector's ability to detect and monitor the defects or weak points of interest. In this work we have attempted to accurately inspect the weak points using a novel broadband plasma optical inspection approach that enhances the defect signal from patterns of interest (POI) and precisely suppresses surrounding wafer noise. This new approach is a paradigm shift in wafer inspection.

  13. A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates

    NASA Astrophysics Data System (ADS)

    Savanevych, V. E.; Briukhovetskyi, O. B.; Sokovikova, N. S.; Bezkrovny, M. M.; Vavilova, I. B.; Ivashchenko, Yu. M.; Elenin, L. V.; Khlamov, S. V.; Movsesian, Ia. S.; Dashkova, A. M.; Pogorelov, A. V.

    2015-08-01

    We describe a new iterative method to estimate asteroid coordinates, based on a subpixel Gaussian model of the discrete object image. The method operates on continuous parameters (the asteroid coordinates) in a discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the form of the coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from the real digital object image. The method, which is flexible in adapting to any form of object image, has a high measurement accuracy along with a low computational complexity, because a maximum-likelihood procedure is implemented to obtain the best fit, instead of a least-squares method with the Levenberg-Marquardt algorithm for minimization of the quadratic form. Since 2010, the method has been tested as the basis of our Collection Light Technology (COLITEC) software, which has been installed at several observatories across the world with the aim of the automatic discovery of asteroids and comets in sets of CCD frames. As a result, four comets (C/2010 X1 (Elenin), P/2011 NO1 (Elenin), C/2012 S1 (ISON) and P/2013 V3 (Nevski)) as well as more than 1500 small Solar system bodies (including five near-Earth objects (NEOs), 21 Trojan asteroids of Jupiter and one Centaur object) have been discovered. We discuss these results, which allowed us to compare the accuracy parameters of the new method and confirm its efficiency. In 2014, the COLITEC software was recommended to all members of the Gaia-FUN-SSO network for analysing observations as a tool to detect faint moving objects in frames.
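
    A minimal illustration of the underlying idea — fitting a subpixel Gaussian image model to pixel counts by maximum likelihood rather than least squares — is sketched below. The Gaussian-plus-background model, the Poisson noise assumption, the synthetic frame, and the starting values are all illustrative assumptions, not the COLITEC implementation.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def model(params, X, Y):
        """Gaussian point source on a flat background (illustrative image model)."""
        x0, y0, amp, sigma, bg = params
        return bg + amp * np.exp(-((X - x0)**2 + (Y - y0)**2) / (2.0 * sigma**2))

    def neg_log_likelihood(params, X, Y, counts):
        """Poisson negative log-likelihood (up to a constant in the counts)."""
        mu = np.clip(model(params, X, Y), 1e-9, None)
        return np.sum(mu - counts * np.log(mu))

    # Synthetic 15x15 frame with a star at a non-integer (subpixel) position
    rng = np.random.default_rng(1)
    X, Y = np.meshgrid(np.arange(15.0), np.arange(15.0))
    truth = (7.3, 6.8, 200.0, 1.5, 10.0)
    counts = rng.poisson(model(truth, X, Y))

    res = minimize(neg_log_likelihood, x0=(7.0, 7.0, 150.0, 2.0, 5.0),
                   args=(X, Y, counts), method="Nelder-Mead")
    print("estimated centre:", res.x[:2])
    ```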

  14. Which Method Is Most Precise; Which Is Most Accurate? An Undergraduate Experiment

    ERIC Educational Resources Information Center

    Jordan, A. D.

    2007-01-01

    A simple experiment, the determination of the density of a liquid by several methods, is presented. Since the concept of density is a familiar one, the experiment is suitable for the introductory laboratory period of a first- or second-year course in physical or analytical chemistry. The main objective of the experiment is to familiarize students…

  15. Accurate analytical method for the extraction of solar cell model parameters

    NASA Astrophysics Data System (ADS)

    Phang, J. C. H.; Chan, D. S. H.; Phillips, J. R.

    1984-05-01

    Single-diode solar cell model parameters are rapidly extracted from experimental data by means of the presently derived analytical expressions. The parameter values obtained have less than 5 percent error for most solar cells, as shown by extracting the model parameters of two cells of differing quality and comparing them with parameters extracted by means of the iterative method.
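
    For context, the sketch below simply evaluates the single-diode model whose five parameters (photocurrent, saturation current, ideality factor, series and shunt resistance) such analytical expressions are designed to extract. The parameter values and voltage range are illustrative assumptions, and the extraction formulas themselves are not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # Illustrative single-diode parameters: I_ph, I_0, n, R_s, R_sh
    I_ph, I_0, n, R_s, R_sh = 3.0, 1e-9, 1.3, 0.05, 200.0   # A, A, -, ohm, ohm
    V_t = 0.025852                                          # thermal voltage near 300 K

    def current(V):
        """Solve I = I_ph - I_0*(exp((V + I*R_s)/(n*V_t)) - 1) - (V + I*R_s)/R_sh for I."""
        f = lambda I: (I_ph - I_0 * (np.exp((V + I * R_s) / (n * V_t)) - 1.0)
                       - (V + I * R_s) / R_sh - I)
        return brentq(f, -1.0, I_ph + 1.0)                  # bracket chosen generously

    V = np.linspace(0.0, 0.75, 16)
    I = np.array([current(v) for v in V])
    print("I_sc =", current(0.0), "A;  V_oc ~", V[I > 0][-1], "V")
    ```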

  16. Accurate motion parameter estimation for colonoscopy tracking using a regression method

    NASA Astrophysics Data System (ADS)

    Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.

    2010-03-01

    Co-located optical and virtual colonoscopy images have the potential to provide important clinical information during routine colonoscopy procedures. In our earlier work, we presented an optical-flow-based algorithm to compute egomotion from live colonoscopy video, permitting navigation and visualization of the corresponding patient anatomy. In the original algorithm, motion parameters were estimated using the traditional least sum of squares (LS) procedure, which can be unstable in the presence of optical flow vectors with large errors. In the improved algorithm, we use the Least Median of Squares (LMS) method, a robust regression method, for motion parameter estimation. Using the LMS method, we iteratively analyze and converge toward the main distribution of the flow vectors, while disregarding outliers. We show through three experiments the improvement in tracking results obtained using the LMS method in comparison to the LS estimator. The first experiment demonstrates better spatial accuracy in positioning the virtual camera in the sigmoid colon. The second and third experiments demonstrate the robustness of this estimator, resulting in longer tracked sequences: from 300 to 1310 in the ascending colon, and from 410 to 1316 in the transverse colon.
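
    The sketch below illustrates the least-median-of-squares idea in its simplest setting, a straight-line fit with gross outliers; it is not the colonoscopy egomotion estimator itself, and the random-sampling search, data, and trial count are illustrative assumptions.

    ```python
    import numpy as np

    def lms_line_fit(x, y, n_trials=500, rng=None):
        """
        Least Median of Squares line fit: repeatedly fit a line to random 2-point
        samples and keep the candidate that minimizes the *median* of the squared
        residuals, so outliers have little influence on the result.
        """
        rng = rng or np.random.default_rng(0)
        best, best_med = None, np.inf
        for _ in range(n_trials):
            i, j = rng.choice(len(x), size=2, replace=False)
            if x[i] == x[j]:
                continue
            slope = (y[j] - y[i]) / (x[j] - x[i])
            intercept = y[i] - slope * x[i]
            med = np.median((y - (slope * x + intercept)) ** 2)
            if med < best_med:
                best, best_med = (slope, intercept), med
        return best

    # Data with roughly a third gross outliers: ordinary least squares is pulled off,
    # while the LMS estimate stays near the true line y = 2x + 1.
    rng = np.random.default_rng(2)
    x = np.linspace(0, 10, 50)
    y = 2 * x + 1 + rng.normal(0, 0.2, x.size)
    y[::3] += rng.normal(20, 5, y[::3].size)
    print(lms_line_fit(x, y, rng=rng))
    ```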

  17. Quantitative calcium resistivity based method for accurate and scalable water vapor transmission rate measurement.

    PubMed

    Reese, Matthew O; Dameron, Arrelaine A; Kempe, Michael D

    2011-08-01

    The development of flexible organic light emitting diode displays and flexible thin film photovoltaic devices is dependent on the use of flexible, low-cost, optically transparent and durable barriers to moisture and/or oxygen. It is estimated that this will require high moisture barriers with water vapor transmission rates (WVTR) between 10^-4 and 10^-6 g/m^2/day. Thus there is a need to develop a relatively fast, low-cost, and quantitative method to evaluate such low permeation rates. Here, we demonstrate a method where the resistance changes of patterned Ca films, upon reaction with moisture, enable one to calculate a WVTR between 10 and 10^-6 g/m^2/day or better. Samples are configured with variable aperture size such that the sensitivity and/or measurement time of the experiment can be controlled. The samples are connected to a data acquisition system by means of individual signal cables permitting samples to be tested under a variety of conditions in multiple environmental chambers. An edge card connector is used to connect samples to the measurement wires enabling easy switching of samples in and out of test. This measurement method can be conducted with as little as 1 h of labor time per sample. Furthermore, multiple samples can be measured in parallel, making this an inexpensive and high volume method for measuring high moisture barriers. PMID:21895269

  18. A Robust Method of Vehicle Stability Accurate Measurement Using GPS and INS

    NASA Astrophysics Data System (ADS)

    Miao, Zhibin; Zhang, Hongtian; Zhang, Jinzhu

    2015-12-01

    With the development of the vehicle industry, controlling stability has become more and more important. Techniques of evaluating vehicle stability are in high demand. Integration of Global Positioning System (GPS) and Inertial Navigation System (INS) is a very practical method to get high-precision measurement data. Usually, the Kalman filter is used to fuse the data from GPS and INS. In this paper, a robust method is used to measure vehicle sideslip angle and yaw rate, which are two important parameters for vehicle stability. First, a four-wheel vehicle dynamic model is introduced, based on sideslip angle and yaw rate. Second, a double level Kalman filter is established to fuse the data from Global Positioning System and Inertial Navigation System. Then, this method is simulated on a sample vehicle, using Carsim software to test the sideslip angle and yaw rate. Finally, a real experiment is made to verify the advantage of this approach. The experimental results showed the merits of this method of measurement and estimation, and the approach can meet the design requirements of the vehicle stability controller.
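
    The data-fusion step referred to above is a Kalman filter. The sketch below is a toy single-axis example in which noisy INS accelerations drive the prediction and intermittent GPS positions provide the update; the double-level filter structure, the four-wheel vehicle model, and all noise settings of the paper are not reproduced, and the values used here are illustrative assumptions.

    ```python
    import numpy as np

    # Toy single-axis GPS/INS fusion with a linear Kalman filter.
    dt = 0.01
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
    B = np.array([[0.5 * dt**2], [dt]])     # INS acceleration enters as control input
    H = np.array([[1.0, 0.0]])              # GPS measures position
    Q = 1e-3 * np.eye(2)                    # process noise (assumed)
    R = np.array([[0.5]])                   # GPS measurement noise (assumed)

    x, P = np.zeros((2, 1)), np.eye(2)
    rng = np.random.default_rng(0)
    true_pos, true_vel = 0.0, 0.0

    for k in range(1000):
        a = 0.2 * np.sin(0.01 * k)          # simulated true acceleration
        true_vel += a * dt
        true_pos += true_vel * dt
        # Predict with the (noisy) INS acceleration
        x = F @ x + B * (a + rng.normal(0, 0.05))
        P = F @ P @ F.T + Q
        # Update with a noisy GPS position every 100 steps
        if k % 100 == 0:
            z = np.array([[true_pos + rng.normal(0, 0.7)]])
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + K @ (z - H @ x)
            P = (np.eye(2) - K @ H) @ P

    print("estimated vs true position:", float(x[0, 0]), true_pos)
    ```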

  19. Collision-induced fragmentation accurate mass spectrometric analysis methods to rapidly characterize phytochemicals in plant extracts

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The rapid advances in analytical chromatography equipment have made the reliable and reproducible measurement of a wide range of plant chemical components possible. Full chemical characterization of a given plant material is possible with the new mass spectrometers currently available. New methods a...

  20. A FIB-nanotomography method for accurate 3D reconstruction of open nanoporous structures.

    PubMed

    Mangipudi, K R; Radisch, V; Holzer, L; Volkert, C A

    2016-04-01

    We present an automated focused ion beam nanotomography method for nanoporous microstructures with open porosity, and apply it to reconstruct nanoporous gold (np-Au) structures with ligament sizes on the order of a few tens of nanometers. This method uses serial sectioning of a well-defined wedge-shaped geometry to determine the thickness of individual slices from the changes in the sample width in successive cross-sectional images. The pore space of a selected region of the np-Au is infiltrated with ion-beam-deposited Pt composite before serial sectioning. The cross-sectional images are binarized and stacked according to the individual slice thicknesses, and then processed using standard reconstruction methods. For the image conditions and sample geometry used here, we are able to determine the thickness of individual slices with an accuracy much smaller than a pixel. The accuracy of the new method based on actual slice thickness is assessed by comparing it with (i) a reconstruction using the same cross-sectional images but assuming a constant slice thickness, and (ii) a reconstruction using traditional FIB-tomography method employing constant slice thickness. The morphology and topology of the structures are characterized using ligament and pore size distributions, interface shape distribution functions, interface normal distributions, and genus. The results suggest that the morphology and topology of the final reconstructions are significantly influenced when a constant slice thickness is assumed. The study reveals grain-to-grain variations in the morphology and topology of np-Au. PMID:26906523

  1. Numerical Analysis of Hydrodynamics for Bionic Oscillating Hydrofoil Based on Panel Method

    PubMed Central

    2016-01-01

    The kinematics model based on the Slender-Body theory is proposed from the bionic movement of real fish. The Panel method is applied innovatively to the hydrodynamic performance analysis, with the Gauss-Seidel method additionally used to solve the Navier-Stokes equations, so that the flexible deformation of the fish in swimming is evaluated accurately while satisfying the boundary conditions. A physical prototype mimicking the shape of a tuna was developed with rapid prototyping manufacturing technology. The hydrodynamic performance of a rigid oscillating hydrofoil is analyzed with the proposed method, and it shows good agreement with the cases analyzed by the commercial software Fluent and with experimental data from a robofish. Furthermore, the hydrodynamic performance of a coupled hydrofoil, which consists of a flexible fish body and a rigid caudal fin, is analyzed with the proposed method. It shows that the caudal fin has a great influence on trailing vortex shedding and that the phase angle is the key factor for hydrodynamic performance. It is verified that the shape of the trailing vortex is similar to the image of the motion curve at the trailing edge, under the assumption of a linear vortex plane and the condition of small downwash velocity. The numerical analysis of hydrodynamics for bionic movement based on the Panel method has certain value for revealing the fish swimming mechanism. PMID:27578959

  2. A comparison of methods to estimate seismic phase delays: numerical examples for coda wave interferometry

    NASA Astrophysics Data System (ADS)

    Mikesell, T. Dylan; Malcolm, Alison E.; Yang, Di; Haney, Matthew M.

    2015-07-01

    Time-shift estimation between arrivals in two seismic traces before and after a velocity perturbation is a crucial step in many seismic methods. The accuracy of the estimated velocity perturbation location and amplitude depends on this time shift. Windowed cross-correlation and trace stretching are two techniques commonly used to estimate local time shifts in seismic signals. In the work presented here we implement Dynamic Time Warping (DTW) to estimate the warping function - a vector of local time shifts that globally minimizes the misfit between two seismic traces. We compare all three methods using acoustic numerical experiments. We show that DTW is comparable to or better than the other two methods when the velocity perturbation is homogeneous and the signal-to-noise ratio is high. When the signal-to-noise ratio is low, we find that DTW and windowed cross-correlation are more accurate than the stretching method. Finally, we show that the DTW algorithm has good time resolution when identifying small differences in the seismic traces for a model with an isolated velocity perturbation. These results impact current methods that utilize not only time shifts between (multiply) scattered waves, but also amplitude and decoherence measurements.
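
    The sketch below is a plain dynamic-programming DTW between two synthetic wavelets, reading a local lag (in samples) off the optimal warping path; it illustrates the algorithm referred to above in its textbook form, not the seismic implementation, and the test signals are illustrative.

    ```python
    import numpy as np

    def dtw_shifts(a, b):
        """
        Plain dynamic time warping between traces a and b: build the cumulative
        cost matrix, backtrack the optimal path, and read off a local time shift
        (in samples) for every sample of `a`.
        """
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = (a[i - 1] - b[j - 1]) ** 2
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        # Backtrack from (n, m) to (1, 1)
        i, j, path = n, m, []
        while i > 0 and j > 0:
            path.append((i - 1, j - 1))
            step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
            if step == 0:
                i, j = i - 1, j - 1
            elif step == 1:
                i -= 1
            else:
                j -= 1
        shifts = np.zeros(n)
        for i_idx, j_idx in reversed(path):
            shifts[i_idx] = j_idx - i_idx       # local lag of b relative to a
        return shifts

    # A 3-sample delay recovered from two otherwise identical wavelets
    t = np.arange(200)
    a = np.exp(-0.5 * ((t - 100) / 8.0) ** 2)
    b = np.exp(-0.5 * ((t - 103) / 8.0) ** 2)
    print(np.round(dtw_shifts(a, b)[90:110]))
    ```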

  3. Numerical Analysis of Hydrodynamics for Bionic Oscillating Hydrofoil Based on Panel Method.

    PubMed

    Xue, Gang; Liu, Yanjun; Zhang, Muqun; Ding, Hongpeng

    2016-01-01

    The kinematics model based on the Slender-Body theory is proposed from the bionic movement of real fish. The Panel method is applied innovatively to the hydrodynamic performance analysis, with the Gauss-Seidel method additionally used to solve the Navier-Stokes equations, so that the flexible deformation of the fish in swimming is evaluated accurately while satisfying the boundary conditions. A physical prototype mimicking the shape of a tuna was developed with rapid prototyping manufacturing technology. The hydrodynamic performance of a rigid oscillating hydrofoil is analyzed with the proposed method, and it shows good agreement with the cases analyzed by the commercial software Fluent and with experimental data from a robofish. Furthermore, the hydrodynamic performance of a coupled hydrofoil, which consists of a flexible fish body and a rigid caudal fin, is analyzed with the proposed method. It shows that the caudal fin has a great influence on trailing vortex shedding and that the phase angle is the key factor for hydrodynamic performance. It is verified that the shape of the trailing vortex is similar to the image of the motion curve at the trailing edge, under the assumption of a linear vortex plane and the condition of small downwash velocity. The numerical analysis of hydrodynamics for bionic movement based on the Panel method has certain value for revealing the fish swimming mechanism. PMID:27578959

  4. Implicit spectrally-accurate method for moving boundary problems using immersed boundary conditions concept

    NASA Astrophysics Data System (ADS)

    Husain, S. Z.; Floryan, J. M.

    2008-04-01

    A fully implicit, spectral algorithm for the analysis of moving boundary problems is described. The algorithm is based on the concept of immersed boundary conditions (IBC), i.e., the computational domain is fixed while the time-dependent physical domain is submerged inside the computational domain, and it is described in the context of diffusion-type problems. The physical conditions along the edges of the physical domain are treated as internal constraints. The method eliminates the need for adaptive grid generation that follows the evolution of the physical domain and provides sharp resolution of the location of the boundary. Various tests confirm the spectral accuracy in space and the first- and second-order accuracy in time. The computational cost advantage of the IBC method as compared with the more traditional algorithm based on the mapping concept is demonstrated.

  5. A rapid and accurate method for calculation of stratospheric photolysis rates with molecular scattering

    NASA Technical Reports Server (NTRS)

    Boughner, Robert E.

    1986-01-01

    A method for calculating the photodissociation rates needed for photochemical modeling of the stratosphere, which includes the effects of molecular scattering, is described. The procedure is based on Sokolov's method of averaging functional correction. The radiation model and approximations used to calculate the radiation field are examined. The approximated diffuse fields and photolysis rates are compared with exact data. It is observed that the approximate solutions differ from the exact result by 10 percent or less at altitudes above 15 km; the photolysis rates differ from the exact rates by less than 5 percent for altitudes above 10 km and all zenith angles, and by less than 1 percent for altitudes above 15 km.

  6. Computer-implemented system and method for automated and highly accurate plaque analysis, reporting, and visualization

    NASA Technical Reports Server (NTRS)

    Kemp, James Herbert (Inventor); Talukder, Ashit (Inventor); Lambert, James (Inventor); Lam, Raymond (Inventor)

    2008-01-01

    A computer-implemented system and method of intra-oral analysis for measuring plaque removal is disclosed. The system includes hardware for real-time image acquisition and software to store the acquired images on a patient-by-patient basis. The system implements algorithms to segment teeth of interest from surrounding gum, and uses a real-time image-based morphing procedure to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The system integrates these components into a single software suite with an easy-to-use graphical user interface (GUI) that allows users to do an end-to-end run of a patient record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image.

  7. A Variable Coefficient Method for Accurate Monte Carlo Simulation of Dynamic Asset Price

    NASA Astrophysics Data System (ADS)

    Li, Yiming; Hung, Chih-Young; Yu, Shao-Ming; Chiang, Su-Yun; Chiang, Yi-Hui; Cheng, Hui-Wen

    2007-07-01

    In this work, we propose an adaptive Monte Carlo (MC) simulation technique to compute the sample paths of the dynamical asset price. In contrast to conventional MC simulation with constant drift and volatility (μ,σ), our MC simulation is performed with variable coefficient methods for (μ,σ) in the solution scheme, where the explored dynamic asset pricing model starts from the formulation of geometric Brownian motion. With the method of simultaneously updated (μ,σ), more than 5,000 runs of MC simulation are performed to fulfill the basic accuracy requirement of the large-scale computation and to suppress the statistical variance. Daily changes of the stock market indices in Taiwan and Japan are investigated and analyzed.
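
    The sketch below generates Monte Carlo sample paths of geometric Brownian motion with time-varying drift and volatility, which is the baseline construction referred to above; the particular (μ,σ) update rules of the paper are not reproduced, and the coefficient profiles, path count, and step count are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    S0, T, N, n_paths = 100.0, 1.0, 250, 5000
    dt = T / N
    t = np.linspace(0.0, T, N + 1)

    mu = 0.05 + 0.02 * np.sin(2 * np.pi * t)   # assumed time-varying drift
    sigma = 0.2 + 0.05 * t                     # assumed time-varying volatility

    # Exact per-step GBM update with the locally frozen coefficients (mu_n, sigma_n)
    S = np.full(n_paths, S0)
    for n in range(N):
        Z = rng.standard_normal(n_paths)
        S *= np.exp((mu[n] - 0.5 * sigma[n] ** 2) * dt + sigma[n] * np.sqrt(dt) * Z)

    print("mean terminal price:", S.mean(), " std:", S.std())
    ```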

  8. Exact kinetic energy enables accurate evaluation of weak interactions by the FDE-vdW method

    SciTech Connect

    Sinha, Debalina; Pavanello, Michele

    2015-08-28

    The correlation energy of interaction is an elusive and sought-after interaction between molecular systems. By partitioning the response function of the system into subsystem contributions, the Frozen Density Embedding (FDE)-vdW method provides a computationally amenable nonlocal correlation functional based on the adiabatic connection fluctuation dissipation theorem applied to subsystem density functional theory. In reproducing potential energy surfaces of weakly interacting dimers, we show that FDE-vdW, either employing semilocal or exact nonadditive kinetic energy functionals, is in quantitative agreement with high-accuracy coupled cluster calculations (overall mean unsigned error of 0.5 kcal/mol). When employing the exact kinetic energy (which we term the Kohn-Sham (KS)-vdW method), the binding energies are generally closer to the benchmark, and the energy surfaces are also smoother.

  9. EEMD based pitch evaluation method for accurate grating measurement by AFM

    NASA Astrophysics Data System (ADS)

    Li, Changsheng; Yang, Shuming; Wang, Chenying; Jiang, Zhuangde

    2016-09-01

    The pitch measurement and AFM calibration precision are significantly influenced by the grating pitch evaluation method. This paper presents an ensemble empirical mode decomposition (EEMD) based pitch evaluation method to relieve the accuracy deterioration caused by high- and low-frequency components of the scanning profile during pitch evaluation. The simulation analysis shows that the application of EEMD can improve the pitch accuracy of the FFT-FT algorithm. The pitch error was small when the iteration number of the FFT-FT algorithm was 8. The AFM measurement of the 500 nm-pitch one-dimensional grating shows that the EEMD based pitch evaluation method could improve the pitch precision, especially the grating line position precision, and greatly expand the applicability of the gravity center algorithm when particles and impression marks are distributed on the sample surface. The measurement indicates that the nonlinearity was stable, and that the nonlinearity of the x axis and of forward scanning was much smaller than that of their counterparts. Finally, a detailed pitch measurement uncertainty evaluation model suitable for commercial AFMs was demonstrated and a pitch uncertainty in the sub-nanometer range was achieved. The pitch uncertainty was reduced by about 10% by EEMD.

  10. Generalization of the time-dependent numerical renormalization group method to finite temperatures and general pulses

    NASA Astrophysics Data System (ADS)

    Nghiem, H. T. M.; Costi, T. A.

    2014-02-01

    The time-dependent numerical renormalization group (TDNRG) method [Anders et al., Phys. Rev. Lett. 95, 196801 (2005), 10.1103/PhysRevLett.95.196801] offers the prospect of investigating in a nonperturbative manner the time dependence of local observables of interacting quantum impurity models at all time scales following a quantum quench. Here, we present a generalization of this method to arbitrary finite temperature by making use of the full density matrix approach [Weichselbaum et al., Phys. Rev. Lett. 99, 076402 (2007), 10.1103/PhysRevLett.99.076402]. We show that all terms in the projected full density matrix ρ_{i→f} = ρ_{++} + ρ_{--} + ρ_{+-} + ρ_{-+} appearing in the time evolution of a local observable may be evaluated in closed form at finite temperature, with ρ_{+-} = ρ_{-+} = 0. The expression for ρ_{--} is shown to be finite at finite temperature, becoming negligible only in the limit of vanishing temperatures. We prove that this approach recovers the short-time limit for the expectation value of a local observable exactly at arbitrary temperatures. In contrast, the corresponding long-time limit is recovered exactly only for a continuous bath, i.e., when the logarithmic discretization parameter Λ → 1+. Since the numerical renormalization group approach breaks down in this limit, and calculations have to be carried out at Λ > 1, the long-time behavior following an arbitrary quantum quench has a finite error, which poses an obstacle for the method, e.g., in its application to the scattering-states numerical renormalization group method for describing steady-state nonequilibrium transport through correlated impurities [Anders, Phys. Rev. Lett. 101, 066804 (2008), 10.1103/PhysRevLett.101.066804]. We suggest a way to overcome this problem by noting that the time dependence, in general, and the long-time limit, in particular, become increasingly more accurate on reducing the size of the quantum quench. This suggests an improved generalized TDNRG approach in which the system is time

  11. Accurate and quick calibration method for polarization-modulation spectroscopy using an ac-modulated polarizing undulator

    SciTech Connect

    Tanaka, Masahito; Yagi-Watanabe, Kazutoshi; Kaneko, Fusae; Nakagawa, Kazumichi

    2008-08-15

    An accurate calibration method in which an ac-modulated polarizing undulator is used for polarization modulation spectroscopy such as circular dichroism (CD) and linear dichroism (LD) has been proposed and successfully applied to vacuum ultraviolet (vuv) CD and LD spectra measured at beamline BL-5B in the electron storage ring, TERAS, at AIST. This calibration method employs an undulator-modulation spectroscopic method with a multireflection polarimeter, and it uses electronic and optical elements identical to those used for the CD and LD measurements. This method regards the polarimeter as a standard sample for the CD and LD measurements in the vuv region in which a standard sample has not yet been established. The calibration factors for the CD and LD spectra are obtained over a wide range of wavelengths, from 120 to 230 nm, at TERAS BL-5B. The calibrated CD and LD spectra measured at TERAS exhibit good agreement with the standard spectra for wavelengths greater than 170 nm; the mean differences between the standard and calibrated CD and LD spectra are approximately 7% and 4%, respectively. This method enables a remarkable reduction in the experimental time, from approximately 1 h to less than 10 min that is sufficient to observe the storage-ring current dependence of the calibration factors. This method can be applied to the calibration of vuv-CD spectra measured using a conventional photoelastic modulator and for performing an accurate analysis of protein secondary structures.

  12. Accurate vibrational frequencies using the self-consistent-charge density-functional tight-binding method

    NASA Astrophysics Data System (ADS)

    Małolepsza, Edyta; Witek, Henryk A.; Morokuma, Keiji

    2005-09-01

    An optimization technique for enhancing the quality of repulsive two-body potentials of the self-consistent-charge density-functional tight-binding (SCC-DFTB) method is presented and tested. The new, optimized potentials allow for significant improvement of calculated harmonic vibrational frequencies. Mean absolute deviation from experiment computed for a group of 14 hydrocarbons is reduced from 59.0 to 33.2 cm^-1 and maximal absolute deviation, from 436.2 to 140.4 cm^-1. A drawback of the new family of potentials is a lower quality of reproduced geometrical and energetic parameters.

  13. High-order accurate difference schemes for solving gasdynamic equations by the Godunov method with antidiffusion

    NASA Astrophysics Data System (ADS)

    Moiseev, N. Ya.; Silant'eva, I. Yu.

    2009-05-01

    A technique is proposed for improving the accuracy of the Godunov method as applied to gasdynamic simulations in one dimension. The underlying idea is the reconstruction of fluxes across cell boundaries (“large” values) by using antidiffusion corrections, which are obtained by analyzing the differential approximation of the schemes. In contrast to other approaches, the reconstructed values are not the initial data but rather the large values calculated by solving the Riemann problem. The approach is efficient and yields higher-accuracy difference schemes with high resolution.
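
    To illustrate the general idea of correcting a first-order Godunov (upwind) flux with a limited antidiffusive term, the sketch below advects a smooth profile with a minmod-limited flux correction for the linear advection equation; this is a generic textbook construction, not the authors' gasdynamic scheme, and the grid, CFL number, and initial profile are illustrative assumptions.

    ```python
    import numpy as np

    def minmod(a, b):
        """Minmod limiter: the smaller-magnitude argument if signs agree, else 0."""
        return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    # Linear advection q_t + u q_x = 0 with u > 0 on a periodic domain
    u, L, N, cfl = 1.0, 1.0, 200, 0.5
    dx = L / N
    dt = cfl * dx / u
    x = (np.arange(N) + 0.5) * dx
    q = np.exp(-200.0 * (x - 0.3) ** 2)             # initial profile

    for _ in range(int(0.4 / dt)):
        dq = np.roll(q, -1) - q                     # jump q_{i+1} - q_i
        slope = minmod(dq, np.roll(dq, 1))          # limited jump at interface i+1/2
        flux = u * q + 0.5 * u * (1.0 - cfl) * slope  # upwind flux + antidiffusive part
        q = q - dt / dx * (flux - np.roll(flux, 1))

    print("peak after advection:", q.max())        # first-order upwind alone is far more diffusive
    ```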

  14. A novel method for more accurately mapping the surface temperature of ultrasonic transducers.

    PubMed

    Axell, Richard G; Hopper, Richard H; Jarritt, Peter H; Oxley, Chris H

    2011-10-01

    This paper introduces a novel method for measuring the surface temperature of ultrasound transducer membranes and compares it with two standard measurement techniques. The surface temperature rise was measured as defined in the IEC Standard 60601-2-37. The measurement techniques were (i) thermocouple, (ii) thermal camera and (iii) novel infra-red (IR) "micro-sensor." Peak transducer surface measurements taken with the thermocouple and thermal camera were -3.7 ± 0.7 (95% CI)°C and -4.3 ± 1.8 (95% CI)°C, respectively, within the limits of the IEC Standard. Measurements taken with the novel IR micro-sensor exceeded these limits by 3.3 ± 0.9 (95% CI)°C. The ambiguity between our novel method and the standard techniques could have direct patient safety implications because the IR micro-sensor measurements were beyond set limits. The spatial resolution of the measurement technique is not well defined in the IEC Standard and this has to be taken into consideration when selecting which measurement technique is used to determine the maximum surface temperature. PMID:21856072

  15. Simple and accurate methods for quantifying deformation, disruption, and development in biological tissues

    PubMed Central

    Boyle, John J.; Kume, Maiko; Wyczalkowski, Matthew A.; Taber, Larry A.; Pless, Robert B.; Xia, Younan; Genin, Guy M.; Thomopoulos, Stavros

    2014-01-01

    When mechanical factors underlie growth, development, disease or healing, they often function through local regions of tissue where deformation is highly concentrated. Current optical techniques to estimate deformation can lack precision and accuracy in such regions due to challenges in distinguishing a region of concentrated deformation from an error in displacement tracking. Here, we present a simple and general technique for improving the accuracy and precision of strain estimation and an associated technique for distinguishing a concentrated deformation from a tracking error. The strain estimation technique improves accuracy relative to other state-of-the-art algorithms by directly estimating strain fields without first estimating displacements, resulting in a very simple method and low computational cost. The technique for identifying local elevation of strain enables for the first time the successful identification of the onset and consequences of local strain concentrating features such as cracks and tears in a highly strained tissue. We apply these new techniques to demonstrate a novel hypothesis in prenatal wound healing. More generally, the analytical methods we have developed provide a simple tool for quantifying the appearance and magnitude of localized deformation from a series of digital images across a broad range of disciplines. PMID:25165601

  16. An Accurate Calibration Method Based on Velocity in a Rotational Inertial Navigation System

    PubMed Central

    Zhang, Qian; Wang, Lei; Liu, Zengjun; Feng, Peide

    2015-01-01

    Rotation modulation is an effective method to enhance the accuracy of an inertial navigation system (INS) by modulating the gyroscope drifts and accelerometer bias errors into periodically varying components. A typical RINS rotates the inertial measurement unit (IMU) about the vertical axis so that the horizontal sensors’ errors are modulated; however, the azimuth angle error is closely related to the vertical gyro drift, which should also be modulated effectively. In this paper, a new rotation strategy for a dual-axis rotational INS (RINS) is proposed in which the drifts of all three gyros can be modulated. Experimental results from a real dual-axis RINS demonstrate that the maximum azimuth angle error is decreased from 0.04° to less than 0.01° during 1 h. Most importantly, the change of rotation strategy leads to some additional errors in the velocity, which is unacceptable in a high-precision INS. The paper then studies the basic reason underlying the horizontal velocity errors in detail, and a corresponding new calibration method is designed. Experimental results show that after calibration and compensation, the fluctuation and stages in the velocity curve disappear and the velocity precision is improved. PMID:26225983

  17. Simple and accurate methods for quantifying deformation, disruption, and development in biological tissues.

    PubMed

    Boyle, John J; Kume, Maiko; Wyczalkowski, Matthew A; Taber, Larry A; Pless, Robert B; Xia, Younan; Genin, Guy M; Thomopoulos, Stavros

    2014-11-01

    When mechanical factors underlie growth, development, disease or healing, they often function through local regions of tissue where deformation is highly concentrated. Current optical techniques to estimate deformation can lack precision and accuracy in such regions due to challenges in distinguishing a region of concentrated deformation from an error in displacement tracking. Here, we present a simple and general technique for improving the accuracy and precision of strain estimation and an associated technique for distinguishing a concentrated deformation from a tracking error. The strain estimation technique improves accuracy relative to other state-of-the-art algorithms by directly estimating strain fields without first estimating displacements, resulting in a very simple method and low computational cost. The technique for identifying local elevation of strain enables for the first time the successful identification of the onset and consequences of local strain concentrating features such as cracks and tears in a highly strained tissue. We apply these new techniques to demonstrate a novel hypothesis in prenatal wound healing. More generally, the analytical methods we have developed provide a simple tool for quantifying the appearance and magnitude of localized deformation from a series of digital images across a broad range of disciplines. PMID:25165601

  18. An Accurate Calibration Method Based on Velocity in a Rotational Inertial Navigation System.

    PubMed

    Zhang, Qian; Wang, Lei; Liu, Zengjun; Feng, Peide

    2015-01-01

    Rotation modulation is an effective method to enhance the accuracy of an inertial navigation system (INS) by modulating the gyroscope drifts and accelerometer bias errors into periodically varying components. A typical RINS rotates the inertial measurement unit (IMU) about the vertical axis so that the horizontal sensors' errors are modulated; however, the azimuth angle error is closely related to the vertical gyro drift, which should also be modulated effectively. In this paper, a new rotation strategy for a dual-axis rotational INS (RINS) is proposed in which the drifts of all three gyros can be modulated. Experimental results from a real dual-axis RINS demonstrate that the maximum azimuth angle error is decreased from 0.04° to less than 0.01° during 1 h. Most importantly, the change of rotation strategy leads to some additional errors in the velocity, which is unacceptable in a high-precision INS. The paper then studies the basic reason underlying the horizontal velocity errors in detail, and a corresponding new calibration method is designed. Experimental results show that after calibration and compensation, the fluctuation and stages in the velocity curve disappear and the velocity precision is improved. PMID:26225983

  19. A method for the accurate and smooth approximation of standard thermodynamic functions

    NASA Astrophysics Data System (ADS)

    Coufal, O.

    2013-01-01

    A method is proposed for the calculation of approximations of standard thermodynamic functions. The method is consistent with the physical properties of standard thermodynamic functions. This means that the approximation functions are, in contrast to the hitherto used approximations, continuous and smooth in every temperature interval in which no phase transformations take place. The calculation algorithm was implemented by the SmoothSTF program in the C++ language, which is part of this paper. Program summary: Program title: SmoothSTF. Catalogue identifier: AENH_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENH_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 3807. No. of bytes in distributed program, including test data, etc.: 131965. Distribution format: tar.gz. Programming language: C++. Computer: Any computer with gcc version 4.3.2 compiler. Operating system: Debian GNU Linux 6.0; the program can be run in operating systems in which the gcc compiler can be installed, see http://gcc.gnu.org/install/specific.html. RAM: 256 MB are sufficient for the table of standard thermodynamic functions with 500 lines. Classification: 4.9. Nature of problem: Standard thermodynamic functions (STF) of individual substances are given by thermal capacity at constant pressure, entropy and enthalpy. STF are continuous and smooth in every temperature interval in which no phase transformations take place. The temperature dependence of STF as expressed by the table of its values is for further application approximated by temperature functions. In the paper, a method is proposed for calculating approximation functions which, in contrast to the hitherto used approximations, are continuous and smooth in every temperature interval. Solution method: The approximation functions are

  20. An automated method for analysis of microcirculation videos for accurate assessment of tissue perfusion

    PubMed Central

    2012-01-01

    Background Imaging of the human microcirculation in real-time has the potential to detect injuries and illnesses that disturb the microcirculation at earlier stages and may improve the efficacy of resuscitation. Despite advanced imaging techniques to monitor the microcirculation, there are currently no tools for the near real-time analysis of the videos produced by these imaging systems. An automated system tool that can extract microvasculature information and monitor changes in tissue perfusion quantitatively might be invaluable as a diagnostic and therapeutic endpoint for resuscitation. Methods The experimental algorithm automatically extracts the microvascular network and quantitatively measures changes in the microcirculation. There are two main parts in the algorithm: video processing and vessel segmentation. Microcirculatory videos are first stabilized in a video processing step to remove motion artifacts. In the vessel segmentation process, the microvascular network is extracted using multiple level thresholding and pixel verification techniques. Threshold levels are selected using histogram information of a set of training video recordings. Pixel-by-pixel differences are calculated throughout the frames to identify active blood vessels and capillaries with flow. Results Sublingual microcirculatory videos are recorded from anesthetized swine at baseline and during hemorrhage using a hand-held Side-stream Dark Field (SDF) imaging device to track changes in the microvasculature during hemorrhage. Automatically segmented vessels in the recordings are analyzed visually and the functional capillary density (FCD) values calculated by the algorithm are compared for both healthy baseline and hemorrhagic conditions. These results were compared to independently made FCD measurements using a well-known semi-automated method. Results of the fully automated algorithm demonstrated a significant decrease in FCD values. Similar, but more variable FCD values were calculated

  1. A More Accurate and Efficient Technique Developed for Using Computational Methods to Obtain Helical Traveling-Wave Tube Interaction Impedance

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    1999-01-01

    The phenomenal growth of commercial communications has created a great demand for traveling-wave tube (TWT) amplifiers. Although the helix slow-wave circuit remains the mainstay of the TWT industry because of its exceptionally wide bandwidth, until recently it has been impossible to accurately analyze a helical TWT using its exact dimensions because of the complexity of its geometrical structure. For the first time, an accurate three-dimensional helical model was developed that allows accurate prediction of TWT cold-test characteristics including operating frequency, interaction impedance, and attenuation. This computational model, which was developed at the NASA Lewis Research Center, allows TWT designers to obtain a more accurate value of interaction impedance than is possible using experimental methods. Obtaining helical slow-wave circuit interaction impedance is an important part of the design process for a TWT because it is related to the gain and efficiency of the tube. This impedance cannot be measured directly; thus, conventional methods involve perturbing a helical circuit with a cylindrical dielectric rod placed on the central axis of the circuit and obtaining the difference in resonant frequency between the perturbed and unperturbed circuits. A mathematical relationship has been derived between this frequency difference and the interaction impedance (ref. 1). However, because of the complex configuration of the helical circuit, deriving this relationship involves several approximations. In addition, this experimental procedure is time-consuming and expensive, but until recently it was widely accepted as the most accurate means of determining interaction impedance. The advent of an accurate three-dimensional helical circuit model (ref. 2) made it possible for Lewis researchers to fully investigate standard approximations made in deriving the relationship between measured perturbation data and interaction impedance. The most prominent approximations made

  2. An Inexpensive, Stable, and Accurate Relative Humidity Measurement Method for Challenging Environments

    PubMed Central

    Zhang, Wei; Ma, Hong; Yang, Simon X.

    2016-01-01

    In this research, an improved psychrometer is developed to solve practical issues arising in the relative humidity measurement of challenging drying environments for meat manufacturing in agricultural and agri-food industries. The design in this research focused on the structure of the improved psychrometer, signal conversion, and calculation methods. The experimental results showed the effect of varying psychrometer structure on relative humidity measurement accuracy. An industrial application to dry-cured meat products demonstrated the effective performance of the improved psychrometer being used as a relative humidity measurement sensor in meat-drying rooms. In a drying environment for meat manufacturing, the achieved measurement accuracy for relative humidity using the improved psychrometer was ±0.6%. The system test results showed that the improved psychrometer can provide reliable and long-term stable relative humidity measurements with high accuracy in the drying system of meat products. PMID:26999161
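
    For readers unfamiliar with psychrometry, the conversion from a dry-bulb/wet-bulb temperature pair to relative humidity typically follows the psychrometric formula sketched below. The Magnus coefficients and the psychrometer constant are common textbook values and are assumptions here, not the calibration used by the authors.

```python
import math

def saturation_vapor_pressure(t_c):
    """Magnus-type saturation vapor pressure in hPa (typical textbook coefficients)."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def relative_humidity(t_dry, t_wet, pressure_hpa=1013.25, a=6.62e-4):
    """Psychrometric formula: e = e_s(T_wet) - A * p * (T_dry - T_wet); RH = e / e_s(T_dry)."""
    e = saturation_vapor_pressure(t_wet) - a * pressure_hpa * (t_dry - t_wet)
    return 100.0 * e / saturation_vapor_pressure(t_dry)

print(relative_humidity(20.0, 15.0))  # roughly 59% RH for this dry/wet-bulb pair
```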

  3. A method to measure the density of seawater accurately to the level of 10-6

    NASA Astrophysics Data System (ADS)

    Schmidt, Hannes; Wolf, Henning; Hassel, Egon

    2016-04-01

    A substitution method to measure seawater density relative to pure water density using vibrating tube densimeters was realized and validated. Standard uncertainties of 1 g m-3 at atmospheric pressure, 10 g m-3 up to 10 MPa, and 20 g m-3 to 65 MPa in the temperature range of 5 °C to 35 °C and for salt contents up to 35 g kg-1 were achieved. The realization was validated by comparison measurements with a hydrostatic weighing apparatus for atmospheric pressure. For high pressures, literature values of seawater compressibility were compared with substitution measurements of the realized apparatus.
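
    The substitution principle for a vibrating-tube densimeter can be written in one line: with the usual calibration ρ = A τ² + B, measuring the sample and pure water back to back under the same conditions cancels the offset B. The instrument constant and oscillation periods below are made-up illustrative numbers, not values from the apparatus described.

```python
# Substitution method for a vibrating-tube densimeter (illustrative numbers):
# rho = A * tau**2 + B, so rho_sample = rho_water + A * (tau_sample**2 - tau_water**2).
A = 1.6e9              # kg m^-3 s^-2, instrument constant from calibration (assumed)
rho_water = 998.2      # kg m^-3, reference pure-water density at the same T and p
tau_water = 2.600e-3   # s, oscillation period with pure water
tau_sample = 2.603e-3  # s, oscillation period with the seawater sample

rho_sample = rho_water + A * (tau_sample**2 - tau_water**2)
print(f"sample density: {rho_sample:.2f} kg/m^3")   # about 1023 kg/m^3 here
```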

  4. Determining the performance of energy wheels: Part 1 -- Experimental and numerical methods

    SciTech Connect

    Simonson, C.J.; Ciepliski, D.L.; Besant, R.W.

    1999-07-01

    Measuring and modeling the performance of energy recovery devices is difficult and, in some cases, may result in unacceptably high uncertainties. In this paper, controlled laboratory experiments and a detailed numerical model are presented, which, together with uncertainty analysis, can quantify the performance of energy wheels. A numerical model that has been developed from physical principles and an experimental method for determining the performance of energy wheels with acceptable uncertainties are detailed. Included is a pre-test, during-test, and post-test uncertainty analysis that allows the experimenter to accurately estimate precision (random) and bias (fixed) errors before, during, and after each experiment using energy and mass balances on the air-to-air energy recovery device as well as the characteristics of each instrument and the data acquisition system. A comprehensive set of measured data for the sensible, latent, and total effectiveness of an energy wheel is compared with the corresponding simulation results in Part 2 of this paper.

  5. Numerical Simulation of Dynamic Contact Angles and Contact Lines in Multiphase Flows using Level Set Method

    NASA Astrophysics Data System (ADS)

    Pendota, Premchand

    Many physical phenomena and industrial applications involve multiphase fluid flows and hence it is of high importance to be able to simulate various aspects of these flows accurately. The Dynamic Contact Angles (DCA) and the contact lines at the wall boundaries are a couple of such important aspects. In the past few decades, many mathematical models have been developed for predicting the contact angles of the interface with the wall boundary under various flow conditions. These models are used to incorporate the physics of DCA and contact line motion in numerical simulations using various interface capturing/tracking techniques. In the current thesis, a simple approach to incorporate the static and dynamic contact angle boundary conditions using the level set method is developed and implemented in the multiphase CFD codes LIT (Level set Interface Tracking) (Herrmann (2008)) and NGA (flow solver) (Desjardins et al (2008)). Various DCA models and associated boundary conditions are reviewed. In addition, numerical aspects such as the occurrence of a stress singularity at the contact lines and grid convergence of the macroscopic interface shape are dealt with in the context of the level set approach.

  6. An accurate Rb density measurement method for a plasma wakefield accelerator experiment using a novel Rb reservoir

    NASA Astrophysics Data System (ADS)

    Öz, E.; Batsch, F.; Muggli, P.

    2016-09-01

    A method to accurately measure the density of Rb vapor is described. We plan on using this method for the Advanced Wakefield (AWAKE) (Assmann et al., 2014 [1]) project at CERN, which will be the world's first proton-driven plasma wakefield experiment. The method is similar to the hook (Marlow, 1967 [2]) method and has been described in great detail in the work by Hill et al. (1986) [3]. In this method a cosine fit is applied to the interferogram to obtain a relative accuracy on the order of 1% for the vapor density-length product. A single-mode, fiber-based, Mach-Zehnder interferometer will be built and used near the ends of the 10 meter-long AWAKE plasma source to be able to make accurate relative density measurements between these two locations. This can then be used to infer the vapor density gradient along the AWAKE plasma source and also change it to the value desired for the plasma wakefield experiment. Here we describe the plan in detail and show preliminary results obtained using a prototype 8 cm long novel Rb vapor cell.

  7. A New Method for Accurate Signal Processing in Measurements of Elemental Mercury Vapor by Atomic Fluorescence Spectrophotometry

    NASA Astrophysics Data System (ADS)

    Ambrose, J. L., II; Jaffe, D. A.

    2015-12-01

    The most widely used method for quantifying atmospheric Hg is gold amalgamation pre-concentration, followed by thermal desorption (TD) and detection via atomic fluorescence spectrophotometry (AFS). Most AFS-based atmospheric Hg measurements are carried out using commercial analyzers manufactured by Tekran® Instruments Corp. (instrument models 2537A and 2537B). A generally overlooked and poorly characterized source of analytical uncertainty in these measurements is the method by which the raw Hg AFS signal is processed. In nearly all applications of Tekran® analyzers for atmospheric Hg measurements, researchers rely upon embedded software which automatically integrates the Hg TD peaks. However, Swartzendruber et al. (2009; doi:10.1016/j.atmosenv.2009.02.063) demonstrated that the Hg TD peaks can be more accurately defined, and overall measurement precision increased, by post-processing the raw Hg AFS signal; improvements in measurement accuracy and precision were shown to be more significant at lower sample loadings. Despite these findings, a standardized method for signal post-processing has not been presented. To better characterize uncertainty associated with Tekran® based atmospheric Hg measurements, and to facilitate more widespread adoption of an accurate, standardized signal processing method, we developed a new, distributable Virtual Instrument (VI) which performs semi-automated post-processing of the raw Hg AFS signal from the Tekran® analyzers. Here we describe the key features of the VI and compare its performance to that of the Tekran® signal processing method.
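
    Post-processing a raw AFS signal essentially means locating the thermal-desorption peak, estimating the baseline, and integrating the baseline-corrected trace. The sketch below shows one generic way to do this with NumPy on a synthetic peak; it is not the Virtual Instrument described by the authors, and the baseline window is an arbitrary assumption.

```python
import numpy as np

def integrate_peak(t, signal, baseline_window=50):
    """Baseline-correct and integrate a single desorption peak (generic sketch)."""
    baseline = np.median(signal[:baseline_window])  # assumes the peak starts later
    corrected = signal - baseline
    corrected[corrected < 0] = 0.0                  # clip noise below the baseline
    return np.trapz(corrected, t)                   # peak area, signal-units * s

# Synthetic example: Gaussian peak on a flat, noisy baseline.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 600)
signal = 0.2 + 5.0 * np.exp(-((t - 30.0) / 2.0) ** 2) + rng.normal(0, 0.02, t.size)
print("peak area:", integrate_peak(t, signal))
```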

  8. AN ACCURATE NEW METHOD OF CALCULATING ABSOLUTE MAGNITUDES AND K-CORRECTIONS APPLIED TO THE SLOAN FILTER SET

    SciTech Connect

    Beare, Richard; Brown, Michael J. I.; Pimbblet, Kevin

    2014-12-20

    We describe an accurate new method for determining absolute magnitudes, and hence also K-corrections, that is simpler than most previous methods, being based on a quadratic function of just one suitably chosen observed color. The method relies on the extensive and accurate new set of 129 empirical galaxy template spectral energy distributions from Brown et al. A key advantage of our method is that we can reliably estimate random errors in computed absolute magnitudes due to galaxy diversity, photometric error and redshift error. We derive K-corrections for the five Sloan Digital Sky Survey filters and provide parameter tables for use by the astronomical community. Using the New York Value-Added Galaxy Catalog, we compare our K-corrections with those from kcorrect. Our K-corrections produce absolute magnitudes that are generally in good agreement with kcorrect. Absolute griz magnitudes differ by less than 0.02 mag and those in the u band by ∼0.04 mag. The evolution of rest-frame colors as a function of redshift is better behaved using our method, with relatively few galaxies being assigned anomalously red colors and a tight red sequence being observed across the whole 0.0 < z < 0.5 redshift range.
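
    The structure of such a method can be summarized compactly: the K-correction in a given band is approximated as a quadratic in a single observed color, with coefficients tabulated as functions of redshift. The schematic form below uses our own notation (a, b, d for the coefficients), which is an assumption about the presentation rather than a quote of the paper's parameterization.

```latex
% Schematic one-color, quadratic K-correction (notation assumed):
% m is the observed magnitude, DM(z) the distance modulus, and c one
% suitably chosen observed color; a, b, d are tabulated per filter as
% functions of redshift.
\begin{equation}
  M = m - \mathrm{DM}(z) - K(z, c), \qquad
  K(z, c) \simeq a(z) + b(z)\, c + d(z)\, c^{2}
\end{equation}
```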

  9. A novel model for diffusion based release kinetics using an inverse numerical method.

    PubMed

    Mohammadi, Hadi; Herzog, Walter

    2011-10-01

    We developed and analyzed an inverse numerical model based on Fick's second law for the dynamics of drug release. In contrast to previous models which required two state descriptions of diffusion for long- and short-term release processes, our model is valid for the entire release process. The proposed model may be used for identifying and reducing experimental errors associated with measurements of diffusion based release kinetics. Knowing the initial and boundary conditions, and assuming Fick's second law to be appropriate, we use the method of Lagrange multipliers along with least-squares algorithms to define a cost function which is discretized using finite difference methods and is optimized so as to minimize errors. Our model can describe diffusion based release kinetics for static and dynamic conditions as accurately as finite element methods, but results are obtained in a fraction of the CPU time. Our method can be widely used for drug release procedures and for tissue engineering/repair applications where oxygenation of cells residing within a matrix is important. PMID:21382735
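
    As context for the forward problem that such an inverse scheme wraps, a minimal explicit finite-difference discretization of Fick's second law in one dimension looks like the following. This is a generic sketch with arbitrary parameters and perfect-sink boundaries, not the authors' model.

```python
import numpy as np

# Explicit FTCS discretization of Fick's second law, dC/dt = D * d2C/dx2,
# with perfect-sink (zero-concentration) boundaries. Parameters are arbitrary.
D, L, nx = 1.0e-9, 1.0e-3, 101           # diffusivity (m^2/s), length (m), grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D                     # respects the stability limit dt <= dx^2 / (2 D)
c = np.zeros(nx)
c[nx // 2] = 1.0                         # initial "slab" of drug in the middle

for _ in range(2000):
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[0] = c[-1] = 0.0                   # perfect-sink boundaries

released = 1.0 - c.sum()                 # fraction released through the boundaries
print(f"released fraction: {released:.3f}")
```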

  10. Some variance reduction methods for numerical stochastic homogenization.

    PubMed

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. PMID:27002065
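
    One classical technique in this family is the use of antithetic variates. The toy sketch below shows the idea on a scalar expectation rather than on a corrector problem; it is only meant to convey why pairing each random draw with its mirror image reduces the estimator variance.

```python
import numpy as np

# Antithetic variates on a toy expectation E[f(Z)], Z ~ N(0, 1): pair each draw
# Z with -Z and average f(Z) and f(-Z). For monotone f this reduces the variance
# of the Monte Carlo estimator. (Generic illustration, not a corrector problem.)
rng = np.random.default_rng(1)
f = lambda z: np.exp(z)                       # toy quantity of interest
n = 100_000

z = rng.standard_normal(n)
plain = f(z)                                  # standard Monte Carlo samples
antithetic = 0.5 * (f(z) + f(-z))             # paired antithetic samples

print("plain MC      :", plain.mean(), "+/-", plain.std(ddof=1) / np.sqrt(n))
print("antithetic MC :", antithetic.mean(), "+/-", antithetic.std(ddof=1) / np.sqrt(n))
```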

  11. An Improved Numerical Integration Method for Springback Predictions

    NASA Astrophysics Data System (ADS)

    Ibrahim, R.; Smith, L. M.; Golovashchenko, Sergey F.

    2011-08-01

    In this investigation, the focus is on the springback of steel sheets in V-die air bending. A full replication of a numerical integration algorithm presented rigorously in [1] to predict the springback in air bending was performed and confirmed successfully. Algorithm alterations and extensions are proposed here. The altered approach used in solving the moment equation numerically resulted in springback values much closer to the trend presented by the experimental data. Although the investigation here was extended to use a more realistic work-hardening model, the differences in the springback values obtained with the two hardening models were almost negligible. The algorithm was extended to be applied to thin sheets down to 0.8 mm. Results show that this extension is possible as verified by FEA and other published experiments on TRIP steel sheets.

  12. Accurate hydrogen bond energies within the density functional tight binding method.

    PubMed

    Domínguez, A; Niehaus, T A; Frauenheim, T

    2015-04-01

    The density-functional-based tight-binding (DFTB) approach has been recently extended by incorporating one-center exchange-like terms in the expansion of the multicenter integrals. This goes beyond the Mulliken approximation and leads to a scheme which treats in a self-consistent way the fluctuations of the whole dual density matrix and not only its diagonal elements (Mulliken charges). To date, only the performance of this new formalism in reproducing excited-state properties has been assessed (Domínguez et al. J. Chem. Theory Comput., 2013, 9, 4901-4914). Here we study the effect of our corrections on the computation of hydrogen bond energies for water clusters and water-containing systems. The limitations of traditional DFTB in reproducing hydrogen bonds have often been acknowledged. We compare our results for a set of 22 small water clusters and water-containing systems as well as for five water hexadecamers to those obtained with the DFTB3 method. Additionally, we combine our extension with a third-order energy expansion in the charge fluctuations. Our results show that the new formalisms significantly improve upon the original DFTB. PMID:25763597

  13. DISPLAR: an accurate method for predicting DNA-binding sites on protein surfaces

    PubMed Central

    Tjong, Harianto; Zhou, Huan-Xiang

    2007-01-01

    Structural and physical properties of DNA provide important constraints on the binding sites formed on surfaces of DNA-targeting proteins. Characteristics of such binding sites may form the basis for predicting DNA-binding sites from the structures of proteins alone. Such an approach has been successfully developed for predicting protein–protein interfaces. Here this approach is adapted for predicting DNA-binding sites. We used a representative set of 264 protein–DNA complexes from the Protein Data Bank to analyze characteristics and to train and test a neural network predictor of DNA-binding sites. The input to the predictor consisted of PSI-blast sequence profiles and solvent accessibilities of each surface residue and 14 of its closest neighboring residues. Predicted DNA-contacting residues cover 60% of actual DNA-contacting residues and have an accuracy of 76%. This method significantly outperforms previous attempts at DNA-binding site prediction. Its application to the prion protein yielded a DNA-binding site that is consistent with recent NMR chemical shift perturbation data, suggesting that it can complement experimental techniques in characterizing protein–DNA interfaces. PMID:17284455

  14. Bacterial Cytological Profiling (BCP) as a Rapid and Accurate Antimicrobial Susceptibility Testing Method for Staphylococcus aureus

    PubMed Central

    Quach, D.T.; Sakoulas, G.; Nizet, V.; Pogliano, J.; Pogliano, K.

    2016-01-01

    Successful treatment of bacterial infections requires the timely administration of appropriate antimicrobial therapy. The failure to initiate the correct therapy in a timely fashion results in poor clinical outcomes, longer hospital stays, and higher medical costs. Current approaches to antibiotic susceptibility testing of cultured pathogens have key limitations ranging from long run times to dependence on prior knowledge of genetic mechanisms of resistance. We have developed a rapid antimicrobial susceptibility assay for Staphylococcus aureus based on bacterial cytological profiling (BCP), which uses quantitative fluorescence microscopy to measure antibiotic induced changes in cellular architecture. BCP discriminated between methicillin-susceptible (MSSA) and -resistant (MRSA) clinical isolates of S. aureus (n = 71) within 1–2 h with 100% accuracy. Similarly, BCP correctly distinguished daptomycin susceptible (DS) from daptomycin non-susceptible (DNS) S. aureus strains (n = 20) within 30 min. Among MRSA isolates, BCP further identified two classes of strains that differ in their susceptibility to specific combinations of beta-lactam antibiotics. BCP provides a rapid and flexible alternative to gene-based susceptibility testing methods for S. aureus, and should be readily adaptable to different antibiotics and bacterial species as new mechanisms of resistance or multidrug-resistant pathogens evolve and appear in mainstream clinical practice. PMID:26981574

  15. Blood Pressure over Height Ratios: Simple and Accurate Method of Detecting Elevated Blood Pressure in Children.

    PubMed

    Galescu, Ovidiu; George, Minu; Basetty, Sudhakar; Predescu, Iuliana; Mongia, Anil; Ten, Svetlana; Bhangoo, Amrit

    2012-01-01

    Background. Blood pressure (BP) percentiles in childhood are assessed according to age, gender, and height. Objective. To create a simple BP/height ratio for both systolic BP (SBP) and diastolic BP (DBP). To study the relationship between BP/height ratios and corresponding BP percentiles in children. Methods. We analyzed data on height and BP from 2006-2007 NHANES data. BP percentiles were calculated for 3775 children. Receiver-operating characteristic (ROC) curve analyses were performed to calculate sensitivity and specificity of BP/height ratios as diagnostic tests for elevated BP (>90%). Correlation analysis was performed between BP percentiles and BP/height ratios. Results. The average age was 12.54 ± 2.67 years. SBP/height and DBP/height ratios strongly correlated with SBP & DBP percentiles in both boys (P < 0.001, R(2) = 0.85, R(2) = 0.86) and girls (P < 0.001, R(2) = 0.85, R(2) = 0.90). The cutoffs of SBP/height and DBP/height ratios in boys were ≥0.75 and ≥0.46, respectively; in girls the ratios were ≥0.75 and ≥0.48, respectively with sensitivity and specificity in range of 83-100%. Conclusion. BP/height ratios are simple with high sensitivity and specificity to detect elevated BP in children. These ratios can be easily used in routine medical care of children. PMID:22577400

  16. Blood Pressure over Height Ratios: Simple and Accurate Method of Detecting Elevated Blood Pressure in Children

    PubMed Central

    Galescu, Ovidiu; George, Minu; Basetty, Sudhakar; Predescu, Iuliana; Mongia, Anil; Ten, Svetlana; Bhangoo, Amrit

    2012-01-01

    Background. Blood pressure (BP) percentiles in childhood are assessed according to age, gender, and height. Objective. To create a simple BP/height ratio for both systolic BP (SBP) and diastolic BP (DBP). To study the relationship between BP/height ratios and corresponding BP percentiles in children. Methods. We analyzed data on height and BP from 2006-2007 NHANES data. BP percentiles were calculated for 3775 children. Receiver-operating characteristic (ROC) curve analyses were performed to calculate sensitivity and specificity of BP/height ratios as diagnostic tests for elevated BP (>90%). Correlation analysis was performed between BP percentiles and BP/height ratios. Results. The average age was 12.54 ± 2.67 years. SBP/height and DBP/height ratios strongly correlated with SBP & DBP percentiles in both boys (P < 0.001, R2 = 0.85, R2 = 0.86) and girls (P < 0.001, R2 = 0.85, R2 = 0.90). The cutoffs of SBP/height and DBP/height ratios in boys were ≥0.75 and ≥0.46, respectively; in girls the ratios were ≥0.75 and ≥0.48, respectively with sensitivity and specificity in range of 83–100%. Conclusion. BP/height ratios are simple with high sensitivity and specificity to detect elevated BP in children. These ratios can be easily used in routine medical care of children. PMID:22577400

  17. Bacterial Cytological Profiling (BCP) as a Rapid and Accurate Antimicrobial Susceptibility Testing Method for Staphylococcus aureus.

    PubMed

    Quach, D T; Sakoulas, G; Nizet, V; Pogliano, J; Pogliano, K

    2016-02-01

    Successful treatment of bacterial infections requires the timely administration of appropriate antimicrobial therapy. The failure to initiate the correct therapy in a timely fashion results in poor clinical outcomes, longer hospital stays, and higher medical costs. Current approaches to antibiotic susceptibility testing of cultured pathogens have key limitations ranging from long run times to dependence on prior knowledge of genetic mechanisms of resistance. We have developed a rapid antimicrobial susceptibility assay for Staphylococcus aureus based on bacterial cytological profiling (BCP), which uses quantitative fluorescence microscopy to measure antibiotic induced changes in cellular architecture. BCP discriminated between methicillin-susceptible (MSSA) and -resistant (MRSA) clinical isolates of S. aureus (n = 71) within 1-2 h with 100% accuracy. Similarly, BCP correctly distinguished daptomycin susceptible (DS) from daptomycin non-susceptible (DNS) S. aureus strains (n = 20) within 30 min. Among MRSA isolates, BCP further identified two classes of strains that differ in their susceptibility to specific combinations of beta-lactam antibiotics. BCP provides a rapid and flexible alternative to gene-based susceptibility testing methods for S. aureus, and should be readily adaptable to different antibiotics and bacterial species as new mechanisms of resistance or multidrug-resistant pathogens evolve and appear in mainstream clinical practice. PMID:26981574

  18. An efficient method for accurate segmentation of LV in contrast-enhanced cardiac MR images

    NASA Astrophysics Data System (ADS)

    Suryanarayana K., Venkata; Mitra, Abhishek; Srikrishnan, V.; Jo, Hyun Hee; Bidesi, Anup

    2016-03-01

    Segmentation of the left ventricle (LV) in contrast-enhanced cardiac MR images is a challenging task because of high variability in the image intensity. This is due to a) wash-in and wash-out of the contrast agent over time and b) poor contrast around the epicardium (outer wall) region. Current approaches for segmentation of the endocardium (inner wall) usually involve application of a threshold within the region of interest, followed by refinement techniques like active contours. A limitation of this method is under-segmentation of the inner wall because of gradual loss of contrast at the wall boundary. On the other hand, the challenge in outer wall segmentation is the lack of reliable boundaries because of poor contrast. There are four main contributions in this paper to address the aforementioned issues. First, a seed image is selected using a variance-based approach on the 4D time-frame images, over which the initial endocardium and epicardium are segmented. Second, we propose a patch-based feature which overcomes the problem of gradual contrast loss for LV endocardium segmentation. Third, we propose a novel Iterative-Edge-Refinement (IER) technique for epicardium segmentation. Fourth, we propose a greedy search algorithm for propagating the initial contour segmented on the seed image across the other time-frame images. We have evaluated our technique on five contrast-enhanced 4D cardiac MR datasets comprising a total of 1097 images. The segmentation results for all 1097 images have been visually inspected by a clinical expert and have shown good accuracy.

  19. A hybrid method for efficient and accurate simulations of diffusion compartment imaging signals

    NASA Astrophysics Data System (ADS)

    Rensonnet, Gaëtan; Jacobs, Damien; Macq, Benoît; Taquet, Maxime

    2015-12-01

    Diffusion-weighted imaging is sensitive to the movement of water molecules through the tissue microstructure and can therefore be used to gain insight into the tissue cellular architecture. While the diffusion signal arising from simple geometrical microstructure is known analytically, it remains unclear what diffusion signal arises from complex microstructural configurations. Such knowledge is important to design optimal acquisition sequences, to understand the limitations of diffusion-weighted imaging and to validate novel models of the brain microstructure. We present a novel framework for the efficient simulation of high-quality DW-MRI signals based on the hybrid combination of exact analytic expressions in simple geometric compartments such as cylinders and spheres and Monte Carlo simulations in more complex geometries. We validate our approach on synthetic arrangements of parallel cylinders representing the geometry of white matter fascicles, by comparing it to complete, all-out Monte Carlo simulations commonly used in the literature. For typical configurations, equal levels of accuracy are obtained with our hybrid method in less than one fifth of the computational time required for Monte Carlo simulations.

  20. Method for accurately positioning a device at a desired area of interest

    DOEpatents

    Jones, Gary D.; Houston, Jack E.; Gillen, Kenneth T.

    2000-01-01

    A method for positioning a first device utilizing a surface having a viewing translation stage, the surface being movable between a first position where the viewing stage is in operational alignment with a first device and a second position where the viewing stage is in operational alignment with a second device. The movable surface is placed in the first position and an image is produced with the first device of an identifiable characteristic of a calibration object on the viewing stage. The moveable surface is then placed in the second position and only the second device is moved until an image of the identifiable characteristic in the second device matches the image from the first device. The calibration object is then replaced on the stage of the surface with a test object, and the viewing translation stage is adjusted until the second device images the area of interest. The surface is then moved to the first position where the test object is scanned with the first device to image the area of interest. An alternative embodiment where the devices move is also disclosed.

  1. A Novel method of ensuring safe and accurate dilatation during percutaneous nephrolithotomy

    PubMed Central

    Javali, Tarun; Pathade, Amey; Nagaraj, H. K.

    2015-01-01

    ABSTRACT Objective: To report our technique that helps locate the guidewire into the ureter enabling safe dilatation during PCNL. Materials and Methods: Cases in which the guidewire failed to pass into the ureter following successful puncture of the desired calyx were subjected to this technique. A second guidewire was passed through the outer sheath of a 9 Fr. metallic dilator cannula, passed over the first guidewire. The cannula and outer sheath were removed, followed by percutaneous passage of a 6/7.5 Fr ureteroscope between the two guidewires, monitoring its progress through both the endoscopic and fluoroscopic monitors. Once the stone was visualized in the calyx a guidewire was passed through the working channel and maneuvered past the stone into the pelvis and ureter under direct endoscopic vision. This was followed by routine tract dilatation. Results: This technique was employed in 85 out of 675 cases of PCNL carried out at our institute between Jan 2010 and June 2014. The mean time required for our technique, calculated from the point of introduction of the ureteroscope until the successful passage of the guidewire down into the ureter, was 95 seconds. There were no intraoperative or postoperative complications as a result of this technique. The guidewire could be successfully passed into the ureter in 82 out of 85 cases. Conclusions: Use of a ureteroscope introduced percutaneously through the puncture site in PCNL is a safe and effective technique that helps in maneuvering the guidewire down into the ureter, which subsequently enables safe dilatation. PMID:26689529

  2. Algorithms for the Fractional Calculus: A Selection of Numerical Methods

    NASA Technical Reports Server (NTRS)

    Diethelm, K.; Ford, N. J.; Freed, A. D.; Luchko, Yu.

    2003-01-01

    Many recently developed models in areas like viscoelasticity, electrochemistry, diffusion processes, etc. are formulated in terms of derivatives (and integrals) of fractional (non-integer) order. In this paper we present a collection of numerical algorithms for the solution of the various problems arising in this context. We believe that this will give the engineer the necessary tools required to work with fractional models in an efficient way.
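
    To give a concrete flavor of such algorithms, the sketch below evaluates a fractional derivative using the Grünwald-Letnikov approximation with the standard recursive binomial weights. It is a generic first-order scheme, not necessarily one of the specific algorithms collected in the paper.

```python
import numpy as np

def gl_fractional_derivative(f_vals, alpha, h):
    """Grünwald-Letnikov approximation of the order-alpha derivative on a uniform grid.

    f_vals holds f(t_0), ..., f(t_n) with spacing h; returns the approximate
    derivative at every grid point (generic sketch, first-order accurate).
    """
    n = len(f_vals)
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):                       # recursive weights w_j = (-1)^j C(alpha, j)
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    d = np.empty(n)
    for k in range(n):
        d[k] = np.dot(w[:k + 1], f_vals[k::-1]) / h**alpha
    return d

# Check against a known result: the half-derivative of f(t) = t is 2*sqrt(t/pi).
h = 1e-3
t = np.arange(0.0, 1.0 + h, h)
approx = gl_fractional_derivative(t, 0.5, h)
print(approx[-1], 2 * np.sqrt(1.0 / np.pi))     # both close to 1.128 at t = 1
```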

  3. On numerical methods in non-Newtonian flows

    NASA Astrophysics Data System (ADS)

    Fileas, G.

    1982-12-01

    The constitutive equations for non-Newtonian flows are presented and the various flow models derived from continuum mechanics and molecular theories are considered and evaluated. A detailed account is given of numerical simulations employing differential and integral models of different kinds of non-Newtonian flows using finite difference and finite element techniques. Procedures for computer setups are described and references are given for finite difference, finite element and molecular theory based programs for several kinds of flow. Achievements and unreached goals in the field of numerical simulation of non-Newtonian flows are discussed and the lack of numerical work in the fields of suspension flows and heat transfer is pointed out. Finally, FFOCUS is presented as a newly built computer program which can simulate freezing flows of Newtonian fluids through various geometries and is intended to be further developed to handle non-Newtonian freezing flows and certain types of suspension phenomena involved in corium flow after a hypothetical core meltdown accident in a pressurized water reactor.

  4. A modified method for accurate correlation between the craze density and the optomechanical properties of fibers using pluta microscope.

    PubMed

    Sokkar, T Z N; El-Farahaty, K A; El-Bakary, M A; Omar, E Z; Hamza, A A

    2016-05-01

    A modified method was suggested to improve the performance of the Pluta microscope in its nonduplicated mode in the calculation of the areal craze density, especially for relatively low draw ratios (low areal craze density). This method decreases the error that results from the similarity between the formed crazes and the dark fringes of the interference pattern. Furthermore, an accurate method to calculate the birefringence and the orientation function of the drawn fibers via the nonduplicated Pluta polarizing interference microscope for high areal craze density (high draw ratios) was suggested. The advantage of the suggested method is that it relates the optomechanical properties of the tested fiber to the areal craze density for the same region of the fiber material. Microsc. Res. Tech. 79:422-430, 2016. © 2016 Wiley Periodicals, Inc. PMID:26920339

  5. A three-dimensional, compressible, laminar boundary-layer method for general fuselages. Volume 1: Numerical method

    NASA Technical Reports Server (NTRS)

    Wie, Yong-Sun

    1990-01-01

    A procedure for calculating 3-D, compressible laminar boundary layer flow on general fuselage shapes is described. The boundary layer solutions can be obtained in either nonorthogonal 'body oriented' coordinates or orthogonal streamline coordinates. The numerical procedure is 'second order' accurate, efficient and independent of the cross flow velocity direction. Numerical results are presented for several test cases, including a sharp cone, an ellipsoid of revolution, and a general aircraft fuselage at angle of attack. Comparisons are made between numerical results obtained using nonorthogonal curvilinear 'body oriented' coordinates and streamline coordinates.

  6. Achieving better cooling of turbine blades using numerical simulation methods

    NASA Astrophysics Data System (ADS)

    Inozemtsev, A. A.; Tikhonov, A. S.; Sendyurev, C. I.; Samokhvalov, N. Yu.

    2013-02-01

    A new design of the first-stage nozzle vane for the turbine of a prospective gas-turbine engine is considered. The blade's thermal state is numerically simulated in a conjugate heat transfer formulation using the ANSYS CFX 13.0 software package. Critical locations in the blade design are determined from the distribution of heat fluxes, and measures aimed at achieving more efficient cooling are analyzed. A substantially lower (by 50-100°C) maximum metal temperature was achieved as a result of this work.

  7. Magnetohydrodynamic (MHD) modelling of solar active phenomena via numerical methods

    NASA Technical Reports Server (NTRS)

    Wu, S. T.

    1988-01-01

    Numerical ideal MHD models for the study of solar active phenomena are summarized. Particular attention is given to the following physical phenomena: (1) local heating of a coronal loop in an isothermal and stratified atmosphere, and (2) the coronal dynamic responses due to magnetic field movement. The results suggest that local heating of a magnetic loop will lead to the enhancement of the density of the neighboring loops through MHD wave compression. It is noted that field lines can be pinched off and may form a self-contained magnetized plasma blob that may move outward into interplanetary space.

  8. Path Integrals and Exotic Options:. Methods and Numerical Results

    NASA Astrophysics Data System (ADS)

    Bormetti, G.; Montagna, G.; Moreni, N.; Nicrosini, O.

    2005-09-01

    In the framework of the Black-Scholes-Merton model of financial derivatives, a path integral approach to option pricing is presented. A general formula to price path-dependent options on multidimensional and correlated underlying assets is obtained and implemented by means of various flexible and efficient algorithms. As an example, we detail the case of Asian call options. The numerical results are compared with those obtained with other procedures used in quantitative finance and found to be in good agreement. In particular, when pricing at-the-money (ATM) and out-of-the-money (OTM) options, the path integral approach exhibits competitive performance.
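
    For orientation, the quantity being computed for an Asian call can also be estimated with a plain Monte Carlo sketch under Black-Scholes dynamics, as below. The path-integral algorithms of the paper are more sophisticated; the parameters here are arbitrary and the code is only a baseline illustration.

```python
import numpy as np

# Plain Monte Carlo pricing of an arithmetic-average Asian call under geometric
# Brownian motion (baseline sketch with arbitrary parameters; not the paper's
# path-integral algorithm).
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 252, 100_000
dt = T / n_steps

rng = np.random.default_rng(2)
z = rng.standard_normal((n_paths, n_steps))
log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
S = S0 * np.exp(log_paths)                      # simulated price paths
payoff = np.maximum(S.mean(axis=1) - K, 0.0)    # arithmetic-average call payoff
price = np.exp(-r * T) * payoff.mean()
stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"Asian call price ~ {price:.3f} +/- {stderr:.3f}")
```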

  9. Numerical methods for a general class of porous medium equations

    SciTech Connect

    Rose, M. E.

    1980-03-01

    The partial differential equation ∂u/∂t + ∂f(u)/∂x = ∂(g(u) ∂u/∂x)/∂x, where g(u) is a non-negative diffusion coefficient that may vanish for one or more values of u, was used to model fluid flow through a porous medium. Error estimates for a numerical procedure to approximate the solution are derived. A revised version of this report will appear in Computers and Mathematics with Applications.

  10. Statistical and numerical methods to improve the transient divided bar method

    NASA Astrophysics Data System (ADS)

    Bording, Thue; Bom Nielsen, Søren; Balling, Niels

    2014-05-01

    A key element in studying subsurface heat transfer processes is accurate knowledge of the thermal properties. These properties include thermal conductivity, thermal diffusivity and heat capacity. The divided bar method is a commonly used method to estimate thermal conductivity of rock samples. In the method's simplest form, a fixed temperature difference is imposed on a stack consisting of the rock sample and a standard material with known thermal conductivity. Temperature measurements along the stack are used to estimate the temperature gradients and the thermal conductivity of the sample can then be found by Fourier's law. We present several improvements to this method that allows for simultaneous measurements of both thermal conductivity and thermal diffusivity. The divided bar setup is run in a transient mode, and a time-dependent temperature profile is measured at four points along the stack: on either side of the sample and at the top and bottom of the stack. To induce a thermal signal, a time-varying temperature is imposed at one end of the stack during measurements. Using the measured temperatures at both ends as Dirichlet boundary conditions, a finite element procedure is used to model the temperature profile. This procedure is used as the forward model. A Markov Chain Monte Carlo Metropolis Hastings algorithm is used for the inversion modelling. The unknown parameters are thermal conductivity and volumetric heat capacity of the sample and the contact resistances between the elements in the stack. The contact resistances are not resolved and must be made as small as possible by careful sample preparation and stack assembly. Histograms of the unknown parameters are produced. The ratio of thermal conductivity and volumetric heat capacity yields a histogram of thermal diffusivity. Since density can be measured independently, the specific heat capacity is also obtained. The main improvement with this method is that not only are we able to measure thermal
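
    In the steady-state "simplest form" mentioned above, the sample conductivity follows directly from Fourier's law applied to the stack: with the same heat flux passing through standard and sample, k_sample = k_std (ΔT_std/L_std)/(ΔT_sample/L_sample). The numbers below are assumed for illustration and ignore contact resistances, which the transient method of this record is designed to handle.

```python
# Steady-state divided-bar estimate from Fourier's law (illustrative numbers):
# equal heat flux q through the standard and the sample implies
#   k_sample = k_std * (dT_std / L_std) / (dT_sample / L_sample).
k_std = 1.09                     # W m^-1 K^-1, conductivity of the standard (assumed)
L_std, L_sample = 0.010, 0.012   # m, element thicknesses
dT_std, dT_sample = 2.0, 1.1     # K, measured temperature drops across each element

q = k_std * dT_std / L_std                   # heat flux through the stack, W m^-2
k_sample = q * L_sample / dT_sample
print(f"sample conductivity ~ {k_sample:.2f} W/(m K)")
```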

  11. Accurate quantification of tio2 nanoparticles collected on air filters using a microwave-assisted acid digestion method.

    PubMed

    Mudunkotuwa, Imali A; Anthony, T Renée; Grassian, Vicki H; Peters, Thomas M

    2016-01-01

    Titanium dioxide (TiO(2)) particles, including nanoparticles with diameters smaller than 100 nm, are used extensively in consumer products. In a 2011 current intelligence bulletin, the National Institute of Occupational Safety and Health (NIOSH) recommended methods to assess worker exposures to fine and ultrafine TiO(2) particles and associated occupational exposure limits for these particles. However, several challenges and problems are encountered with these recommended exposure assessment methods, which involve the accurate quantitation of titanium dioxide collected on air filters using acid digestion followed by inductively coupled plasma optical emission spectroscopy (ICP-OES). Specifically, the recommended digestion methods include the use of chemicals, such as perchloric acid, which are typically unavailable in most accredited industrial hygiene laboratories due to their highly corrosive and oxidizing properties. Other commonly used alternatives involve nitric acid or a combination of nitric and sulfuric acids, which yield very poor recoveries for titanium dioxide. Therefore, given the current state of the science, it is clear that a new method is needed for exposure assessment. In this current study, a microwave-assisted acid digestion method has been specifically designed to improve the recovery of titanium in TiO(2) nanoparticles for quantitative analysis using ICP-OES. The optimum digestion conditions were determined by changing several variables including the acids used, digestion time, and temperature. Consequently, the optimized digestion temperature of 210°C with concentrated sulfuric and nitric acid (2:1 v/v) resulted in a recovery of >90% for TiO(2). The method is expected to provide for a more accurate quantification of airborne TiO(2) particles in the workplace environment. PMID:26181824

  12. A Numerical Method for Simulating the Microscopic Damage Evolution in Composites Under Uniaxial Transverse Tension

    NASA Astrophysics Data System (ADS)

    Zhi, Jie; Zhao, Libin; Zhang, Jianyu; Liu, Zhanli

    2016-06-01

    In this paper, a new numerical method that combines a surface-based cohesive model and extended finite element method (XFEM) without predefining the crack paths is presented to simulate the microscopic damage evolution in composites under uniaxial transverse tension. The proposed method is verified to accurately capture the crack kinking into the matrix after fiber/matrix debonding. A statistical representative volume element (SRVE) under periodic boundary conditions is used to approximate the microstructure of the composites. The interface parameters of the cohesive models are investigated, in which the initial interface stiffness has a great effect on the predictions of the fiber/matrix debonding. The detailed debonding states of SRVE with strong and weak interfaces are compared based on the surface-based and element-based cohesive models. The mechanism of damage in composites under transverse tension is described as the appearance of the interface cracks and their induced matrix micro-cracking, both of which coalesce into transversal macro-cracks. Good agreement is found between the predictions of the model and the in situ experimental observations, demonstrating the efficiency of the presented model for simulating the microscopic damage evolution in composites.

  13. Accurate computation of the radiation from simple antennas using the finite-difference time-domain method

    NASA Astrophysics Data System (ADS)

    Maloney, James G.; Smith, Glenn S.; Scott, Waymond R., Jr.

    1990-07-01

    Two antennas are considered, a cylindrical monopole and a conical monopole. Both are driven through an image plane from a coaxial transmission line. Each of these antennas corresponds to a well-posed theoretical electromagnetic boundary value problem and a realizable experimental model. These antennas are analyzed by a straightforward application of the time-domain finite-difference method. The computed results for these antennas are shown to be in excellent agreement with accurate experimental measurements for both the time domain and the frequency domain. The graphical displays presented for the transient near-zone and far-zone radiation from these antennas provide physical insight into the radiation process.
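
    As a reminder of the method's core update, a one-dimensional free-space FDTD (Yee) leapfrog scheme looks like the sketch below, written in normalized field units. The antenna problems above are of course three-dimensional with a coaxial feed and an image plane, so this is only a minimal illustration of the update structure.

```python
import numpy as np

# Minimal 1D free-space FDTD (Yee) leapfrog update with a Gaussian hard source.
# Fields are in normalized units; boundaries are simply left untouched (PEC-like).
c0 = 3.0e8
nz, n_steps = 400, 800
dz = 1.0e-3
dt = dz / (2.0 * c0)                 # satisfies the 1D Courant condition dt <= dz / c0
S = c0 * dt / dz                     # Courant number (0.5 here)

Ex = np.zeros(nz)
Hy = np.zeros(nz - 1)

for n in range(n_steps):
    Hy += S * (Ex[1:] - Ex[:-1])                 # H update (half time step)
    Ex[1:-1] += S * (Hy[1:] - Hy[:-1])           # E update (next half time step)
    Ex[50] = np.exp(-((n - 60) / 20.0) ** 2)     # Gaussian hard source at cell 50

print("peak |Ex| on the grid:", np.abs(Ex).max())
```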

  14. An ONIOM study of the Bergman reaction: a computationally efficient and accurate method for modeling the enediyne anticancer antibiotics

    NASA Astrophysics Data System (ADS)

    Feldgus, Steven; Shields, George C.

    2001-10-01

    The Bergman cyclization of large polycyclic enediyne systems that mimic the cores of the enediyne anticancer antibiotics was studied using the ONIOM hybrid method. Tests on small enediynes show that ONIOM can accurately match experimental data. The effect of the triggering reaction in the natural products is investigated, and we support the argument that it is strain effects that lower the cyclization barrier. The barrier for the triggered molecule is very low, leading to a reasonable half-life at biological temperatures. No evidence is found that would suggest a concerted cyclization/H-atom abstraction mechanism is necessary for DNA cleavage.
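
    For reference, the standard two-layer ONIOM extrapolation used in this kind of study combines a high-level calculation on the reactive core (the "model" system) with a low-level calculation on the full molecule (the "real" system):

```latex
% Standard two-layer ONIOM energy extrapolation:
\begin{equation}
  E_{\mathrm{ONIOM}} \;=\; E_{\mathrm{low}}(\mathrm{real})
    \;+\; E_{\mathrm{high}}(\mathrm{model})
    \;-\; E_{\mathrm{low}}(\mathrm{model})
\end{equation}
```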

  15. Projection methods for the numerical solution of Markov chain models

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1989-01-01

    Projection methods for computing stationary probability distributions for Markov chain models are presented. A general projection method is a method which seeks an approximation from a subspace of small dimension to the original problem. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. A particularly successful class of methods based on this principle is that of Krylov subspace methods which utilize subspaces of the form span{v, Av, ..., A^(m-1)v}. These methods are effective in solving linear systems and eigenvalue problems (Lanczos, Arnoldi,...) as well as nonlinear equations. They can be combined with more traditional iterative methods such as successive overrelaxation, symmetric successive overrelaxation, or with incomplete factorization methods to enhance convergence.
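
    A minimal Arnoldi sketch that builds such a Krylov basis and extracts an approximate stationary vector from the small projected problem is shown below. It is a generic illustration on a random toy chain, not the specific algorithms analyzed in the report, and it assumes no breakdown occurs.

```python
import numpy as np

def arnoldi(A, v0, m):
    """Orthonormal basis V of span{v, Av, ..., A^(m-1) v} and the (m+1) x m
    Hessenberg matrix H with A V_m = V_{m+1} H (generic sketch, no breakdown handling)."""
    n = len(v0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                  # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

# Toy Markov chain: random 20-state column-stochastic transition matrix P.
rng = np.random.default_rng(3)
P = rng.random((20, 20))
P /= P.sum(axis=0, keepdims=True)

m = 8
V, H = arnoldi(P, np.ones(20), m)
vals, vecs = np.linalg.eig(H[:m, :m])           # Ritz pairs of the small projected problem
y = vecs[:, np.argmax(vals.real)].real
pi = V[:, :m] @ y                               # approximate stationary vector
pi /= pi.sum()
print("residual ||P pi - pi|| =", np.linalg.norm(P @ pi - pi))
```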

  16. Rapid, Precise, and Accurate Counts of Symbiodinium Cells Using the Guava Flow Cytometer, and a Comparison to Other Methods

    PubMed Central

    Caruso, Carlo; Burriesci, Matthew S.; Cella, Kristen; Pringle, John R.

    2015-01-01

    In studies of both the establishment and breakdown of cnidarian-dinoflagellate symbiosis, it is often necessary to determine the number of Symbiodinium cells relative to the quantity of host tissue. Ideally, the methods used should be rapid, precise, and accurate. In this study, we systematically evaluated methods for sample preparation and storage and the counting of algal cells using the hemocytometer, a custom image-analysis program for automated counting of the fluorescent algal cells, the Coulter Counter, or the Millipore Guava flow-cytometer. We found that although other methods may have value in particular applications, for most purposes, the Guava flow cytometer provided by far the best combination of precision, accuracy, and efficient use of investigator time (due to the instrument's automated sample handling), while also allowing counts of algal numbers over a wide range and in small volumes of tissue homogenate. We also found that either of two assays of total homogenate protein provided a precise and seemingly accurate basis for normalization of algal counts to the total amount of holobiont tissue. PMID:26291447

  17. A rapid, economical, and accurate method to determining the physical risk of storm marine inundations using sedimentary evidence

    NASA Astrophysics Data System (ADS)

    Nott, Jonathan F.

    2015-04-01

    The majority of physical risk assessments from storm surge inundations are derived from synthetic time series generated from short climate records, which can often result in inaccuracies and are time-consuming and expensive to develop. A new method is presented here for the wet tropics region of northeast Australia. It uses lidar-generated topographic cross sections of beach ridge plains, which have been demonstrated to be deposited by marine inundations generated by tropical cyclones. Extreme value theory statistics are applied to data derived from the cross sections to generate return period plots for a given location. The results suggest that previous methods to estimate return periods using synthetic data sets have underestimated the magnitude/frequency relationship by at least an order of magnitude. The new method promises to be a more rapid, economical, and accurate assessment of the physical risk of these events.
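
    The statistical step of such an assessment, namely fitting an extreme value distribution to the inundation heights inferred from the ridge record and reading off return levels, can be sketched as follows. The data below are synthetic, the fit uses SciPy's generalized extreme value distribution, and the sketch glosses over how event rates enter the return-period calculation, so it is an illustration only.

```python
import numpy as np
from scipy.stats import genextreme

# Sketch: fit a GEV distribution to inferred inundation heights and report
# return levels (synthetic data, not the ridge-derived record from the paper).
rng = np.random.default_rng(4)
heights = genextreme.rvs(c=-0.1, loc=3.0, scale=0.8, size=60, random_state=rng)

params = genextreme.fit(heights)                 # (shape c, location, scale)
for T in (10, 50, 100, 500):
    level = genextreme.ppf(1.0 - 1.0 / T, *params)
    print(f"{T:>4}-yr return level ~ {level:.2f} m")
```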

  18. Numerical conformal mapping methods for exterior and doubly connected regions

    SciTech Connect

    DeLillo, T.K.; Pfaltzgraff, J.A.

    1996-12-31

    Methods are presented and analyzed for approximating the conformal map from the exterior of the disk to the exterior of a smooth, simple closed curve and from an annulus to a bounded, doubly connected region with smooth boundaries. The methods are Newton-like methods for computing the boundary correspondences and conformal moduli similar to Fornberg's method for the interior of the disk. We show that the linear systems are discretizations of the identity plus a compact operator and, hence, that the conjugate gradient method converges superlinearly.

  19. Method for numerical simulation of two-term exponentially correlated colored noise

    SciTech Connect

    Yilmaz, B.; Ayik, S.; Abe, Y.; Gokalp, A.; Yilmaz, O.

    2006-04-15

    A method for numerical simulation of two-term exponentially correlated colored noise is proposed. The method is an extension of traditional method for one-term exponentially correlated colored noise. The validity of the algorithm is tested by comparing numerical simulations with analytical results in two physical applications.
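
    The traditional one-term scheme referred to here is typically an exact Ornstein-Uhlenbeck update. One plausible way to sketch a two-term noise, whose correlation function is a sum of two exponentials, is to sum two independent one-term processes; this construction is our assumption for illustration and is not necessarily the algorithm proposed in the paper.

```python
import numpy as np

def exp_correlated_noise(n_steps, dt, tau, sigma, rng):
    """Exact update for one-term exponentially correlated (Ornstein-Uhlenbeck) noise
    with <eta(t) eta(t')> = sigma^2 * exp(-|t - t'| / tau)."""
    rho = np.exp(-dt / tau)
    eta = np.empty(n_steps)
    eta[0] = sigma * rng.standard_normal()
    for k in range(1, n_steps):
        eta[k] = rho * eta[k - 1] + sigma * np.sqrt(1.0 - rho**2) * rng.standard_normal()
    return eta

# Two-term noise sketched as the sum of two independent one-term processes
# (assumed construction; gives a two-exponential correlation function).
rng = np.random.default_rng(5)
dt, n = 0.01, 200_000
noise = (exp_correlated_noise(n, dt, tau=0.1, sigma=1.0, rng=rng)
         + exp_correlated_noise(n, dt, tau=1.0, sigma=0.5, rng=rng))
print("sample variance:", noise.var(), "expected:", 1.0**2 + 0.5**2)
```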

  20. Numerical Modeling of Deep Mantle Convection: Advection and Diffusion Schemes for Marker Methods

    NASA Astrophysics Data System (ADS)

    Mulyukova, Elvira; Dabrowski, Marcin; Steinberger, Bernhard

    2013-04-01

    Thermal and chemical evolution of Earth's deep mantle can be studied by modeling vigorous convection in a chemically heterogeneous fluid. Numerical modeling of such a system poses several computational challenges. The dominance of heat advection over diffusive heat transport, together with a negligible amount of chemical diffusion, results in sharp gradients of the thermal and chemical fields. The exponential dependence of the viscosity of mantle materials on temperature also leads to high gradients of the velocity field. The accuracy of many numerical advection schemes degrades quickly with increasing gradient of the solution, while the computational effort, in terms of the scheme complexity and required resolution, grows. Additional numerical challenges arise due to a large range of length scales characteristic of a thermochemical convection system with highly variable viscosity. For example, the thickness of the stem of a rising thermal plume may be a few percent of the mantle thickness. An even thinner filament of an anomalous material that is entrained by that plume may constitute less than a tenth of a percent of the mantle thickness. We have developed a two-dimensional FEM code to model thermochemical convection in a hollow cylinder domain, with a depth- and temperature-dependent viscosity representative of the mantle (Steinberger and Calderwood, 2006). We use the marker-in-cell method for advection of the chemical and thermal fields. The main advantage of performing advection using markers is the absence of numerical diffusion during the advection step, as opposed to the more diffusive field methods. However, in the common implementation of marker methods, the solution of the momentum and energy equations takes place on a computational grid, and nodes do not generally coincide with the positions of the markers. Transferring velocity, temperature, and chemistry information between nodes and markers introduces errors inherent to interpolation and extrapolation. In the numerical scheme

  1. A scalable and accurate method for classifying protein-ligand binding geometries using a MapReduce approach.

    PubMed

    Estrada, T; Zhang, B; Cicotti, P; Armen, R S; Taufer, M

    2012-07-01

    We present a scalable and accurate method for classifying protein-ligand binding geometries in molecular docking. Our method is a three-step process: the first step encodes the geometry of a three-dimensional (3D) ligand conformation into a single 3D point in the space; the second step builds an octree by assigning an octant identifier to every single point in the space under consideration; and the third step performs an octree-based clustering on the reduced conformation space and identifies the most dense octant. We adapt our method for MapReduce and implement it in Hadoop. The load-balancing, fault-tolerance, and scalability in MapReduce allow screening of very large conformation spaces not approachable with traditional clustering methods. We analyze results for docking trials for 23 protein-ligand complexes for HIV protease, 21 protein-ligand complexes for Trypsin, and 12 protein-ligand complexes for P38alpha kinase. We also analyze cross docking trials for 24 ligands, each docking into 24 protein conformations of the HIV protease, and receptor ensemble docking trials for 24 ligands, each docking in a pool of HIV protease receptors. Our method demonstrates significant improvement over energy-only scoring for the accurate identification of native ligand geometries in all these docking assessments. The advantages of our clustering approach make it attractive for complex applications in real-world drug design efforts. We demonstrate that our method is particularly useful for clustering docking results using a minimal ensemble of representative protein conformational states (receptor ensemble docking), which is now a common strategy to address protein flexibility in molecular docking. PMID:22658682
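
    The octant-identifier step is essentially a fixed-depth octree (Morton-style) encoding of each 3D point; a minimal sketch of such an encoding, with an assumed unit bounding box and depth, is given below. Grouping points by identifier prefix then gives the key used for density-based clustering.

```python
def octant_id(point, lo, hi, depth):
    """Encode a 3D point as a sequence of octant digits (0-7) down to `depth`
    levels of an octree over the box [lo, hi] (generic sketch)."""
    x, y, z = point
    (x0, y0, z0), (x1, y1, z1) = lo, hi
    digits = []
    for _ in range(depth):
        mx, my, mz = (x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2
        d = int(x >= mx) | (int(y >= my) << 1) | (int(z >= mz) << 2)  # 3 bits per level
        digits.append(d)
        x0, x1 = (mx, x1) if x >= mx else (x0, mx)
        y0, y1 = (my, y1) if y >= my else (y0, my)
        z0, z1 = (mz, z1) if z >= mz else (z0, mz)
    return tuple(digits)

# Points whose identifiers share a long common prefix lie in the same octant at
# that depth, so identifiers can serve directly as MapReduce grouping keys.
print(octant_id((0.1, 0.7, 0.3), lo=(0.0, 0.0, 0.0), hi=(1.0, 1.0, 1.0), depth=4))
```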

  2. Accurate semi analytical model of an optical fiber having Kerr nonlinearity using a robust nonlinear unconstrained optimization method

    NASA Astrophysics Data System (ADS)

    RoyChoudhury, Raja; RoyChoudhury, Arundhati

    2011-02-01

    This paper presents a semi-analytical formulation of the modal properties of a nonlinear optical fiber having Kerr nonlinearity, using a three-parameter approximation of the fundamental modal field. The minimization of the core parameter (U), which involves the Kerr nonlinearity through the non-stationary expression of the propagation constant, is carried out by the Nelder-Mead simplex method of nonlinear unconstrained minimization, suitable for problems with non-smooth functions since the method does not require any derivative information. The use of three parameters in the modal approximation and the implementation of the simplex method make our semi-analytical description an alternative with a smaller computational burden than full numerical methods for calculating the modal parameters.
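
    Since the minimization itself is a standard derivative-free search, a generic Nelder-Mead call (here via SciPy, on a placeholder objective standing in for the non-stationary expression of the core parameter U) illustrates the computational pattern. The objective below is purely illustrative and has nothing to do with the actual fiber model; the non-smooth term simply shows why a derivative-free method is convenient.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder objective standing in for U as a function of the three
# field-approximation parameters (purely illustrative, not the fiber model).
def core_parameter_U(params):
    a, b, c = params
    return (a - 1.2) ** 2 + 0.5 * (b - 0.8) ** 2 + 0.1 * np.abs(c - 2.0)

result = minimize(core_parameter_U, x0=[1.0, 1.0, 1.0], method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-8})
print(result.x, result.fun)
```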

  3. Numerical method to digital photoelasticity using plane polariscope.

    PubMed

    Júnior, P A A Magalhães; Vieira, F G; Magalhães, C A; Ribeiro, J S; Rios, I G

    2016-06-13

    This research aims to find a new way to obtain the intensity equations for the phase-shifting model in digital photoelasticity. The procedure is based on the rotation of the analyzer itself. From the intensity equations, the isoclinic and isochromatic parameters are deduced by applying a new numerical technique. This approach can be used to determine how many images are needed for the resolution of the polariscope. Each image indicates the stress forces in the object. In this study the plane polariscope was used. The number of images determines the errors and uncertainties of the study, since the accuracy of the equations increases considerably with a large number of images. Several analyses are performed with different numbers of photographic images. The results showed the possibility of measuring stress forces with high precision using plane polariscopes. PMID:27410283

  4. Analysis of free turbulent shear flows by numerical methods

    NASA Technical Reports Server (NTRS)

    Korst, H. H.; Chow, W. L.; Hurt, R. F.; White, R. A.; Addy, A. L.

    1973-01-01

    Studies are described in which the effort was essentially directed to classes of problems where the phenomenologically interpreted effective transport coefficients could be absorbed by, and subsequently extracted from (by comparison with experimental data), appropriate coordinate transformations. The transformed system of differential equations could then be solved without further specifications or assumptions by numerical integration procedures. An attempt was made to delineate different regimes for which specific eddy viscosity models could be formulated. In particular, this would account for the carryover of turbulence from attached boundary layers, the transitory adjustment, and the asymptotic behavior of initially disturbed mixing regions. Such models were subsequently used in seeking solutions for the prescribed two-dimensional test cases, yielding a better insight into overall aspects of the exchange mechanisms.

  5. MODELING COLLISIONAL CASCADES IN DEBRIS DISKS: THE NUMERICAL METHOD

    SciTech Connect

    Gaspar, Andras; Psaltis, Dimitrios; Oezel, Feryal; Rieke, George H.; Cooney, Alan E-mail: dpsaltis@as.arizona.edu E-mail: grieke@as.arizona.edu

    2012-04-10

    We develop a new numerical algorithm to model collisional cascades in debris disks. Because of the large dynamical range in particle masses, we solve the integro-differential equations describing erosive and catastrophic collisions in a particle-in-a-box approach, while treating the orbital dynamics of the particles in an approximate fashion. We employ a new scheme for describing erosive (cratering) collisions that yields a continuous set of outcomes as a function of colliding masses. We demonstrate the stability and convergence characteristics of our algorithm and compare it with other treatments. We show that incorporating the effects of erosive collisions results in a decay of the particle distribution that is significantly faster than with purely catastrophic collisions.

  6. A method for accurate zero calibration of asymmetric jaws in single-isocenter half-beam techniques

    SciTech Connect

    Hernandez, V.; Abella, R.; Lopez, M.; Perez, M.; Artigues, M.; Sempau, J.; Arenas, M.

    2013-02-15

    Purpose: To present a practical method for calibrating the zero position of asymmetric jaws that provides higher accuracy at the central axis and improves dose homogeneity in the abutting region of half-beams. Methods: Junction doses were measured for each asymmetric jaw using the double-exposure technique and electronic portal imaging devices. The junction dose was determined as a function of jaw position. The shift in the zero jaw position (or in its corresponding potentiometer readout) required to correct for the measured junction dose could thus be obtained. The jaw calibration was then modified to introduce the calculated shift and therefore achieve an accurate zero position in order to provide a relative junction dose that was as close to zero as possible. Results: All the asymmetric jaws from four medical linear accelerators were calibrated with the new calibration procedure. Measured relative junction doses at gantry 0° were reduced from a maximum of ±40% to a maximum of ±8% for all the jaws in the four considered accelerators. These results were valid for 6 MV and 18 MV photon beams and for any combination of asymmetric jaws set to zero. The calibration was stable over a long period of time; therefore, recalibration is seldom necessary. Conclusions: Accurate calibration of the zero position of the jaws is feasible in current medical linear accelerators. The proposed procedure is fast and it improves dose homogeneity at the junction of half-beams, thus allowing a more accurate and safer use of these techniques.

  7. On a New Numerical Method for Solving General Variational Inequalities

    NASA Astrophysics Data System (ADS)

    Bnouhachem, Abdellah; Noor, Muhammad Aslam; Khalfaoui, Mohamed; Sheng, Zhaohan

    In this paper, we suggest and analyze a new extragradient method for solving general variational inequalities involving two operators. We also prove the global convergence of the proposed modified method under certain mild conditions. We use a self-adaptive technique to adjust the parameter ρ at each iteration. It is proved theoretically that the lower bound of the progress obtained by the proposed method is greater than that of the extragradient method. An example is given to illustrate the efficiency of the proposed method and its comparison with the extragradient method. Since general variational inequalities include the classical variational inequalities and complementarity problems as special cases, our results continue to hold for these problems, and they may be viewed as an improvement and refinement of previously known results in this field.
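
    For orientation, a minimal sketch of the classical extragradient iteration for a variational inequality VI(F, C) is given below; the paper's self-adaptive update of ρ and its two-operator generalization are not reproduced, and the affine operator and feasible set in the usage example are invented for illustration.

```python
import numpy as np

def extragradient(F, project, x0, rho=0.1, tol=1e-10, max_iter=10000):
    """Classical extragradient method for VI(F, C):
    find x in C such that <F(x), y - x> >= 0 for all y in C."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = project(x - rho * F(x))       # predictor step (projection onto C)
        x_new = project(x - rho * F(y))   # corrector step, using F at the predictor
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# usage: monotone affine operator F(x) = M x + q over the nonnegative orthant
M = np.array([[2.0, 1.0], [0.0, 2.0]])
q = np.array([-1.0, -1.0])
sol = extragradient(lambda x: M @ x + q, project=lambda z: np.maximum(z, 0.0),
                    x0=np.zeros(2))
print(sol)   # approaches the solution of the corresponding complementarity problem
```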

  8. Toward accurate molecular identification of species in complex environmental samples: testing the performance of sequence filtering and clustering methods

    PubMed Central

    Flynn, Jullien M; Brown, Emily A; Chain, Frédéric J J; MacIsaac, Hugh J; Cristescu, Melania E

    2015-01-01

    Metabarcoding has the potential to become a rapid, sensitive, and effective approach for identifying species in complex environmental samples. Accurate molecular identification of species depends on the ability to generate operational taxonomic units (OTUs) that correspond to biological species. Due to the sometimes enormous estimates of biodiversity using this method, there is a great need to test the efficacy of data analysis methods used to derive OTUs. Here, we evaluate the performance of various methods for clustering length variable 18S amplicons from complex samples into OTUs using a mock community and a natural community of zooplankton species. We compare analytic procedures consisting of a combination of (1) stringent and relaxed data filtering, (2) singleton sequences included and removed, (3) three commonly used clustering algorithms (mothur, UCLUST, and UPARSE), and (4) three methods of treating alignment gaps when calculating sequence divergence. Depending on the combination of methods used, the number of OTUs varied by nearly two orders of magnitude for the mock community (60–5068 OTUs) and three orders of magnitude for the natural community (22–22191 OTUs). The use of relaxed filtering and the inclusion of singletons greatly inflated OTU numbers without increasing the ability to recover species. Our results also suggest that the method used to treat gaps when calculating sequence divergence can have a great impact on the number of OTUs. Our findings are particularly relevant to studies that cover taxonomically diverse species and employ markers such as rRNA genes in which length variation is extensive. PMID:26078860

  9. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate

    PubMed Central

    Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul

    2015-01-01

    In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821

  10. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate.

    PubMed

    Minyoo, Abel B; Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul; Lankester, Felix

    2015-12-01

    In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821

  11. A numerical method for eigenvalue problems in modeling liquid crystals

    SciTech Connect

    Baglama, J.; Farrell, P.A.; Reichel, L.; Ruttan, A.; Calvetti, D.

    1996-12-31

    Equilibrium configurations of liquid crystals in finite containments are minimizers of the thermodynamic free energy of the system. It is important to be able to track the equilibrium configurations as the temperature of the liquid crystals decreases. The path of the minimal energy configuration at bifurcation points can be computed from the null space of a large sparse symmetric matrix. We describe a new variant of the implicitly restarted Lanczos method that is well suited for the computation of extreme eigenvalues of a large sparse symmetric matrix, and we use this method to determine the desired null space. Our implicitly restarted Lanczos method adaptively determines a polynomial filter by using Leja shifts, and does not require factorization of the matrix. The storage requirement of the method is small, and this makes it attractive to use for the present application.

  12. Numerical Stability and Convergence of Approximate Methods for Conservation Laws

    NASA Astrophysics Data System (ADS)

    Galkin, V. A.

    We present a new approach to establishing the convergence of approximate methods, based on the theory of functional solutions for conservation laws. Applications to physical kinetics and to gas and fluid dynamics are considered.

  13. A numerical method for solving the 3D unsteady incompressible Navier Stokes equations in curvilinear domains with complex immersed boundaries

    NASA Astrophysics Data System (ADS)

    Ge, Liang; Sotiropoulos, Fotis

    2007-08-01

    A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g. the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [A. Gilmanov, F. Sotiropoulos, A hybrid cartesian/immersed boundary method for simulating flows with 3d, geometrically complex, moving bodies, Journal of Computational Physics 207 (2005) 457-492.]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved, pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow

  14. Numerical Solutions of Electromagnetic Problems by Integral Equation Methods and the Finite-Difference Time-Domain Method.

    NASA Astrophysics Data System (ADS)

    Min, Xiaoyi

    This thesis first presents the study of the interaction of electromagnetic waves with three-dimensional heterogeneous, dielectric, magnetic, and lossy bodies by surface integral equation modeling. Based on the equivalence principle, a set of coupled surface integral equations is formulated and then solved numerically by the method of moments. Triangular elements are used to model the interfaces of the heterogeneous body, and vector basis functions are defined to expand the unknown current in the formulation. The validity of this formulation is verified by applying it to concentric spheres for which an exact solution exists. The potential applications of this formulation to a partially coated sphere and a homogeneous human body are discussed. Next, this thesis also introduces an efficient new set of integral equations for treating the scattering problem of a perfectly conducting body coated with a thin magnetically lossy layer. These electric field integral equations and magnetic field integral equations are numerically solved by the method of moments (MoM). To validate the derived integral equations, an alternative method to solve the scattering problem of an infinite circular cylinder coated with a thin magnetic lossy layer has also been developed, based on the eigenmode expansion. Results for the radar cross section and current densities via the MoM and the eigenmode expansion method are compared. The agreement is excellent. The finite difference time domain method is subsequently implemented to solve a metallic object coated with a magnetic thin layer and numerical results are compared with those from the MoM. Finally, this thesis presents an application of the finite-difference time-domain approach to the problem of electromagnetic receiving and scattering by a cavity-backed antenna situated on an infinite conducting plane. This application involves modifications of Yee's model, which applies the difference approximations of field derivatives to differential

  15. Use of an Accurate DNS Particulate Flow Method to Supply and Validate Boundary Conditions for the MFIX Code

    SciTech Connect

    Zhi-Gang Feng

    2012-05-31

    The simulation of particulate flows for industrial applications often requires the use of two-fluid models, where the solid particles are considered as a separate continuous phase. One of the underlying uncertainties in the use of the two-fluid models in multiphase computations comes from the boundary condition of the solid phase. Typically, the gas or liquid fluid boundary condition at a solid wall is the so-called no-slip condition, which has been widely accepted to be valid for single-phase fluid dynamics provided that the Knudsen number is low. However, the boundary condition for the solid phase is not well understood. The no-slip condition at a solid boundary is not a valid assumption for the solid phase. Instead, several researchers advocate a slip condition as a more appropriate boundary condition. However, the question of the selection of an exact slip length or a slip velocity coefficient is still unanswered. Experimental or numerical simulation data are needed in order to determine the slip boundary condition that is applicable to a two-fluid model. The goal of this project is to improve the performance and accuracy of the boundary conditions used in two-fluid models such as the MFIX code, which is frequently used in multiphase flow simulations. The specific objectives of the project are to use first principles embedded in a validated Direct Numerical Simulation particulate flow numerical program, which uses the Immersed Boundary method (DNS-IB) and the Direct Forcing scheme, in order to establish, modify and validate needed energy and momentum boundary conditions for the MFIX code. To achieve these objectives, we have developed a highly efficient DNS code and conducted numerical simulations to investigate the particle-wall and particle-particle interactions in particulate flows. Most of our research findings have been reported in major conferences and archived journals, which are listed in Section 7 of this report. In this report, we will present a

  16. Models and numerical methods for the simulation of loss-of-coolant accidents in nuclear reactors

    NASA Astrophysics Data System (ADS)

    Seguin, Nicolas

    2014-05-01

    In view of the simulation of the water flows in pressurized water reactors (PWR), many models are available in the literature and their complexity deeply depends on the required accuracy, see for instance [1]. The loss-of-coolant accident (LOCA) may occur when a pipe breaks. The coolant is composed of light water in its liquid form at very high temperature and pressure (around 300 °C and 155 bar); in the event of a LOCA it flashes and instantaneously becomes vapor. A front of liquid/vapor phase transition appears in the pipes and may propagate towards the critical parts of the PWR. It is crucial to propose models that are accurate for the whole phenomenon, but also sufficiently robust to obtain relevant numerical results. Due to the application we have in mind, a complete description of the two-phase flow (with all the bubbles, droplets, interfaces…) is out of reach and irrelevant. We investigate averaged models, based on the use of void fractions for each phase, which represent the probability of presence of a phase at a given position and at a given time. The most accurate averaged model, based on the so-called Baer-Nunziato model, describes each phase separately by its own density, velocity and pressure. The two phases are coupled by non-conservative terms due to gradients of the void fractions and by source terms for mechanical relaxation, drag force and mass transfer. With appropriate closure laws, it has been proved [2] that this model complies with all the expected physical requirements: positivity of densities and temperatures, maximum principle for the void fraction, conservation of the mixture quantities, decrease of the global entropy… On the basis of this model, it is possible to derive simpler models, which can be used where the flow is still, see [3]. From the numerical point of view, we develop new Finite Volume schemes in [4], which also satisfy the requirements mentioned above. Since they are based on a partial linearization of the physical

  17. Validation of a Numerical Method for Determining Liner Impedance

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.; Jones, Michael G.; Tanner, Sharon E.; Parrott, Tony L.

    1996-01-01

    This paper reports the initial results of a test series to evaluate a method for determining the normal incidence impedance of a locally reacting acoustically absorbing liner, located on the lower wall of a duct in a grazing incidence, multi-modal, non-progressive acoustic wave environment without flow. This initial evaluation is accomplished by testing the method's ability to converge to the known normal incidence impedance of a solid steel plate, and to the normal incidence impedance of an absorbing test specimen whose impedance was measured in a conventional normal incidence tube. The method is shown to converge to the normal incidence impedance values and thus to be an adequate tool for determining the impedance of specimens in a grazing incidence, multi-modal, non-progressive acoustic wave environment for a broad range of source frequencies.

  18. Improved numerical methods for turbulent viscous recirculating flows

    NASA Technical Reports Server (NTRS)

    Turan, A.; Vandoormaal, J. P.

    1988-01-01

    The performance of discrete methods for the prediction of fluid flows can be enhanced by improving the convergence rate of solvers and by increasing the accuracy of the discrete representation of the equations of motion. This report evaluates the gains in solver performance that are available when various acceleration methods are applied. Various discretizations are also examined and two are recommended because of their accuracy and robustness. Insertion of the improved discretization and solver accelerator into a TEACH code, which has been widely applied to combustor flows, illustrates the substantial gains to be achieved.

  19. Numerical solution of 2D-vector tomography problem using the method of approximate inverse

    NASA Astrophysics Data System (ADS)

    Svetov, Ivan; Maltseva, Svetlana; Polyakova, Anna

    2016-08-01

    We propose a numerical solution of reconstruction problem of a two-dimensional vector field in a unit disk from the known values of the longitudinal and transverse ray transforms. The algorithm is based on the method of approximate inverse. Numerical simulations confirm that the proposed method yields good results of reconstruction of vector fields.

  20. Eulerian-Lagrangian numerical scheme for simulating advection, dispersion, and transient storage in streams and a comparison of numerical methods

    USGS Publications Warehouse

    Cox, T.J.; Runkel, R.L.

    2008-01-01

    Past applications of one-dimensional advection, dispersion, and transient storage zone models have almost exclusively relied on a central differencing, Eulerian numerical approximation to the nonconservative form of the fundamental equation. However, there are scenarios where this approach generates unacceptable error. A new numerical scheme for this type of modeling is presented here that is based on tracking Lagrangian control volumes across a fixed (Eulerian) grid. Numerical tests are used to provide a direct comparison of the new scheme versus nonconservative Eulerian numerical methods, in terms of both accuracy and mass conservation. Key characteristics of systems for which the Lagrangian scheme performs better than the Eulerian scheme include: nonuniform flow fields, steep gradient plume fronts, and pulse and steady point source loadings in advection-dominated systems. A new analytical derivation is presented that provides insight into the loss of mass conservation in the nonconservative Eulerian scheme. This derivation shows that loss of mass conservation in the vicinity of spatial flow changes is directly proportional to the lateral inflow rate and the change in stream concentration due to the inflow. While the nonconservative Eulerian scheme has clearly worked well for past published applications, it is important for users to be aware of the scheme's limitations. © 2008 ASCE.
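
    A hedged sketch of the contrast drawn above: a grid-based central-difference update of the advective term versus a Lagrangian back-tracking (semi-Lagrangian) update across a fixed grid, both for 1D advection of a sharp front. Explicit time stepping, a periodic domain, and linear interpolation are simplifications chosen for brevity and are not the scheme of the paper.

```python
import numpy as np

def step_central(c, u, dt, dx):
    """Explicit Eulerian step with central differencing of u*dc/dx; oscillatory
    and unstable for advection-dominated transport of steep fronts."""
    return c - u * dt * (np.roll(c, -1) - np.roll(c, 1)) / (2 * dx)

def step_semi_lagrangian(c, u, dt, dx):
    """Trace each node back along the flow and linearly interpolate the old
    concentration at the departure point (periodic domain)."""
    n = c.size
    x_dep = (np.arange(n) * dx - u * dt) % (n * dx)
    i = (x_dep / dx).astype(int) % n
    f = x_dep / dx - (x_dep / dx).astype(int)
    return (1 - f) * c[i] + f * c[(i + 1) % n]

n, dx, u, dt = 200, 1.0, 1.0, 0.5               # Courant number 0.5
c0 = np.where(np.arange(n) < 50, 1.0, 0.0)      # sharp plume front
c_eul, c_lag = c0.copy(), c0.copy()
for _ in range(100):
    c_eul = step_central(c_eul, u, dt, dx)
    c_lag = step_semi_lagrangian(c_lag, u, dt, dx)
print("central-difference range:", c_eul.min(), c_eul.max())   # large over/undershoot
print("semi-Lagrangian range:   ", c_lag.min(), c_lag.max())   # stays within [0, 1]
```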

  1. A feasibility study of UHPLC-HRMS accurate-mass screening methods for multiclass testing of organic contaminants in food.

    PubMed

    Pérez-Ortega, Patricia; Lara-Ortega, Felipe J; García-Reyes, Juan F; Gilbert-López, Bienvenida; Trojanowicz, Marek; Molina-Díaz, Antonio

    2016-11-01

    The feasibility of accurate-mass multi-residue screening methods using liquid chromatography high-resolution mass spectrometry (UHPLC-HRMS) with time-of-flight mass spectrometry has been evaluated, including over 625 multiclass food contaminants as a case study. Aspects such as the selectivity and confirmation capability provided by HRMS with different acquisition modes (full-scan, or full-scan combined with collision-induced dissociation (CID) with no precursor ion isolation), and chromatographic separation, along with main limitations such as sensitivity or automated data processing, have been examined. Compound identification was accomplished with retention time matching and accurate mass measurements of the targeted ions for each analyte (mainly (de)protonated molecules). Compounds with the same nominal mass (isobaric species) were very frequent due to the large number of compounds included. Although 76% of database compounds were involved in isobaric groups, they were resolved in most cases (99% of these isobaric species were distinguished by retention time, resolving power, isotopic profile or fragment ions). Only three pairs could not be resolved with these tools. In-source CID fragmentation was evaluated in depth, although the results obtained in terms of information provided were not as thorough as those obtained using fragmentation experiments without precursor ion isolation (all-ion mode). The latter acquisition mode was found to be best suited for this type of large-scale screening method, instead of the classic product ion scan, as it provided excellent fragmentation information for confirmatory purposes for an unlimited number of compounds. Leaving aside the sample treatment limitations, the main weaknesses noticed are basically the relatively low sensitivity for compounds that do not ionize well by electrospray ionization, and also quantitation issues such as those produced by signal suppression due to either matrix effects from coeluting matrix or from

  2. Advancing Efficient All-Electron Electronic Structure Methods Based on Numeric Atom-Centered Orbitals for Energy Related Materials

    NASA Astrophysics Data System (ADS)

    Blum, Volker

    This talk describes recent advances of a general, efficient, accurate all-electron electronic theory approach based on numeric atom-centered orbitals; emphasis is placed on developments related to materials for energy conversion and their discovery. For total energies and electron band structures, we show that the overall accuracy is on par with the best benchmark quality codes for materials, but scalable to large system sizes (1,000s of atoms) and amenable to both periodic and non-periodic simulations. A recent localized resolution-of-identity approach for the Coulomb operator enables O (N) hybrid functional based descriptions of the electronic structure of non-periodic and periodic systems, shown for supercell sizes up to 1,000 atoms; the same approach yields accurate results for many-body perturbation theory as well. For molecular systems, we also show how many-body perturbation theory for charged and neutral quasiparticle excitation energies can be efficiently yet accurately applied using basis sets of computationally manageable size. Finally, the talk highlights applications to the electronic structure of hybrid organic-inorganic perovskite materials, as well as to graphene-based substrates for possible future transition metal compound based electrocatalyst materials. All methods described here are part of the FHI-aims code. VB gratefully acknowledges contributions by numerous collaborators at Duke University, Fritz Haber Institute Berlin, TU Munich, USTC Hefei, Aalto University, and many others around the globe.

  3. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    PubMed

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

    Differential equations are extensively used for modeling dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different order: Euler's method, the trapezoidal rule, and a Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance the computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties for the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. PMID:22376200
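
    A minimal sketch of the two-step idea for a single-parameter ODE, assuming a logistic model x' = θ x(1 − x) and simple spline smoothing in place of the paper's penalized splines; the Euler and trapezoidal variants below mirror two of the three discretizations discussed, and all names and noise levels are illustrative.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# simulate noisy observations of x' = theta * x * (1 - x), theta = 0.8
rng = np.random.default_rng(1)
theta_true, x0 = 0.8, 0.1
t = np.linspace(0.0, 10.0, 101)
x_true = x0 * np.exp(theta_true * t) / (1 - x0 + x0 * np.exp(theta_true * t))
y = x_true + 0.01 * rng.standard_normal(t.size)

# step 1: estimate the state by smoothing (a stand-in for the penalized spline fit)
xhat = UnivariateSpline(t, y, s=t.size * 0.01**2)(t)

# step 2: plug the smoothed states into a discretization formula and regress
dt = np.diff(t)
lhs = np.diff(xhat)                                   # xhat_{i+1} - xhat_i
g_euler = xhat[:-1] * (1 - xhat[:-1]) * dt            # Euler regressor
g_trap = 0.5 * (xhat[:-1] * (1 - xhat[:-1])
                + xhat[1:] * (1 - xhat[1:])) * dt     # trapezoidal regressor
theta_euler = np.sum(g_euler * lhs) / np.sum(g_euler ** 2)
theta_trap = np.sum(g_trap * lhs) / np.sum(g_trap ** 2)
print(theta_true, theta_euler, theta_trap)
```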

  4. A finite volume method for numerical grid generation

    NASA Astrophysics Data System (ADS)

    Beale, S. B.

    1999-07-01

    A novel method to generate body-fitted grids based on the direct solution for three scalar functions is derived. The solutions for the three scalar variables are obtained with a conventional finite volume method based on a physical space formulation. The grid is adapted or re-zoned to eliminate the residual error between the current solution and the desired solution, by means of an implicit grid-correction procedure. The scalar variables are re-mapped and the process is reiterated until convergence is obtained. Calculations are performed for a variety of problems by assuming combined Dirichlet-Neumann and pure Dirichlet boundary conditions involving the use of transcendental control functions, as well as functions designed to effect grid control automatically on the basis of boundary values. The use of dimensional analysis to build stable exponential functions and other control functions is demonstrated. Automatic procedures are implemented: one based on a finite difference approximation to the Christoffel terms assuming local-boundary orthogonality, and another designed to procure boundary orthogonality. The performance of the new scheme is shown to be comparable with that of conventional inverse methods when calculations are performed on benchmark problems through the application of point-by-point and whole-field solution schemes. Advantages and disadvantages of the present method are critically appraised.

  5. Evaluating numerical ODE/DAE methods, algorithms and software

    NASA Astrophysics Data System (ADS)

    Soderlind, Gustaf; Wang, Lina

    2006-01-01

    Until recently, the testing of ODE/DAE software has been limited to simple comparisons and benchmarking. The process of developing software from a mathematically specified method is complex: it entails constructing control structures and objectives, selecting iterative methods and termination criteria, choosing norms and many more decisions. Most software constructors have taken a heuristic approach to these design choices, and as a consequence two different implementations of the same method may show significant differences in performance. Yet it is common to try to deduce from software comparisons that one method is better than another. Such conclusions are not warranted, however, unless the testing is carried out under true ceteris paribus conditions. Moreover, testing is an empirical science and as such requires a formal test protocol; without it conclusions are questionable, invalid or even false. We argue that ODE/DAE software can be constructed and analyzed by proven, "standard" scientific techniques instead of heuristics. The goals are computational stability, reproducibility, and improved software quality. We also focus on different error criteria and norms, and discuss modifications to DASPK and RADAU5. Finally, some basic principles of a test protocol are outlined and applied to testing these codes on a variety of problems.

  6. SU-F-BRF-09: A Non-Rigid Point Matching Method for Accurate Bladder Dose Summation in Cervical Cancer HDR Brachytherapy

    SciTech Connect

    Chen, H; Zhen, X; Zhou, L; Zhong, Z; Pompos, A; Yan, H; Jiang, S; Gu, X

    2014-06-15

    Purpose: To propose and validate a deformable point matching scheme for surface deformation to facilitate accurate bladder dose summation for fractionated HDR cervical cancer treatment. Method: A deformable point matching scheme based on the thin plate spline robust point matching (TPS-RPM) algorithm is proposed for bladder surface registration. The surface of bladders segmented from fractional CT images is extracted and discretized with a triangular surface mesh. The deformation between the two bladder surfaces is obtained by matching the two meshes' vertices via the TPS-RPM algorithm, and the deformation vector fields (DVFs) characteristic of this deformation are estimated by B-spline approximation. Numerically, the algorithm is quantitatively compared with the Demons algorithm using five clinical cervical cancer cases by several metrics: vertex-to-vertex distance (VVD), Hausdorff distance (HD), percent error (PE), and conformity index (CI). Experimentally, the algorithm is validated on a balloon phantom with 12 surface fiducial markers. The balloon is inflated with different amounts of water, and the displacement of the fiducial markers is benchmarked as ground truth to assess the accuracy of the TPS-RPM-calculated DVFs. Results: In the numerical evaluation, the mean VVD is 3.7 (±2.0) mm after Demons, and 1.3 (±0.9) mm after TPS-RPM. The mean HD is 14.4 mm after Demons, and 5.3 mm after TPS-RPM. The mean PE is 101.7% after Demons and decreases to 18.7% after TPS-RPM. The mean CI is 0.63 after Demons, and increases to 0.90 after TPS-RPM. In the phantom study, the mean Euclidean distance of the fiducials is 7.4±3.0 mm and 4.2±1.8 mm after Demons and TPS-RPM, respectively. Conclusions: The bladder wall deformation is more accurate using the feature-based TPS-RPM algorithm than the intensity-based Demons algorithm, indicating that TPS-RPM has the potential for accurate bladder dose deformation and dose summation for multi-fractional cervical HDR brachytherapy. This work is supported in part by
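
    The full TPS-RPM algorithm alternates soft correspondence estimation with thin-plate-spline updates; the hedged sketch below shows only the last ingredient, fitting a thin-plate-spline displacement field from already-matched vertex pairs with scipy's RBFInterpolator and evaluating the resulting DVF at query points. The synthetic vertices and deformation are invented for illustration.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def fit_dvf(src_vertices, dst_vertices, smoothing=0.0):
    """Fit a thin-plate-spline displacement field from matched vertex pairs
    (src -> dst); returns a callable mapping points (N, 3) to displacements (N, 3)."""
    displacements = dst_vertices - src_vertices
    return RBFInterpolator(src_vertices, displacements,
                           kernel="thin_plate_spline", smoothing=smoothing)

rng = np.random.default_rng(2)
src = rng.uniform(-1.0, 1.0, size=(200, 3))       # stand-in for fraction-1 vertices
dst = src + 0.1 * np.sin(src)                     # a smooth synthetic deformation
dvf = fit_dvf(src, dst)
print(dvf(rng.uniform(-1.0, 1.0, size=(5, 3))))   # interpolated displacement vectors
```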

  7. Applications of numerical methods to simulate the movement of contaminants in groundwater.

    PubMed Central

    Sun, N Z

    1989-01-01

    This paper reviews mathematical models and numerical methods that have been extensively used to simulate the movement of contaminants through the subsurface. The major emphasis is placed on the numerical methods for advection-dominated transport problems and inverse problems. Several mathematical models that are commonly used in field problems are listed. A variety of numerical solutions for three-dimensional models are introduced, including the multiple cell balance method, which can be considered a variation of the finite element method. The multiple cell balance method is easy to understand and convenient for solving field problems. When advective transport dominates dispersive transport, two kinds of numerical difficulties, overshoot and numerical dispersion, arise when standard finite difference and finite element methods are used. To overcome these numerical difficulties, various numerical techniques have been developed, such as upstream weighting methods and moving point methods. A complete review of these methods is given, and we also mention the problems of parameter identification, reliability analysis, and optimal experiment design that are absolutely necessary for constructing a practical model. PMID:2695327
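
    To make the overshoot/upstream-weighting discussion concrete, here is a hedged sketch of a steady 1D advection-dispersion problem discretized with either central or upstream weighting of the advective term; at a cell Peclet number above 2 the central scheme oscillates while the upstream scheme remains monotone (at the cost of extra numerical dispersion). Grid sizes and parameters are illustrative, and this is not any of the reviewed field models.

```python
import numpy as np

def steady_transport(n, dx, u, D, upstream):
    """Solve u dc/dx = D d2c/dx2 with c(0)=1, c(L)=0 on n nodes, using either
    upstream (upwind) or central weighting of the advective term (u > 0)."""
    A, b = np.zeros((n, n)), np.zeros(n)
    A[0, 0], b[0] = 1.0, 1.0                 # inlet boundary condition
    A[-1, -1], b[-1] = 1.0, 0.0              # outlet boundary condition
    for i in range(1, n - 1):
        A[i, i - 1] += D / dx**2             # dispersion stencil
        A[i, i] += -2.0 * D / dx**2
        A[i, i + 1] += D / dx**2
        if upstream:                          # -u (c_i - c_{i-1}) / dx
            A[i, i] += -u / dx
            A[i, i - 1] += u / dx
        else:                                 # -u (c_{i+1} - c_{i-1}) / (2 dx)
            A[i, i + 1] += -u / (2 * dx)
            A[i, i - 1] += u / (2 * dx)
    return np.linalg.solve(A, b)

n, dx, u, D = 41, 1.0, 1.0, 0.1              # cell Peclet number u*dx/D = 10
c_up = steady_transport(n, dx, u, D, upstream=True)
c_cd = steady_transport(n, dx, u, D, upstream=False)
print("upstream  min/max:", c_up.min(), c_up.max())   # monotone, within [0, 1]
print("central   min/max:", c_cd.min(), c_cd.max())   # oscillates (overshoot)
```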

  8. Dynamic interaction numerical models in the time domain based on the high performance scaled boundary finite element method

    NASA Astrophysics Data System (ADS)

    Li, Jianbo; Liu, Jun; Lin, Gao

    2013-12-01

    Consideration of structure-foundation-soil dynamic interaction is a basic requirement in the evaluation of the seismic safety of nuclear power facilities. An efficient and accurate dynamic interaction numerical model in the time domain has become an important topic of current research. In this study, the scaled boundary finite element method (SBFEM) is improved for use as an effective numerical approach with good application prospects. This method has several advantages, including dimensionality reduction and the accuracy of the radial analytical solution, and, unlike other boundary element methods, it does not require a fundamental solution. This study focuses on establishing a high-performance scaled boundary finite element interaction analysis model in the time domain based on the acceleration unit-impulse response matrix, in which several new solution techniques, such as a dimensionless method to solve the interaction force, are applied to improve the numerical stability with actual soil parameters and reduce the amount of calculation. Finally, the feasibility of the time domain methods is illustrated by the response of the nuclear power structure, and the accuracy of the algorithms is verified dynamically by comparison with the refinement of a large-scale viscoelastic soil model.

  9. The space-time solution element method: A new numerical approach for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Scott, James R.; Chang, Sin-Chung

    1995-01-01

    This paper is one of a series of papers describing the development of a new numerical method for the Navier-Stokes equations. Unlike conventional numerical methods, the current method concentrates on the discrete simulation of both the integral and differential forms of the Navier-Stokes equations. Conservation of mass, momentum, and energy in space-time is explicitly provided for through a rigorous enforcement of both the integral and differential forms of the governing conservation laws. Using local polynomial expansions to represent the discrete primitive variables on each cell, fluxes at cell interfaces are evaluated and balanced using exact functional expressions. No interpolation or flux limiters are required. Because of the generality of the current method, it applies equally to the steady and unsteady Navier-Stokes equations. In this paper, we generalize and extend the authors' 2-D, steady state implicit scheme. A general closure methodology is presented so that all terms up through a given order in the local expansions may be retained. The scheme is also extended to nonorthogonal Cartesian grids. Numerous flow fields are computed and results are compared with known solutions. The high accuracy of the scheme is demonstrated through its ability to accurately resolve developing boundary layers on coarse grids. Finally, we discuss applications of the current method to the unsteady Navier-Stokes equations.

  10. Numerical design method for thermally loaded plate-cylinder intersections

    SciTech Connect

    Baldur, R.; Laberge, C.A.; Lapointe, D. )

    1988-11-01

    This paper is an extension of work on stresses in corner radii described by the authors previously. Whereas the original study concerned itself with pressure effects only and the second reference gave the initial version of the work dealing with thermal effects, this report gives more recent results concerning specifically thermal loads. As before, the results are limited to inside corner radii between cylinders and flat head closures. Similarly, the analysis is based on a systematic series of finite element calculations with the significant parameters covering the field of useful design boundaries. The results are condensed into a rapid method for the determination of peak stresses needed for performing fatigue analysis in pressure vessels subjected to a significant, variable thermal load. The paper takes into account the influence of the film coefficient, temporal temperature variations, and material properties. A set of coefficients provides a convenient method of stress evaluation suitable for design purposes.

  11. Numerical optimization methods for critical currents in superconductors

    NASA Astrophysics Data System (ADS)

    Kimmel, Gregory; Sadovskyy, Ivan; Koshelev, Alex; Glatz, Andreas

    In this work, I present optimization methods for maximizing the critical current in high-temperature superconductors for energy applications. The critical current in the presence of an external magnetic field is mostly defined by the pinning landscape (pinscape) within the superconductor, which prevents magnetic vortices from moving and therefore increases the critical current. Our approach is to generate different pinscapes and obtain the resulting critical current by solving large-scale time-dependent Ginzburg-Landau equations. Pinning centers can be any combination of defects, including spherical and columnar defects. The parameters controlling the pinscape are adaptively adjusted in order to find the optimal parameter set that maximizes the critical current. Here, we compare different optimization methods and discuss their performance. Work was supported by the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences.

  12. Extremal polynomials and methods of optimization of numerical algorithms

    SciTech Connect

    Lebedev, V I

    2004-10-31

    Chebyshev-Markov-Bernstein-Szegö polynomials C_n(x) extremal on [-1,1] with weight functions w(x)=(1+x)^\alpha(1-x)^\beta/\sqrt{S_l(x)}, where \alpha,\beta=0,1/2 and S_l(x)=\prod_{k=1}^m(1-c_kT_{l_k}(x))>0, are considered. A universal formula for their representation in trigonometric form is presented. Optimal distributions of the nodes of the weighted interpolation and explicit quadrature formulae of Gauss, Markov, Lobatto, and Rado types are obtained for integrals with weight p(x)=w^2(x)(1-x^2)^{-1/2}. The parameters of optimal Chebyshev iterative methods reducing the error optimally by comparison with the initial error defined in another norm are determined. For each stage of the Fedorenko-Bakhvalov method iteration parameters are determined which take account of the results of the previous calculations. Chebyshev filters with weight are constructed. Iterative methods of the solution of equations containing compact operators are studied.

  13. Extremal polynomials and methods of optimization of numerical algorithms

    NASA Astrophysics Data System (ADS)

    Lebedev, V. I.

    2004-10-01

    Chebyshëv-Markov-Bernstein-Szegö polynomials C_n(x) extremal on \\lbrack -1,1 \\rbrack with weight functions w(x)=(1+x)^\\alpha(1- x)^\\beta/\\sqrt{S_l(x)} where \\alpha,\\beta=0,\\frac12 and S_l(x)=\\prod_{k=1}^m(1-c_kT_{l_k}(x))>0 are considered. A universal formula for their representation in trigonometric form is presented. Optimal distributions of the nodes of the weighted interpolation and explicit quadrature formulae of Gauss, Markov, Lobatto, and Rado types are obtained for integrals with weight p(x)=w^2(x)(1-x^2)^{-1/2}. The parameters of optimal Chebyshëv iterative methods reducing the error optimally by comparison with the initial error defined in another norm are determined. For each stage of the Fedorenko-Bakhvalov method iteration parameters are determined which take account of the results of the previous calculations. Chebyshëv filters with weight are constructed. Iterative methods of the solution of equations containing compact operators are studied.
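
    The quadrature formulae above are for generalized weights p(x)=w^2(x)(1-x^2)^{-1/2}; as a hedged illustration of the simplest member of that family (w ≡ 1), the classical Gauss-Chebyshev rule has explicit nodes and equal weights, as in the sketch below.

```python
import numpy as np

def gauss_chebyshev(f, n):
    """n-point Gauss-Chebyshev quadrature for the integral of f(x)/sqrt(1 - x^2)
    over [-1, 1]: nodes x_k = cos((2k - 1) pi / (2n)), equal weights pi/n;
    exact whenever f is a polynomial of degree <= 2n - 1."""
    k = np.arange(1, n + 1)
    x = np.cos((2 * k - 1) * np.pi / (2 * n))
    return (np.pi / n) * np.sum(f(x))

# the integral of x^4 / sqrt(1 - x^2) over [-1, 1] equals 3*pi/8
print(gauss_chebyshev(lambda x: x**4, 5), 3 * np.pi / 8)
```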

  14. Development and evaluation of a liquid chromatography-mass spectrometry method for rapid, accurate quantitation of malondialdehyde in human plasma.

    PubMed

    Sobsey, Constance A; Han, Jun; Lin, Karen; Swardfager, Walter; Levitt, Anthony; Borchers, Christoph H

    2016-09-01

    Malondialdehyde (MDA) is a commonly used marker of lipid peroxidation in oxidative stress. To provide a sensitive analytical method that is compatible with high throughput, we developed a multiple reaction monitoring-mass spectrometry (MRM-MS) approach using 3-nitrophenylhydrazine chemical derivatization, isotope-labeling, and liquid chromatography (LC) with electrospray ionization (ESI)-tandem mass spectrometry to accurately quantify MDA in human plasma. A stable isotope-labeled internal standard was used to compensate for ESI matrix effects. The assay is linear (R(2)=0.9999) over a 20,000-fold concentration range with a lower limit of quantitation of 30 fmol (on-column). Intra- and inter-run coefficients of variation (CVs) were <2% and ∼10%, respectively. The derivative was stable for >36 h at 5°C. Standards spiked into plasma had recoveries of 92-98%. When compared to a common LC-UV method, the LC-MS method found near-identical MDA concentrations. A pilot project to quantify MDA in patient plasma samples (n=26) in a study of major depressive disorder with winter-type seasonal pattern (MDD-s) confirmed known associations between MDA concentrations and obesity (p<0.02). The LC-MS method provides high sensitivity and high reproducibility for quantifying MDA in human plasma. The simple sample preparation and rapid analysis time (5× faster than LC-UV) offer high throughput for large-scale clinical applications. PMID:27437618
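
    As a hedged illustration of the quantitation step behind such an assay (not the authors' data), the sketch below fits a linear calibration of the analyte-to-internal-standard peak-area ratio against spiked concentration and back-calculates an unknown; all numbers are invented.

```python
import numpy as np

# calibration standards: spiked concentration vs. peak-area ratio
# (analyte area / isotope-labeled internal-standard area); illustrative values only
conc  = np.array([0.05, 0.1, 0.5, 1.0, 5.0, 10.0])
ratio = np.array([0.012, 0.024, 0.121, 0.240, 1.205, 2.410])

slope, intercept = np.polyfit(conc, ratio, 1)          # linear calibration fit
r_squared = np.corrcoef(conc, ratio)[0, 1] ** 2
print(f"slope={slope:.4f}  intercept={intercept:.4f}  R^2={r_squared:.5f}")

def quantify(sample_ratio):
    """Back-calculate an unknown concentration from its measured area ratio."""
    return (sample_ratio - intercept) / slope

spiked, qc_ratio = 2.0, 0.49                            # hypothetical QC sample
measured = quantify(qc_ratio)
print(f"measured={measured:.3f}  recovery={100 * measured / spiked:.1f}%")
```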

  15. Scalable implementations of accurate excited-state coupled cluster theories: application of high-level methods to porphyrin based systems

    SciTech Connect

    Kowalski, Karol; Krishnamoorthy, Sriram; Olson, Ryan M.; Tipparaju, Vinod; Apra, Edoardo

    2011-11-30

    The development of reliable tools for excited-state simulations is emerging as an extremely powerful computational chemistry tool for understanding complex processes in the broad class of light-harvesting systems and optoelectronic devices. Over the last few years we have been developing equation-of-motion coupled cluster (EOMCC) methods capable of tackling these problems. In this paper we discuss the parallel performance of EOMCC codes which provide an accurate description of the excited-state correlation effects. Two aspects are discussed in detail: (1) a new algorithm for the iterative EOMCC methods based on novel task scheduling algorithms, and (2) parallel algorithms for the non-iterative methods describing the effect of triply excited configurations. We demonstrate that the most computationally intensive non-iterative part can take advantage of 210,000 cores of the Cray XT5 system at OLCF. In particular, we demonstrate the importance of non-iterative many-body methods for achieving experimental levels of accuracy for several porphyrin-based systems.

  16. Accurate, efficient, and scalable parallel simulation of mesoscale electrostatic/magnetostatic problems accelerated by a fast multipole method

    NASA Astrophysics Data System (ADS)

    Jiang, Xikai; Karpeev, Dmitry; Li, Jiyuan; de Pablo, Juan; Hernandez-Ortiz, Juan; Heinonen, Olle

    Boundary integrals arise in many electrostatic and magnetostatic problems. In computational modeling of these problems, although the integral is performed only on the boundary of a domain, its direct evaluation needs O(N^2) operations, where N is the number of unknowns on the boundary. The O(N^2) scaling impedes a wider usage of the boundary integral method in the scientific and engineering communities. We have developed a parallel computational approach that utilizes the Fast Multipole Method to evaluate the boundary integral in O(N) operations. To demonstrate the accuracy, efficiency, and scalability of our approach, we consider two test cases. In the first case, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space using a hybrid finite element-boundary integral method. In the second case, we solve an electrostatic problem involving the polarization of dielectric objects in free space using the boundary element method. The results from the test cases show that our parallel approach can enable highly efficient and accurate simulations of mesoscale electrostatic/magnetostatic problems. Computing resources were provided by Blues, a high-performance cluster operated by the Laboratory Computing Resource Center at Argonne National Laboratory. Work at Argonne was supported by U.S. DOE, Office of Science under Contract No. DE-AC02-06CH11357.

  17. Ion chromatography as highly suitable method for rapid and accurate determination of antibiotic fosfomycin in pharmaceutical wastewater.

    PubMed

    Zeng, Ping; Xie, Xiaolin; Song, Yonghui; Liu, Ruixia; Zhu, Chaowei; Galarneau, Anne; Pic, Jean-Stéphane

    2014-01-01

    A rapid and accurate ion chromatography (IC) method (limit of detection as low as 0.06 mg L(-1)) for determining the fosfomycin concentration in pharmaceutical industrial wastewater was developed. This method was compared with high performance liquid chromatography determination (with a high detection limit of 96.0 mg L(-1)) and with ultraviolet spectrometry after reaction with alizarin (difficult to perform in colored solutions). The accuracy of the IC method was established in the linear range of 1.0-15.0 mg L(-1) and a linear correlation was found with a correlation coefficient of 0.9998. The recoveries of fosfomycin from industrial pharmaceutical wastewater at spiking concentrations of 2.0, 5.0 and 8.0 mg L(-1) ranged from 81.91 to 94.74%, with a relative standard deviation (RSD) from 1 to 4%. The recoveries from the effluent of a sequencing batch reactor treating fosfomycin with activated sludge, at spiking concentrations of 5.0, 8.0 and 10.0 mg L(-1), ranged from 98.25 to 99.91%, with an RSD from 1 to 2%. The developed IC procedure provides a rapid, reliable and sensitive method for the determination of fosfomycin concentration in industrial pharmaceutical wastewater and in samples containing complex components. PMID:24845315

  18. Numerical simulation of fluid-structure interactions with stabilized finite element method

    NASA Astrophysics Data System (ADS)

    Sváček, Petr

    2016-03-01

    This paper is concerned with the interaction of incompressible flow with a flexibly supported airfoil. The bending and torsion modes are considered. The problem is described mathematically. The numerical method is based on the finite element method. A combination of the streamline-upwind/Petrov-Galerkin and pressure-stabilizing/Petrov-Galerkin methods is used for the stabilization of the finite element method. Numerical results for a three-dimensional problem of flow over an airfoil are shown.

  19. Accurate Kohn-Sham ionization potentials from scaled-opposite-spin second-order optimized effective potential methods.

    PubMed

    Śmiga, Szymon; Della Sala, Fabio; Buksztel, Adam; Grabowski, Ireneusz; Fabiano, Eduardo

    2016-08-15

    One important property of Kohn-Sham (KS) density functional theory is the exact equality of the energy of the highest occupied KS orbital (HOMO) with the negative ionization potential of the system. This exact feature is out of reach for standard density-dependent semilocal functionals. Conversely, accurate results can be obtained using orbital-dependent functionals in the optimized effective potential (OEP) approach. In this article, we investigate the performance, in this context, of some advanced OEP methods, with special emphasis on the recently proposed scaled-opposite-spin OEP functional. Moreover, we analyze the impact of the so-called HOMO condition on the final quality of the HOMO energy. Results are compared to reference data obtained at the CCSD(T) level of theory. © 2016 Wiley Periodicals, Inc. PMID:27357413

  20. The multiscale coarse-graining method. XI. Accurate interactions based on the centers of charge of coarse-grained sites

    SciTech Connect

    Cao, Zhen; Voth, Gregory A.

    2015-12-28

    It is essential to be able to systematically construct coarse-grained (CG) models that can efficiently and accurately reproduce key properties of higher-resolution models such as all-atom. To fulfill this goal, a mapping operator is needed to transform the higher-resolution configuration to a CG configuration. Certain mapping operators, however, may lose information related to the underlying electrostatic properties. In this paper, a new mapping operator based on the centers of charge of CG sites is proposed to address this issue. Four example systems are chosen to demonstrate this concept. Within the multiscale coarse-graining framework, CG models that use this mapping operator are found to better reproduce the structural correlations of atomistic models. The present work also demonstrates the flexibility of the mapping operator and the robustness of the force matching method. For instance, important functional groups can be isolated and emphasized in the CG model.
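
    A hedged sketch of the mapping-operator idea only (not the paper's multiscale coarse-graining/force-matching machinery): a CG site position is taken as the |charge|-weighted average of its constituent atom coordinates, contrasted with the usual center-of-mass mapping; the group, charges, and masses are invented for illustration, and the |q| weighting is an assumption of this sketch.

```python
import numpy as np

def map_site(coords, weights):
    """Map atom coordinates (n, 3) of one CG group to a single site position
    using the given (absolute-valued) weights."""
    w = np.abs(np.asarray(weights, dtype=float))
    return (w[:, None] * coords).sum(axis=0) / w.sum()

# a hypothetical three-atom group with partial charges and masses
coords  = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])
charges = np.array([-0.8, 0.4, 0.4])
masses  = np.array([16.0, 1.0, 1.0])

print("center of charge:", map_site(coords, charges))
print("center of mass:  ", map_site(coords, masses))
```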